Artificial Intelligence | News | Insights | AiThority

Flex Logix Discloses Real-World Edge AI Inference Benchmarks Showing Superior Price/Performance For All Models

Flex Logix Technologies, Inc., the leading supplier of embedded FPGA (eFPGA) IP, architecture and software, announced real-world benchmarks for its InferX X1 edge inference co-processor, showing significant price/performance advantages over Nvidia's Tesla T4 and Xavier NX when run on actual customer models. The details were presented at today's Linley Spring Processor Conference by Vinay Mehta, Flex Logix's inference technical marketing manager.

The InferX X1 has a very small die: 1/7th the area of Nvidia's Xavier NX and 1/11th the area of Nvidia's Tesla T4. Despite being so much smaller, the InferX X1 delivers latency similar to Xavier NX on YOLOv3, an open-source model that many customers plan to use. On two real customer models, InferX X1 was much faster, by as much as 10x in one case.


In terms of price/performance as measured by streaming throughput divided by die size, InferX X1 is 2-10x better than Tesla T4 and 10-30x better than Xavier NX.
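The metric above can be sketched in a few lines: normalize each chip's streaming throughput by its die area and compare the results. The throughput figures below are made up purely for illustration (the article publishes only ratios, not absolute numbers); the relative die areas follow the 1/7th and 1/11th figures given earlier.

```python
def throughput_per_area(fps: float, die_area: float) -> float:
    """Price/performance proxy: streaming throughput divided by die area."""
    return fps / die_area

# Hypothetical throughputs (frames/s); die areas in arbitrary units where
# InferX X1 = 1, Xavier NX ~= 7, Tesla T4 ~= 11 (per the article's ratios).
x1 = throughput_per_area(fps=100.0, die_area=1.0)
t4 = throughput_per_area(fps=500.0, die_area=11.0)

advantage = x1 / t4  # how many times better X1 scores on this metric
print(f"X1 advantage over T4: {advantage:.1f}x")
```

With these illustrative numbers the smaller die wins even though its absolute throughput is lower, which is the point the metric is designed to capture.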



“Customers expect that they can use performance on ResNet-50 to compare alternatives. These benchmarks demonstrate that the relative performance of an inference accelerator on one model does not apply to all models,” said Geoff Tate, CEO and co-founder of Flex Logix. “Customers should really be asking each vendor they evaluate to benchmark the model that they will use to find out the performance they will experience. We are doing this for customers now and welcome more – we can benchmark any neural network model in TensorFlow Lite or ONNX.”
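The per-model benchmarking Tate describes boils down to timing repeated single inferences on the customer's own network. A minimal sketch of such a harness is below; the `onnxruntime` usage in the comment is an assumption about how one might time an ONNX model, not Flex Logix's actual methodology, and the dummy workload stands in for a real model call.

```python
import time
from statistics import median

def benchmark_latency(run, warmup: int = 5, iters: int = 50) -> float:
    """Return the median latency, in milliseconds, of a single-inference callable."""
    for _ in range(warmup):  # warm up caches/allocators before measuring
        run()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run()
        samples.append((time.perf_counter() - t0) * 1e3)
    return median(samples)

# With onnxruntime installed, a customer model could be timed like this
# (model path and input batch are hypothetical):
#   import onnxruntime as ort
#   sess = ort.InferenceSession("customer_model.onnx")
#   name = sess.get_inputs()[0].name
#   ms = benchmark_latency(lambda: sess.run(None, {name: batch}))

# Dummy workload so the harness runs standalone:
ms = benchmark_latency(lambda: sum(range(10_000)))
print(f"median latency: {ms:.3f} ms")
```

Median (rather than mean) latency is used so occasional scheduler hiccups do not skew the result, which matters when comparing accelerators whose latencies differ by small factors.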

The InferX X1 is completing final design checks and will tape out soon, with sampling expected in Q3 2020 as both a chip and a PCIe board.

