
Deci Collaborates with Intel to Achieve 11.8x Accelerated Inference Speed at MLPerf

The Deci-Intel collaboration marks a significant step toward enabling deep learning inference at scale on CPUs

Deci, the deep learning company building the next generation of AI, announced the inference results it submitted to the open division of the MLPerf v0.7 inference benchmark. On several popular Intel CPUs, Deci’s AutoNAC (Automated Neural Architecture Construction) technology accelerated the inference speed of the well-known ResNet-50 neural network, reducing the submitted models’ latency by a factor of up to 11.8x and increasing throughput by up to 11x, all while keeping the models’ accuracy within 1% of the original.

“Billions of dollars have been spent on building dedicated AI chips, some of which are focused on computer vision inference,” says Yonatan Geifman, CEO and co-founder of Deci. “At MLPerf we demonstrated that Deci’s AutoNAC algorithmic acceleration, together with Intel’s OpenVINO toolkit, enables the use of standard CPUs for deep learning inference at scale.”


According to MLPerf rules, Deci’s goal was to reduce latency, or increase throughput, while staying within 1% of the accuracy of ResNet-50 trained on the ImageNet dataset. Deci’s optimized models reduced latency by factors ranging from 5.16x to 11.8x compared with vanilla ResNet-50. Compared with competing submissions, Deci achieved throughput per core three times higher than that of other submitters’ models.
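The speedup factors above are simple latency ratios against the vanilla baseline. A minimal sketch (the per-batch latencies below are hypothetical, not Deci’s actual measurements):

```python
def speedup(baseline_latency_ms: float, optimized_latency_ms: float) -> float:
    """Speedup factor: how many times faster the optimized model runs
    relative to the baseline, for the same workload."""
    return baseline_latency_ms / optimized_latency_ms

# Hypothetical illustrative latencies (ms per batch):
baseline = 35.4    # vanilla ResNet-50
optimized = 3.0    # optimized model
print(f"{speedup(baseline, optimized):.1f}x")  # prints "11.8x"
```

The same ratio applied to throughput (images/second) runs in the opposite direction: throughput scales up as latency comes down, which is why the latency and throughput gains reported are of similar magnitude.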


“Intel’s collaboration with Deci takes a significant step towards enabling deep learning inference on CPU, a longstanding challenge for AI practitioners across the globe,” said Guy Boudoukh from Intel AI. “Accelerating the latency of inference by a factor of 11x enables new applications and deep learning inference tasks in a real-time environment on CPU edge devices and dramatically cuts cloud costs for large scale inference scenarios.”


MLPerf gathers expert deep learning leaders to build fair and useful benchmarks for measuring training and inference performance of ML hardware, software, and services. The models submitted were optimized using Deci’s AutoNAC technology and quantized with Intel’s OpenVINO to 8-bit precision.
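The 8-bit quantization step maps a model’s 32-bit floating-point weights onto the integer range [-128, 127]. OpenVINO’s actual pipeline is far more sophisticated (per-channel scales, calibration data, fused operations), but the basic symmetric scheme can be sketched in pure Python:

```python
def quantize_int8(values):
    """Symmetric per-tensor quantization: map floats to signed 8-bit
    integers using a single scale derived from the largest magnitude."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 representation."""
    return [x * scale for x in q]

weights = [0.51, -1.27, 0.08, 0.93]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)  # close to the original weights
```

Storing and computing on 8-bit integers instead of 32-bit floats is what lets CPUs exploit wide vector instructions for inference, at the cost of the small rounding error visible in `approx`.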

Deci’s patent-pending AutoNAC technology uses machine learning to redesign any model and maximize its inference performance on any hardware, all while preserving its accuracy.
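AutoNAC itself is proprietary, but the shape of the problem it solves, choosing the fastest architecture whose accuracy stays within a tolerance of the baseline, can be illustrated with a toy search loop (the candidate architectures and scoring functions below are entirely made up):

```python
def constrained_arch_search(candidates, accuracy_fn, latency_fn,
                            baseline_acc, tolerance=0.01):
    """Toy stand-in for neural architecture search: among candidate
    architectures, pick the lowest-latency one whose accuracy stays
    within `tolerance` (e.g. 1%) of the baseline."""
    best = None
    for arch in candidates:
        if accuracy_fn(arch) >= baseline_acc - tolerance:
            if best is None or latency_fn(arch) < latency_fn(best):
                best = arch
    return best

# Hypothetical candidates and mock scoring (not real measurements):
candidates = [
    {"depth": 50, "width": 1.0},   # baseline-like
    {"depth": 26, "width": 0.5},   # smaller, still accurate
    {"depth": 18, "width": 0.25},  # fastest, but loses accuracy
]
mock_acc = {50: 0.761, 26: 0.755, 18: 0.742}
best = constrained_arch_search(
    candidates,
    accuracy_fn=lambda a: mock_acc[a["depth"]],
    latency_fn=lambda a: a["depth"] * a["width"],
    baseline_acc=0.761,
)
```

In this toy run the mid-sized candidate wins: it satisfies the 1% accuracy constraint while the smallest one does not. A real system would evaluate candidates on target hardware rather than with mock functions.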

