Artificial Intelligence | News | Insights | AiThority

Synopsys Introduces New embARC ML Inference Software Library for Power-Efficient Neural Networks

Synopsys, Inc. announced the availability of the new embARC Machine Learning Inference software library to facilitate development of power-efficient neural network system-on-chip (SoC) designs incorporating Synopsys’ DesignWare® ARC® EM and HS DSP Processor IP. The embARC Machine Learning Inference (MLI) software library provides developers with optimized functions to implement neural network layer types, significantly reducing processor cycle counts for applications that require low power and area, such as voice detection, speech recognition, and sensor data processing. The embARC MLI software library is available through embARC.org, a dedicated website that gives software developers centralized access to free and open-source software, drivers, operating systems, and middleware supporting ARC processors.


“To provide our customers with an ultra-low power AI solution for voice triggering and recognition, we need power- and area-efficient processor IP like ARC EM DSP processors,” said Albert Liu, founder and chief executive officer at Kneron. “By offering the embARC Machine Learning Inference software library, Synopsys gives SoC developers the fundamental kernels needed to quickly implement machine learning algorithms on ARC-based designs.”



The embARC MLI software library supports ARC EMxD and HS4xD processors and provides a set of essential kernels for efficient inference of small- and mid-sized machine learning models. It enables the efficient implementation of operations such as convolutions, long short-term memory (LSTM) cells, pooling, activation functions such as rectified linear units (ReLU), and data routing operations including padding, transposing, and concatenation, while reducing power and memory footprint. For example, low-power neural network benchmarks such as CIFAR-10 running on an ARC EM9D processor can achieve up to a 4X reduction in cycle count compared to competitive processors in the same class. Additionally, the MLI library provides an average 3–5X performance improvement across a wide range of neural network layers, such as depth-wise 2D convolution, fully connected, basic RNN cells, and LSTM cells, rising to 16X for 2D convolution layers.
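To illustrate the kind of low-level kernel such a library supplies, the sketch below implements an in-place ReLU over a Q15 fixed-point (fx16) activation buffer, the data format commonly used on fixed-point DSPs like the ARC EM family. This is a plain-C illustration only: the function name `relu_fx16` and its signature are hypothetical and are not the actual embARC MLI API, which an optimized library would additionally vectorize for the target DSP.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative ReLU kernel over a Q15 (int16_t) activation buffer.
   ReLU clamps negative activations to zero and leaves non-negative
   values unchanged; here it is applied in place. */
static void relu_fx16(int16_t *data, size_t len) {
    for (size_t i = 0; i < len; ++i) {
        if (data[i] < 0) {
            data[i] = 0;
        }
    }
}
```

A production kernel would operate on a tensor descriptor (shape, element type, fixed-point position) rather than a raw buffer, which is what lets one library call cover the many layer configurations the article lists.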

“Power consumption and area are critical considerations for embedded machine learning functionality in edge devices,” said John Koeter, vice president of marketing for IP at Synopsys. “By enabling broad classes of neural networks to run on power-efficient ARC EM and HS DSP processors, Synopsys is expanding the set of ARC processors that developers can choose to create their energy-efficient AI designs.”

