
Enhancing System Architecture Implementation for AI Applications, Microchip Delivers Its Analog Embedded SuperFlash Technology

Microchip Technology’s SuperFlash memBrain Neuromorphic Memory Solution Provides Substantial Reduction in Compute Power to Improve AI Inference at the Edge

As artificial intelligence (AI) processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functions such as computer vision and voice recognition. Microchip Technology Inc., via its Silicon Storage Technology (SST) subsidiary, is addressing this challenge with its analog memory technology, the memBrain™ neuromorphic memory solution, which significantly reduces power consumption. Based on its industry-proven SuperFlash® technology and optimized to perform vector-matrix multiplication (VMM) for neural networks, Microchip’s analog flash memory solution improves the system-architecture implementation of VMM through an analog in-memory compute approach, enhancing AI inference at the edge.
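For context, a vector-matrix multiply is the operation every neural-network layer performs during inference: an activation vector is multiplied by a weight matrix to produce the next layer’s activations. A minimal NumPy sketch of that operation (the layer sizes and ReLU nonlinearity below are illustrative assumptions, not memBrain specifics):

```python
import numpy as np

# Illustrative only: VMM is the core operation of neural-network inference.
# Each layer multiplies an activation vector by a weight matrix; the analog
# in-memory compute approach performs this multiply-accumulate inside the
# flash array rather than streaming weights to a digital MAC unit.

rng = np.random.default_rng(0)

inputs = rng.standard_normal(512)          # activation vector from the previous layer
weights = rng.standard_normal((512, 256))  # synaptic weights (held on-chip in the memBrain case)

# One layer of inference: a single VMM followed by a nonlinearity.
outputs = np.maximum(inputs @ weights, 0)  # ReLU(x @ W)
print(outputs.shape)                       # (256,)
```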

Current neural network models may require 50 million or more synapses (weights), making it difficult to provide enough bandwidth to off-chip DRAM; this creates a bottleneck for neural network computing and drives up overall compute power. In contrast, the memBrain solution stores the synaptic weights in the on-chip floating gate, offering significant improvements in system latency. Compared with traditional digital DSP and SRAM/DRAM-based approaches, it delivers 10 to 20 times lower power and a significantly reduced overall bill of materials (BOM).
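A rough back-of-the-envelope calculation illustrates why off-chip weight storage becomes a bottleneck; the 8-bit weight width and 30-inference-per-second rate below are illustrative assumptions, not Microchip figures:

```python
# Back-of-the-envelope sketch of the off-chip weight-traffic problem.
# All numbers are assumptions for illustration, not Microchip data.

num_weights = 50_000_000      # ~50 million synaptic weights, per the article
bytes_per_weight = 1          # assume 8-bit quantized weights
inferences_per_second = 30    # assume a 30 fps vision workload

# If weights live in off-chip DRAM and must be streamed in on every inference,
# the memory traffic for weights alone is:
weight_bytes = num_weights * bytes_per_weight
dram_traffic = weight_bytes * inferences_per_second
print(f"{dram_traffic / 1e9:.1f} GB/s of DRAM reads just for weights")  # 1.5 GB/s

# Keeping the weights in the on-chip flash array removes this recurring
# transfer, which is where the latency and power savings come from.
```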


“As technology providers for the automotive, industrial and consumer markets continue to implement VMM for neural networks, our architecture helps these forward-facing solutions realize power, cost and latency benefits,” said Mark Reiten, vice president of the license division at SST. “Microchip will continue to deliver highly reliable and versatile SuperFlash memory solutions for AI applications.”


The memBrain solution is being adopted by companies looking to advance machine learning capabilities in edge devices. Because it significantly reduces power consumption, this analog in-memory compute solution is well suited to power-constrained AI applications at the edge.


“Microchip’s memBrain solution enables ultra-low-power in-memory computation for our forthcoming analog neural network processors,” said Kurt Busch, CEO of Syntiant Corp. “Our partnership with Microchip continues to offer Syntiant many critical advantages as we support pervasive machine learning for always-on applications in voice, image and other sensor modalities in edge devices.”

SST will showcase this analog memory solution and present Microchip’s memBrain product tile array-based architecture during the AI/ML session track on flash performance scaling at the 2019 Flash Memory Summit, held August 6-8 at the Santa Clara Convention Center in Santa Clara, California.

