
AIStorm’s AI-in-Imager Solutions Use Tower Semiconductor’s Hi-K VIA Capacitor Memory To Enable High-Density Imager, Always-On Processing

AIStorm's charge-domain AI-in-sensor technology uses Tower Semiconductor's Hi-K VIA capacitor memory to enable best-in-class, real-time machine learning for the $7B edge imager market.

AIStorm and Tower Semiconductor announced that AIStorm’s new AI-in-imager products will feature AIStorm’s electron multiplication architecture and Tower’s Hi-K VIA capacitor memory, instead of digital calculations, to perform AI computation at the pixel level. This saves on the silicon real estate, multiple die packaging costs, and power required by competing digital systems—and eliminates the need for input digitization. The Hi-K VIA capacitors reside in the metal layers, allowing the AI to be built directly into the pixel matrix without any compromise on pixel density or size.



Always-on imagers draw negligible power in idle mode until they detect a specific wake-up situation, such as a fingerprint, face, action, behavior, gesture, person, or driving event. After wake-up, they perform machine learning within the device itself, eliminating the need for additional components or co-packaged processors.
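As a rough conceptual illustration of that flow (the class and function names below are hypothetical, not AIStorm's firmware or API), an always-on loop might look something like this Python sketch:

import random
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "face", "gesture", "driving_event"
    confidence: float

class InSensorImager:
    """Idealized imager whose pixel matrix is also the AI processor (hypothetical)."""

    TRIGGERS = ["fingerprint", "face", "gesture", "driving_event"]

    def watch(self) -> str:
        # In hardware, the analog pixel matrix itself would evaluate a tiny
        # wake-up network at negligible idle power; here we just simulate a trigger.
        return random.choice(self.TRIGGERS)

    def classify(self, trigger: str) -> Detection:
        # Post-wake-up inference also runs inside the device, so no frame is
        # digitized and shipped to an external AI processor or memory.
        return Detection(label=trigger, confidence=round(random.uniform(0.7, 0.99), 2))

def always_on_loop(imager: InSensorImager, events: int = 3) -> None:
    for _ in range(events):                 # a real device would loop indefinitely
        trigger = imager.watch()            # idle until a wake-up situation is detected
        result = imager.classify(trigger)   # on-sensor ML after wake-up
        print(f"woke on {result.label!r} with confidence {result.confidence}")

if __name__ == "__main__":
    always_on_loop(InSensorImager())

The point of the sketch is that both the wake-up check and the post-wake-up classification stay inside the imager; nothing in the idle path digitizes frames or talks to an external processor.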

The edge AI market for deep-learning chips is expected to grow from $6.72 billion in 2022 to $66.3 billion by 2025. Next-generation handsets, IoT devices, cameras, laptops, VR, gaming devices, and wearables will directly benefit from AIStorm's AI-in-sensor technology, which delivers superior cost and performance compared to other edge AI solutions on the market.



“This new imager technology opens up a whole new avenue of always-on functionality. Instead of periodically taking a picture and interfacing with an external AI processor through complex digitization, transport and memory schemes, AIStorm’s pixel matrix is itself the processor & memory. No other technology can do that,” said Dr. Avi Strum, SVP of the sensors and displays business unit at Tower Semiconductor.

In existing solutions, AI processors usually reside outside the pixel matrix, which is why always-on imaging solutions must continuously detect pixel changes and forward digital information to memory and an AI subsystem located outside the imager. The result is numerous false alerts and high power consumption. Additionally, whether GPU-based or PIM-based, these systems devote significant silicon area to memory storage, which drives up their cost.

AIStorm's solution is different. Electrons are multiplied directly, rather than first being converted to digital values, using memory suspended above the silicon in the metal layers, a feature enabled by Tower Semiconductor's low-leakage VIA capacitor technology. This capability for local AI pixel coupling adds a new dimension to edge imaging, enabling an immediate, intelligent (AI) response to pixel changes for the first time.
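To make the charge-domain idea concrete, here is a simplified numerical sketch (an illustration of the principle only, not AIStorm's circuit): a multiply-accumulate is carried out by scaling and summing charge packets directly, with the weights held on the metal-layer capacitors, instead of digitizing each pixel first and multiplying integers.

# Rough numerical sketch of a charge-domain multiply-accumulate (MAC).
# A simplification for illustration only -- not AIStorm's actual circuit.

# Pixel signals expressed as electron counts (charge packets).
pixel_electrons = [1234.0, 807.0, 451.0, 962.0]

# Weights "stored" as charge on metal-layer capacitors (normalized here).
weights = [0.25, -0.10, 0.40, 0.05]

# Charge-domain MAC: each pixel's charge is scaled (electron multiplication)
# and the results are accumulated directly -- no ADC step in the path.
accumulated_charge = sum(q * w for q, w in zip(pixel_electrons, weights))

# A conventional pipeline would first digitize each pixel (quantization),
# move the digital samples to external memory, then multiply-accumulate there.
adc_codes = [round(q / 10) for q in pixel_electrons]   # crude 10-electron-per-LSB ADC
digital_mac = sum(code * 10 * w for code, w in zip(adc_codes, weights))

print(f"charge-domain MAC : {accumulated_charge:.1f} electrons")
print(f"digitize-then-MAC : {digital_mac:.1f} electrons (after quantization)")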

To complement its hardware, AIStorm has built mobile models within the MantisNet & Cheetah product families that use the direct pixel coupling of the AI matrix to offer sub-100 µW always-on operation with best-in-class latencies, and post-wake-up processing of up to 200 TOPS/W.
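As a back-of-the-envelope check on those figures (my arithmetic, not a vendor specification, and it assumes the quoted post-wake-up efficiency would hold at the always-on power level):

# Back-of-the-envelope arithmetic on the quoted figures -- not a vendor spec.
tops_per_watt = 200            # post-wake-up efficiency quoted above
always_on_power_w = 100e-6     # sub-100 uW always-on budget quoted above

ops_per_joule = tops_per_watt * 1e12
throughput_at_always_on_budget = ops_per_joule * always_on_power_w   # ops/second

print(f"{throughput_at_always_on_budget / 1e9:.0f} GOPS at 100 uW, if the same "
      f"{tops_per_watt} TOPS/W efficiency held at that power level")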

