Artificial Intelligence | News | Insights | AiThority

ROHM Develops Ultra-Low-Power, On-Device Learning Edge AI Chip

AI at the edge enables real-time failure prediction without requiring a cloud server

ROHM Semiconductor announced it has developed an on-device learning AI chip (an SoC with an on-device learning AI accelerator) for edge computing endpoints in the IoT field. The new chip uses artificial intelligence to predict failures (predictive failure detection) in electronic devices equipped with motors and sensors, in real time and with ultra-low power consumption.


AI chips generally perform both learning and inference to deliver artificial intelligence functions. Learning requires capturing a large amount of data, compiling it into a database, and updating that database as needed, so an AI chip that performs learning demands substantial computing power and necessarily consumes a large amount of power. Until now, it has therefore been difficult to develop low-power AI chips that can learn in the field, as needed for edge computers and endpoints in an efficient IoT ecosystem.

Based on an ‘on-device learning algorithm’ developed by Professor Matsutani of Keio University, ROHM’s newly developed AI chip consists mainly of an AI accelerator (AI-dedicated hardware circuit) and ROHM’s high-efficiency 8-bit CPU, ‘tinyMicon MatisseCORE’. Combining the 20,000-gate ultra-compact AI accelerator with the high-performance CPU enables learning and inference at an ultra-low power consumption of just a few tens of mW (1/1,000 that of conventional AI chips capable of learning). Because ‘anomaly detection results’ (an anomaly score) can be output numerically for unknown input data at the site where the equipment is installed, without involving a cloud server, real-time failure prediction becomes possible in a wide range of applications.

Going forward, ROHM plans to incorporate the AI accelerator used in this AI chip into various IC products for motors and sensors. Commercialization is scheduled to start in 2023, with mass production planned in 2024.


Professor Hiroki Matsutani, Dept. of Information and Computer Science, Keio University, Japan
“As IoT technologies such as 5G communication and digital twins advance, cloud computing will be required to evolve, but processing all the data on cloud servers is not always the best solution in terms of load, cost, and power consumption. With the ‘on-device learning’ we research and the ‘on-device learning algorithms’ we have developed, we aim to achieve more efficient data processing on the edge side to build a better IoT ecosystem. Through this collaboration, ROHM has shown us the path to commercialization in a cost-effective manner by further advancing on-device learning circuit technology. I expect the prototype AI chip to be incorporated into ROHM’s IC products in the near future.”


tinyMicon MatisseCORE (Matisse: Micro arithmetic unit for tiny size sequencer) is ROHM’s proprietary 8-bit CPU, developed to make analog ICs more intelligent for the IoT ecosystem. An instruction set optimized for embedded applications, together with the latest compiler technology, delivers fast arithmetic processing with a smaller chip area and program code size. High-reliability applications are also supported, including those requiring qualification under the ISO 26262 and ASIL-D vehicle functional-safety standards. In addition, a proprietary on-chip ‘real-time debugging function’ keeps the debugging process from interfering with program operation, so debugging can be performed while the application is running.

Detail of ROHM’s AI Chip (SoC with On-Device Learning AI Accelerator)
The prototype AI chip (prototype part No. BD15035) is based on an on-device learning algorithm (a three-layer neural network AI circuit) developed by Professor Matsutani of Keio University. For commercialization, ROHM downsized the AI circuit from 5 million gates to just 20,000 (0.4% of the original size), reconfiguring it as a proprietary AI accelerator (AxlCORE-ODL) controlled by ROHM’s high-efficiency 8-bit CPU tinyMicon MatisseCORE, which enables AI learning and inference at an ultra-low power consumption of just a few tens of mW. This makes it possible to output ‘anomaly detection results’ numerically for unknown input data patterns (e.g., acceleration, current, brightness, voice) at the site where the equipment is installed, without involving a cloud server or requiring prior AI learning, allowing real-time failure prediction (detection of signs of impending failure) by on-site AI while keeping cloud server and communication costs low.
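The general idea behind this kind of on-device anomaly detection can be illustrated with a minimal sketch: a small three-layer (input–hidden–output) autoencoder is trained online only on ‘normal’ sensor samples, and the reconstruction error of each new sample then serves as a numeric anomaly score. This is purely illustrative — the layer sizes, learning rule, and `TinyAutoencoder` class below are assumptions for the sketch, not ROHM’s or Professor Matsutani’s actual circuit or algorithm.

```python
import numpy as np

class TinyAutoencoder:
    """Minimal 3-layer autoencoder for online anomaly scoring (illustrative).

    Trained incrementally on normal data only; the mean squared
    reconstruction error of a new sample is its anomaly score.
    """

    def __init__(self, n_in, n_hidden, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)
        self.lr = lr

    def _forward(self, x):
        h = np.tanh(x @ self.W1 + self.b1)   # hidden activation
        y = h @ self.W2 + self.b2            # linear reconstruction
        return h, y

    def train_step(self, x):
        """One online gradient step on L = 0.5 * ||y - x||^2."""
        h, y = self._forward(x)
        err = y - x                          # dL/dy
        dh = (err @ self.W2.T) * (1.0 - h ** 2)  # backprop through tanh
        self.W2 -= self.lr * np.outer(h, err)
        self.b2 -= self.lr * err
        self.W1 -= self.lr * np.outer(x, dh)
        self.b1 -= self.lr * dh

    def anomaly_score(self, x):
        """Mean squared reconstruction error: higher = more anomalous."""
        _, y = self._forward(x)
        return float(np.mean((y - x) ** 2))

# Train on simulated 'normal' vibration-like readings, then score samples.
rng = np.random.default_rng(1)
ae = TinyAutoencoder(n_in=8, n_hidden=4)
for x in rng.normal(0.0, 0.1, (2000, 8)):
    ae.train_step(x)

score_normal = ae.anomaly_score(rng.normal(0.0, 0.1, 8))
score_anomalous = ae.anomaly_score(np.full(8, 2.0))  # far outside training range
```

After training, `score_anomalous` comes out much larger than `score_normal`, which is the numeric ‘anomaly score’ behavior the article describes; on the actual chip this learning and scoring runs in the dedicated accelerator hardware rather than in software.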

For evaluating the AI chip, ROHM offers an evaluation board equipped with Arduino-compatible terminals that can be fitted with an expansion sensor board for connection to an MCU (Arduino). Wireless communication modules (Wi-Fi and Bluetooth®), along with 64 kbit of EEPROM memory, are mounted on the board. By connecting sensors and other units and attaching the board to the target equipment, the effects of the AI chip can be verified on a display. The evaluation board is loaned out by ROHM Sales; please contact ROHM Sales for more information.


