Artificial Intelligence | News | Insights | AiThority

Floadia Develops Memory Technology That Retains Ultra-high-precision Analog Data for Extended Periods

Floadia Corporation, headquartered in Kodaira-shi, Tokyo, has developed a prototype 7-bit-per-cell flash memory chip that retains analog data for 10 years at 150 degrees Celsius, achieved through a new memory cell structure and control method. With the existing cell structure, characteristic drift and variation caused by charge leakage were significant problems, limiting data retention to only about 100 seconds.




Floadia will apply the memory technology to a chip that performs AI (artificial intelligence) inference operations with dramatically lower power consumption. The chip is based on an architecture called Computing in Memory (CiM), which stores neural network weights in non-volatile memory and executes a large number of multiply-accumulate calculations in parallel by passing current through the memory array. CiM is attracting worldwide attention as an AI accelerator for edge computing environments because it avoids moving large volumes of data between memory and processor, and therefore consumes far less power than conventional AI accelerators that perform multiply-accumulate calculations on CPUs and GPUs.
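The in-memory multiply-accumulate described above can be modeled in a few lines. In a CiM array, each weight is stored as a cell conductance; applying input voltages to the rows makes each column's output current equal the dot product of inputs and weights (Kirchhoff's current law sums the per-cell currents). The sketch below is an idealized software model of that behavior, not Floadia's actual circuit; all names and values are illustrative.

```python
def cim_column_current(input_voltages, cell_conductances):
    """Current on one bit line: I = sum(V_i * G_i), i.e. a dot product."""
    return sum(v * g for v, g in zip(input_voltages, cell_conductances))

def cim_array_mac(input_voltages, weight_matrix):
    """Evaluate every column; in hardware all columns settle in parallel,
    which is where the energy advantage over CPU/GPU MACs comes from."""
    return [cim_column_current(input_voltages, column)
            for column in zip(*weight_matrix)]

# Example: 3 inputs driving a 3x2 weight array (one row per input,
# one column per output neuron).
inputs = [1.0, 0.5, 0.25]
weights = [[2.0, 1.0],
           [4.0, 0.0],
           [8.0, 2.0]]
print(cim_array_mac(inputs, weights))  # [6.0, 1.5]
```

A single read of the array thus yields an entire layer's worth of dot products at once, rather than fetching each weight individually.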

This memory technology is based on SONOS-type flash memory chips developed by Floadia for integration into microcontrollers and other devices. Floadia made numerous innovations, such as optimizing the structure of the charge-trapping layer (the ONO film), to extend data retention when storing 7 bits of data per cell. A pair of cells can store up to 8 bits of neural network weight, and despite its small chip area the device achieves a multiply-accumulate performance of 300 TOPS/W, far exceeding that of existing AI accelerators.
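The article does not say how two 7-bit cells are combined into an 8-bit weight. One common scheme in CiM designs, shown here purely as a hypothetical illustration, is a differential pair: each cell holds only a non-negative level (0..127), and the signed weight is the difference between the two, giving an 8-bit signed range.

```python
# Hypothetical differential encoding of an 8-bit signed weight across two
# 7-bit analog cells (128 levels each): w = plus - minus.
# Illustration only; Floadia's actual cell-pairing scheme is not public.

LEVELS = 1 << 7  # 128 levels per 7-bit cell

def encode(weight):
    """Map a signed weight in -127..127 to a (plus, minus) cell pair."""
    if not -(LEVELS - 1) <= weight <= LEVELS - 1:
        raise ValueError("weight out of 8-bit signed range")
    return (weight, 0) if weight >= 0 else (0, -weight)

def decode(plus, minus):
    """Recover the weight as the difference of the two cell levels."""
    return plus - minus

assert decode(*encode(100)) == 100
assert decode(*encode(-37)) == -37
```

A differential pair has the added benefit that common-mode drift affecting both cells equally cancels out in the subtraction, which matters when analog levels must stay stable for years.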


