
Deep-Tech Startup Applied Brain Research Inc. Extends Battery Life with Ultra-Low-Power AI Algorithm

Applied Brain Research (ABR) announces a new algorithm that enables ultra-low-power AI speech, vision, and signal-processing systems for always-on and edge-AI applications, extending battery life while improving accuracy.

ABR’s announcement demonstrates the potential to realize ultra-low-power instantiations of a large class of algorithms that learn patterns in data spanning extraordinarily long intervals of time.


Current algorithms, such as Long Short-Term Memory networks (LSTMs), can learn and predict sequences of data over long periods of time, making it possible for neural networks to process data such as speech, video, and control signals.


Present in most smart speakers and voice recognition systems, LSTMs are said to be the most financially valuable AI algorithm ever invented (Forbes).


LSTMs fail when tasked with learning temporal dependencies in signals that span 1,000 time-steps or more, making them difficult to scale and limiting their commercial application.



This new algorithm – the Legendre Memory Unit (LMU) – is a neuromorphic algorithm for continuous-time memory that can learn temporal dependencies over millions of time-steps or more. The algorithm is a new RNN architecture that enables networks of artificial neurons to classify and predict temporal patterns far more efficiently than LSTMs.

  • The LMU is mathematically derived to implement the continuous-time dynamical system that optimally maintains a scale-invariant representation of time.
  • The ABR LMU obtains the best-known results on permuted sequential MNIST, a difficult RNN benchmark, and has been shown to scale to input sequences spanning hundreds of millions of time-steps.
  • The resulting patterns in spiking activity have also been linked to neural “time cells” observed in the striatum and medial prefrontal cortex in mammalian brains.
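The "mathematically derived" dynamical system behind the LMU can be made concrete. In the published derivation (Voelker et al., NeurIPS 2019), the memory is a fixed linear system dm/dt = Am + Bu whose d-dimensional state holds the Legendre coefficients of the last θ seconds of the input. The sketch below builds those matrices and Euler-integrates them; the function names and the simple Euler discretization are illustrative choices, not ABR's implementation.

```python
import numpy as np

def lmu_matrices(d, theta):
    """State-space (A, B) for the LMU's continuous-time memory.

    Derived so that the d-dimensional state m(t) holds the Legendre
    coefficients of the input over a sliding window of length theta:
        dm/dt = A m + B u
    """
    A = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            # (2i+1) * (-1 if i < j else (-1)^(i-j+1)), scaled by 1/theta
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    q = np.arange(d)
    B = ((2 * q + 1) * (-1.0) ** q).reshape(d, 1)
    return A / theta, B / theta

def run_memory(u, dt, theta, d):
    """Euler-integrate the memory over a 1-D input signal u (illustrative)."""
    A, B = lmu_matrices(d, theta)
    m = np.zeros((d, 1))
    states = []
    for u_t in u:
        m = m + dt * (A @ m + B * u_t)
        states.append(m.ravel().copy())
    return np.array(states)
```

A quick sanity check on the construction: holding the input constant at 1 for several window lengths fills the window with a constant signal, so the Legendre representation settles to [1, 0, 0, …] (only the constant basis function is needed).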

Unlike the LSTM, the LMU can be implemented using spiking neurons, thus demonstrating an algorithmic advance that is anticipated to provide leaps in efficiency for solutions to dynamical time-series problems using low-power neuromorphic devices.
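As an RNN cell, the LMU wraps this fixed linear memory with a small set of learned weights: learned encoders write a scalar signal into the memory, and a nonlinear hidden layer reads from it. A minimal single-step sketch, following the structure of the published architecture (parameter names here are illustrative, and `A_bar`/`B_bar` stand for some discretization of the continuous-time matrices):

```python
import numpy as np

def lmu_cell_step(x_t, h_prev, m_prev, params, A_bar, B_bar):
    """One step of an LMU-style recurrent cell (illustrative sketch).

    The memory m evolves under the fixed discretized linear system
    (A_bar, B_bar); only the encoders e_* and kernels W_* are learned.
    """
    e_x, e_h, e_m, W_x, W_h, W_m = params
    # Scalar write signal into the memory, computed from learned encoders.
    u_t = e_x @ x_t + e_h @ h_prev + e_m @ m_prev
    # Fixed (non-learned) linear memory update.
    m_t = A_bar @ m_prev + B_bar * u_t
    # Nonlinear hidden state reads the input, its own history, and the memory.
    h_t = np.tanh(W_x @ x_t + W_h @ h_prev + W_m @ m_t)
    return h_t, m_t
```

Because the memory update is linear and not learned, gradients flow through it without the vanishing/exploding behavior that limits LSTMs on very long sequences, and the same update maps naturally onto spiking or other low-power substrates.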

