Applied Brain Research Inc. Shows Nengo Real-Time Spiking Deep-Learning Networks on Intel Loihi Use 38x Less Energy Than on NVIDIA Quadro K4000 GPU
Applied Brain Research Inc. (ABR) Released Today Benchmarks Showing ABR Nengo DL Deep Networks on the Intel Loihi Research Chip Are 38x More Efficient Than a Leading GPU for Real-Time Deep-Learning Inference
Applied Brain Research Inc. released a study comparing the energy efficiency of their Nengo Deep Learning toolkit (Nengo DL) running a real-time keyword-spotting deep learning network on Intel’s Loihi neuromorphic research chip against traditional hardware. The results show that Nengo DL on Loihi uses 38x less energy per inference than an architecturally identical network running on an NVIDIA Quadro K4000 GPU.
The study also compared the dynamic energy cost per inference of the same deep network on several other platforms. In each case, the Nengo DL network on Loihi consumed significantly less energy. Specifically, the NVIDIA Jetson TX1 edge GPU consumed 7.3x more energy, the Intel Xeon E5-2630 CPU 8.2x more, and the Movidius Neural Compute Stick 1.9x more.
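The reported multipliers can be summarized in a few lines of Python. This is only a restatement of the figures above, normalized to Loihi's dynamic energy per inference as the 1.0x baseline; the absolute energy values are not given in the release.

```python
# Relative dynamic energy per inference, normalized so that
# Nengo DL on Intel Loihi = 1.0x (multipliers as reported by ABR).
relative_energy = {
    "Intel Loihi (Nengo DL)": 1.0,
    "Movidius Neural Compute Stick": 1.9,
    "NVIDIA Jetson TX1": 7.3,
    "Intel Xeon E5-2630": 8.2,
    "NVIDIA Quadro K4000": 38.0,
}

# Print platforms from most to least efficient.
for platform, multiplier in sorted(relative_energy.items(),
                                   key=lambda kv: kv[1]):
    print(f"{platform}: {multiplier}x Loihi's energy per inference")
```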
“These benchmarking results show that Loihi can perform inference on real-time data streams using standard feed-forward deep networks with significant efficiency advantages compared to conventional processor architectures. ABR’s Nengo DL software makes these gains accessible for mainstream use by hiding the complexity of the underlying spiking neural network implementation. This has important implications for the commercialization outlook for this technology,” stated Mike Davies, Director, Neuromorphic Computing Lab at Intel Corporation.
Dr. Chris Eliasmith, co-CEO of ABR, noted that for this application, “Nengo DL on Loihi outperforms all of these alternative platforms on an energy cost per inference basis while maintaining near-equivalent inference accuracy.” Furthermore, an analysis of tradeoffs between network size, inference speed, and energy cost indicates that Loihi’s comparative advantage over other low-power computing devices improves for larger networks.
Computing with artificial spiking neurons directly in software and hardware, known as “neuromorphic computing,” has long been pursued as a means of exploiting how the brain computes intelligence so efficiently. The lessons learned can be applied to improve artificial intelligence. “In this study, ABR has delivered strong empirical evidence that the long-sought-after efficiencies of computing with spiking neurons can now be realized in commercially valuable applications using Nengo DL on Loihi,” said Peter Suma, co-CEO of ABR.
ABR’s Nengo DL toolkit allows deep learning networks to run on neuromorphic hardware, CPUs and GPUs. This provides one development environment in which to define power-efficient, real-time, neuromorphic networks that can then be run on all supported hardware platforms, including the Intel Loihi neuromorphic research chip. Neuromorphic edge-AI computing with Nengo and Loihi reduces power costs while preserving the performance of deep networks.