NTT and the University of Tokyo Develop World’s First Optical Computing AI Using an Algorithm Inspired by the Human Brain
NTT Corporation and the University of Tokyo have devised a new learning algorithm, inspired by the brain's information processing, that is suited to multi-layered artificial neural networks (deep neural networks, DNNs) built on analog operations. The advance is expected to reduce the power consumption and computation time of AI. The results were published in the British scientific journal Nature Communications on December 26th.
By applying the algorithm to a DNN that performs optical analog computation, the researchers achieved the world’s first demonstration of efficiently executed optical DNN learning, a step toward high-speed, low-power machine learning devices. They also report the highest performance to date for a multi-layered artificial neural network built on analog operations.
Until now, the computationally heavy learning step has been carried out with digital calculations; this result shows that the learning step can also be made more efficient with analog calculations. In the demonstrated system, a recurrent neural network known as deep reservoir computing is realized by treating an optical pulse as a neuron and a nonlinear optical ring as a neural network with recurrent connections. By re-injecting the output signal into the same optical circuit, the network is artificially deepened.
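The deepening-by-re-injection idea can be illustrated with a small numerical sketch. The code below is only a software analogue of the scheme described above, not the optical hardware: a single fixed nonlinear reservoir is driven repeatedly, the signal read out of one pass is fed back in as the input of the next, and only a linear readout is trained. All sizes, names (such as run_reservoir and W_fb) and the toy task are illustrative assumptions.

```python
import numpy as np

# Sketch of "deep" reservoir computing: one fixed nonlinear recurrent
# reservoir is reused several times, with the signal from each pass
# re-injected as the input of the next pass to emulate extra layers.
# Sizes and the task are illustrative, not taken from the paper.

rng = np.random.default_rng(0)
n_nodes, n_in, n_passes = 200, 1, 3            # reservoir size, input dim, re-injection count

W_in  = rng.normal(scale=0.5, size=(n_nodes, n_in))      # fixed input coupling
W_res = rng.normal(scale=1.0, size=(n_nodes, n_nodes))   # fixed recurrent coupling
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # rescale for stable dynamics
W_fb  = rng.normal(scale=0.5, size=(n_in, n_nodes))      # fixed read-back used for re-injection

def run_reservoir(u_seq):
    """Drive the fixed nonlinear reservoir with an input sequence; return all states."""
    x = np.zeros(n_nodes)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ u + W_res @ x)      # nonlinear recurrent update (role of the optical ring)
        states.append(x)
    return np.array(states)

def deep_reservoir(u_seq):
    """Re-input the reservoir's signal into the same reservoir to artificially deepen it."""
    signal = u_seq
    all_states = []
    for _ in range(n_passes):
        states = run_reservoir(signal)
        all_states.append(states)
        signal = states @ W_fb.T               # read the pass back out and feed it in again
    return np.hstack(all_states)               # concatenated states from every pass

# Only the linear readout is trained (ridge regression), as in standard reservoir computing.
u_seq  = rng.normal(size=(500, n_in))
target = np.roll(u_seq[:, 0], 1)               # toy task: recall the previous input value
X = deep_reservoir(u_seq)
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ target)
print("training MSE:", np.mean((X @ W_out - target) ** 2))
```

Because only the readout weights are fitted, the nonlinear recurrent part can in principle be left to a physical substrate such as the optical ring, which is what makes reservoir computing attractive for analog hardware.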
DNN technology underpins advanced artificial intelligence (AI) applications such as machine translation, autonomous driving and robotics. However, the power and computation time these models require is growing faster than the performance of digital computers. DNNs that use analog signal calculations (analog operations) are expected to offer highly efficient, high-speed computation resembling the neural networks of the brain. The collaboration between NTT and the University of Tokyo has now developed a new learning algorithm suited to such analog-operation DNNs, one that does not require precise knowledge of the learning parameters inside the DNN.
The proposed method updates the learning parameters using only the final layer of the network and a nonlinear random transformation of the error between the network’s output and the desired output (the error signal). This calculation is easier to implement in analog hardware such as optical circuits. Beyond physical implementations, the method can also be applied to cutting-edge models used in applications such as machine translation and other DNN-based AI models. The research is therefore expected to help address emerging problems in AI computing, including growing power consumption and computation time.
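The learning rule described above, in which parameter updates are driven by a fixed nonlinear random projection of the final-layer error rather than by gradients backpropagated through every layer, can be sketched in a few lines. The example below is a minimal illustration of that idea (in the spirit of direct feedback alignment); the network sizes, the toy regression task, and names such as B for the fixed random feedback matrix are assumptions for illustration, not the actual configuration used in the study.

```python
import numpy as np

# Sketch of training with a nonlinear random projection of the output error
# instead of backpropagating gradients layer by layer.
# Shapes, names and the toy task are illustrative assumptions.

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, lr = 20, 100, 5, 0.05

W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))    # hidden-layer weights (trained)
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))   # output-layer weights (trained)
B  = rng.normal(scale=0.1, size=(n_hidden, n_out))   # fixed random feedback matrix (never trained)

def forward(x):
    h = np.tanh(W1 @ x)          # hidden activation (could be realized by an analog element)
    y = W2 @ h                   # linear readout
    return h, y

# Toy regression task: reproduce a fixed random linear map of the input.
T = rng.normal(size=(n_out, n_in))
X = rng.normal(size=(1000, n_in))
Y = X @ T.T

for x, y_target in zip(X, Y):
    h, y = forward(x)
    e = y - y_target                              # error at the final layer only
    # The output layer uses the error directly.
    W2 -= lr * np.outer(e, h)
    # The hidden layer uses a nonlinearly transformed random projection of the
    # same error signal, so no gradient has to travel back through W2.
    feedback = np.tanh(B @ e)
    W1 -= lr * np.outer(feedback * (1 - h**2), x)

h, y = forward(X[0])
print("sample squared error after training:", np.mean((y - Y[0]) ** 2))
```

The key property for analog hardware is that the feedback path is a fixed random transformation of a single error vector, which avoids having to propagate precise gradients back through every physical layer.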