
Research Opens New Neural Network Model Pathway to Understanding the Brain

NTT Research Embraces PHI Lab Scientist-led Paper and Academic Initiatives that Set Firmer Foundation for Neuroscientific Models

NTT Research, Inc., a division of NTT, announced that a research scientist in its Physics & Informatics (PHI) Lab, Dr. Hidenori Tanaka, was the lead author of a technical paper that advances basic understanding of the brain’s biological neural networks through artificial neural networks. Titled “From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction,” the paper was presented at NeurIPS 2019, a leading machine-learning, artificial intelligence (AI) and computational neuroscience conference, and published in Advances in Neural Information Processing Systems 32 (NeurIPS 2019). Work on the paper originated at Stanford University, the academic home of the paper’s six authors when the research was performed. Dr. Tanaka, then a post-doctoral fellow and visiting scholar at Stanford University, joined NTT Research in December 2019. The underlying research aligns with the PHI Lab’s mission to rethink the computer by drawing inspiration from the computational principles of neural networks in the brain.


Research on the paper began through collaboration between the labs of Stanford University Professors Surya Ganguli and Stephen Baccus, two of the paper’s co-authors. Dr. Ganguli, one of four Stanford professors who are lead investigators on collaborative projects with the NTT Research PHI Lab, is an associate professor in the Department of Applied Physics. Dr. Baccus is a professor in the Department of Neurobiology. Co-authors Niru Maheswaranathan, Lane McIntosh and Aran Nayebi were in the Stanford Neurosciences Ph.D. program when the work was performed. Drawing upon the co-authors’ previous work on deep learning models of retinal responses to natural scenes, this NeurIPS paper addressed a fundamental question in modern computational neuroscience: whether successful deep learning models were “simply replacing one complex system (a biological circuit) with another (a deep network), without understanding either.” By combining ideas from theoretical physics and interpretable machine learning, the authors developed a new way to perform model reduction of artificial neural networks that are trained to mimic the experimentally recorded neural responses of the retina to natural scenes. The underlying computational mechanisms were consistent with prior scientific literature, thus placing these neuroscientific models on firmer theoretical foundations.

“Because we are working on such a long-range, cross-disciplinary frontier, the work last year by Dr. Tanaka and his colleagues at Stanford is still fresh; moreover, it is particularly relevant to our continued exploration of the space between neuroscience and quantum information science, as the framework presents a new way to extract computational principles from the brain,” said PHI Lab Director Dr. Yoshihisa Yamamoto. “Establishing a solid foundation for neural network models is an important breakthrough, and we look forward to seeing how the research community, our university research partners, Dr. Tanaka and our PHI Lab build upon these insights and advance this work further.”



To better ground the framework of deep networks as neuroscientific models, the authors of this paper combine modern attribution methods with dimensionality reduction to determine the relative importance of interneurons for specific visual computations. This work analyzes the deep-learning models that were previously shown to reproduce four types of cell responses in the salamander retina: omitted stimulus response (OSR), latency coding, motion reversal response and motion anticipation. Applying the developed model reduction scheme yields simplified subnetwork models that are consistent with prior mechanistic models, with experimental support in three of the four response types. In the case of OSR, the analysis yields a new mechanistic model and hypothesis that redresses previous inadequacies. In all, the research shows that in the case of the retina, complex models derived from machine learning can not only replicate sensory responses but also generate valid hypotheses about computational mechanisms in the brain.
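The combination the paragraph above describes can be illustrated in miniature: compute an attribution for each hidden unit of a trained model across many stimuli, then apply dimensionality reduction to the resulting attribution matrix to find the few units or directions that dominate the computation. The sketch below uses a hypothetical random toy network (not the paper’s retinal models) and the integrated-gradients idea, which is exact here because the readout is linear in the hidden activations:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

rng = np.random.default_rng(1)
W1 = rng.standard_normal((32, 8))   # hypothetical hidden layer (32 "interneurons")
w2 = rng.standard_normal(32)        # hypothetical linear readout

def hidden(x):
    return relu(W1 @ x)

def unit_attributions(x):
    # Integrated gradients of the output w.r.t. hidden activations,
    # from a zero baseline. The readout is linear in h, so the
    # path integral collapses to attribution_i = h_i * w2_i exactly.
    h = hidden(x)
    return h * w2

# Attribution patterns across a batch of random stimuli
stimuli = rng.standard_normal((200, 8))
A = np.stack([unit_attributions(x) for x in stimuli])   # shape (200, 32)

# Dimensionality reduction (PCA via SVD) on the attribution matrix:
# the leading components indicate a low-dimensional subnetwork
# carrying most of the model's computation.
A_centered = A - A.mean(axis=0)
U, S, Vt = np.linalg.svd(A_centered, full_matrices=False)
var_explained = S**2 / (S**2).sum()
```

The units whose attributions load heavily on the top components form a candidate reduced subnetwork, analogous in spirit to the simplified models the paper compares against prior mechanistic accounts.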

“Unlike natural systems that physicists usually deal with, our brain is notoriously complicated and sometimes rejects simple mathematical models,” said Dr. Tanaka. “Our paper suggests that we can model the complex brain with complex artificial neural networks, perform model-reduction on those networks and gain intuition and understanding of how the brain operates.”


Dr. Tanaka’s theoretical pursuit of reducing the complexity of artificial neural networks not only advances our scientific understanding of the brain, but also provides engineering solutions that save time, memory and energy in training and deploying deep neural networks. His current research proposes a new pruning algorithm, SynFlow (Iterative Synaptic Flow Pruning), which challenges the existing paradigm that data must be used to quantify which synapses are important. Whereas last year’s paper sought to understand the brain by performing model reduction on artificial neural networks trained to mimic biological circuits, the new work aims to make deep learning more powerful and efficient by removing parameters from artificial neural networks.
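The data-free idea behind SynFlow can be sketched compactly: score each weight by the “synaptic flow” it carries when an all-ones input is pushed through the network with absolute-valued weights, then prune the lowest-scoring weights iteratively, recomputing scores each round. The minimal numpy version below covers only stacked fully-connected linear layers (the published algorithm handles general ReLU networks); the function names and the exponential pruning schedule are illustrative assumptions, not the paper’s reference implementation:

```python
import numpy as np

def synflow_scores(weights, in_dim):
    """Data-free saliency: forward an all-ones input through |W| layers,
    then score each weight as |w| * d(sum of outputs)/d|w|."""
    x = np.ones(in_dim)
    acts = [x]
    for W in weights:
        x = np.abs(W) @ x
        acts.append(x)
    grad = np.ones_like(acts[-1])           # dR/d(output), R = sum of outputs
    scores = [None] * len(weights)
    for i in reversed(range(len(weights))):
        # chain rule for x_{l+1} = |W_l| x_l
        scores[i] = np.abs(weights[i]) * np.outer(grad, acts[i])
        grad = np.abs(weights[i]).T @ grad
    return scores

def synflow_prune(weights, in_dim, keep_fraction, rounds=10):
    """Iteratively prune toward `keep_fraction` of weights remaining,
    recomputing synaptic-flow scores on the masked network each round."""
    masks = [np.ones_like(W) for W in weights]
    for r in range(1, rounds + 1):
        masked = [W * m for W, m in zip(weights, masks)]
        scores = synflow_scores(masked, in_dim)
        keep = keep_fraction ** (r / rounds)      # exponential schedule
        flat = np.sort(np.concatenate([s.ravel() for s in scores]))[::-1]
        thresh = flat[int(keep * flat.size) - 1]
        masks = [(s >= thresh).astype(float) for s in scores]
    return masks
```

Note that no training data appears anywhere in the scoring, which is exactly the paradigm shift the paragraph above describes: importance is read off the network’s own connectivity rather than estimated from data.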

This research plays a role in the PHI Lab’s broader mission to apply fundamental principles of intelligent systems, including our brain, in radically re-designing artificial computers, both classical and quantum. To advance that goal, the PHI Lab has established joint research agreements not only with Stanford but also with five additional universities, one government agency and a quantum computing software company. The other universities are California Institute of Technology (Caltech), Cornell University, Massachusetts Institute of Technology (MIT), Swinburne University of Technology and the University of Michigan. The government entity is NASA Ames Research Center in Silicon Valley, and the private company is 1Qbit. Taken together, these agreements span research in the fields of quantum physics, brain science and optical technology.

