MIT’s Neuroscientists Establish Relationship Between Human Brain and Next-word ML-based Prediction Models
Neuroscience has become a foundational discipline for developing machine-learning-based prediction models. AI developers now draw on concepts from cognitive neuroscience to build “predictive brains” in software. The latest announcement from MIT shows that the next-word prediction models used in search, discovery, and analytics actually resemble the language-processing centers of the human brain. A large body of research, such as the Human Brain Project (HBP), continues to expand knowledge across AI and machine learning, neuroscience, and computing.
What are Prediction Models?
The human brain is an active center that constantly thinks ahead, producing contextual, thought-based guesses about what comes next. It is, in that sense, the perfect prediction model, and it has inspired AI scientists and neuroscientists to build advanced artificial neural networks (ANNs) for next-word ML-based prediction. According to MIT, modern AI models for natural language processing (NLP) and computer vision (CV) have become highly effective and precise at predicting the next word in online search engines, recommendation systems, and texting apps. Machine-learning-based prediction models are designed to suggest specific keywords and phrases before the user finishes typing the whole sequence. These models work so efficiently that MIT neuroscientists think we are very close to building a cognitive-neuroscience language center outside of a living human body.
Most modern text-typing apps on smartphones, as well as search engines, use some form of next-word predictor.
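To make the idea concrete, here is a toy sketch of a next-word predictor. It is an illustrative bigram counter, not one of the models from the MIT study: it simply counts which words follow which and suggests the most frequent continuations.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word-pair frequencies: a toy bigram next-word model."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(follows, word, k=3):
    """Return the k most frequent words seen after `word`."""
    return [w for w, _ in follows[word.lower()].most_common(k)]

corpus = [
    "the brain predicts the next word",
    "the model predicts the next token",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "next" ranks first (seen twice)
```

Real keyboards and search engines use far richer models, but the interface is the same: given the text so far, rank candidate next words.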
Nancy Kanwisher of MIT says: “The better the model is at predicting the next word, the more closely it fits the human brain. It’s amazing that the models fit so well, and it very indirectly suggests that maybe what the human language system is doing is predicting what’s going to happen next.”
Nancy is a Walter A. Rosenblith Professor of Cognitive Neuroscience, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines (CBMM), and an author of the new study that explains the correlations between language centers of the human brain and ML-based prediction models.
Nancy Kanwisher; Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of CBMM and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute, are the senior authors of the study, which appeared last week in the Proceedings of the National Academy of Sciences. Martin Schrimpf, an MIT graduate student who works in CBMM, is the first author of the paper.
How Next-word Prediction Models Work in Real-life Scenario
A human brain trains itself to judge and refine various situations and thoughts based on predictions. A similar concept is applied to develop Artificial Neural Networks using NLP.
One prominent language-based next-word prediction model is GPT-3 (Generative Pre-trained Transformer 3), which, when given a prompt, can generate text and phrases similar to what a human would produce in real time. MIT researchers studied the GPT-3 model, measured the activity of its internal nodes, and compared it with the activity of the human brain. Besides GPT-3, word-prediction systems can also be designed around n-gram models or Long Short-Term Memory (LSTM) networks.
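At their core, causal language models like GPT-3 turn a prompt into raw scores (logits) over a vocabulary and normalize them with a softmax into next-word probabilities. The sketch below illustrates only that final step; the tokens and logit values are invented for illustration, not taken from any real model.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution (numerically stable)."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical logits a language model might emit after the prompt
# "The better the model is at predicting the next ..."
logits = {"word": 4.1, "token": 2.9, "step": 0.7, "banana": -3.2}
probs = softmax(logits)
best = max(probs, key=probs.get)
print(best, round(probs[best], 3))
```

The model’s “prediction” is simply the highest-probability entry in this distribution; sampling from it instead of taking the maximum is what lets such models generate varied text.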
AI scientists have been working specifically on deep neural networks (DNNs) to develop next-word prediction models. In a 2018 interview, Lars Muckli, professor of neuroscience at the Centre for Cognitive Neuroimaging in Glasgow, Scotland, said: “The outside world is not in our brain so somehow we need to get something into our brain that is a useful description of what’s happening – and that’s a challenge. You update your predictions (of) the future model that you create in order to cycle through the city without being run over.”
The MIT study sheds light on how deep neural networks actually work: computational “nodes” form connections of varying strength, organized in layers that pass information between one another in prescribed ways. While we are still far from developing an artificial visual cortex, speech, text, and visual recognition models are well within reach. By comparing the language-processing centers of the brain with artificial next-word prediction models, AI scientists could track and measure the behavioral and motor activity involved when humans listen to stories, read sentences, or infer the meaning of a sentence from a single typed word.
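The brain-to-model comparison can be pictured with a simple similarity score. The sketch below correlates one hypothetical model unit’s activations with hypothetical fMRI responses to the same sentences; the study itself used regression-based neural predictivity, and all numbers here are invented for illustration.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: one model unit's activation per sentence, and the
# measured response of one brain region to the same five sentences.
model_activations = [0.2, 0.9, 0.4, 0.7, 0.1]
brain_responses = [0.25, 0.80, 0.45, 0.65, 0.15]
score = pearson(model_activations, brain_responses)
print(round(score, 3))  # close to 1.0: the unit tracks the brain region
```

A score near 1.0 means the model unit rises and falls with the brain region across sentences; in the actual study, models that were better at next-word prediction achieved higher scores of this kind.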
The Next Frontier in Prediction Models using AI ML Algorithms
The study of existing next-word prediction models could open an entirely new avenue for applying artificial intelligence to search, recommendation, and discovery. Models grounded in cognitive neuroscience could help data scientists and neuroscientists invent ML models specifically for individuals afflicted by neurodegenerative disorders such as Alzheimer’s and Parkinson’s. Not only would this help researchers establish a relationship between neuroscience, computing, and AI engineering, it would also enable AI labs to use these models for personalized prediction and neural approaches to these conditions.
Next-word models not only produce neural responses to a single word but also hint at the human behaviors and activities associated with typing, reading, and voice-to-text.
“We found that the models that predict the neural responses well also tend to best predict human behavior responses, in the form of reading times. And then both of these are explained by the model performance on next-word prediction. This triangle really connects everything together,” Schrimpf says.
“A key takeaway from this work is that language processing is a highly constrained problem: The best solutions to it that AI engineers have created end up being similar, as this paper shows, to the solutions found by the evolutionary process that created the human brain. Since the AI network didn’t seek to mimic the brain directly — but does end up looking brain-like — this suggests that, in a sense, a kind of convergent evolution has occurred between AI and nature,” says Daniel Yamins, an assistant professor of psychology and computer science at Stanford University, who was not involved in the study.
The researchers also plan to try to combine these high-performing language models with some computer models Tenenbaum’s lab has previously developed that can perform other kinds of tasks such as constructing perceptual representations of the physical world.
“If we’re able to understand what these language models do and how they can connect to models which do things that are more like perceiving and thinking, then that can give us more integrative models of how things work in the brain. This could take us toward better artificial intelligence models, as well as giving us better models of how more of the brain works and how general intelligence emerges than we’ve had in the past,” Tenenbaum says.
Details of the Background Study
The research was funded by a Takeda Fellowship; the MIT Shoemaker Fellowship; the Semiconductor Research Corporation; the MIT Media Lab Consortia; the MIT Singleton Fellowship; the MIT Presidential Graduate Fellowship; the Friends of the McGovern Institute Fellowship; the MIT Center for Brains, Minds, and Machines, through the National Science Foundation; the National Institutes of Health; MIT’s Department of Brain and Cognitive Sciences; and the McGovern Institute.
Other authors of the paper are Idan Blank Ph.D. ’16 and graduate students Greta Tuckute, Carina Kauf, and Eghbal Hosseini.