
How Neural Networks Can Read Thoughts and Restore Movement to Paralyzed Limbs

While diving into the Atlantic Ocean off the shores of North Carolina with his friends in 2010, Ian Burkhart, then a college student, sustained a devastating spinal cord injury that left him paralyzed from the chest down.

But with a brain-computer interface powered by neural networks, he can now use his right hand to pick up objects, pour liquids and play Guitar Hero.

Ian Burkhart plays a guitar video game at Ohio State University’s Wexner Medical Center, with researcher Nick Annetta looking on. Photo courtesy of Battelle.

A Blackrock Microsystems microchip implanted in Burkhart’s brain connects to a computer running algorithms developed at Battelle. The algorithms interpret his neural activity and send signals to an electrode sleeve on his right hand. The sleeve, also invented at Battelle, stimulates the nerves and muscles in his arm to elicit a specific hand movement. Burkhart is the first participant in a clinical trial led by Ohio State University and Battelle, a nearby independent research and development organization.

For now, Burkhart can use the system, called NeuroLife, only in a laboratory at Ohio State. But the eventual goal is for NeuroLife to become portable enough to mount on the user’s chair for home use.

If people at home could use the NeuroLife system for daily tasks like eating, brushing their teeth and getting dressed, it “would make a big impact on their ability to live independently,” said David Friedenberg, senior research statistician at Battelle and co-author on their latest paper, published in Nature Medicine.

“We want to make it easy enough that the user and their caregiver can set it up,” he said, “where you don’t need a bunch of Ph.D.s and engineers in the room to make it all work.”

Neural Networks Read Neural Signals

AI is being developed for a wide range of assistive technology tools, from prosthetic hands to better hearing aids. Deep learning models can provide a synthesized voice for individuals with impaired speech, help the blind see, and translate sign language into text.

One reason assistive device developers turn to deep learning is that it works well for decoding noisy signals, like electrical activity from the brain.

On an NVIDIA Quadro GPU, the researchers trained a deep learning neural decoder, the algorithm that translates neural activity into intended command signals, on brain signals from scripted sessions in which Burkhart was asked to think about executing specific hand motions. The neural network learned which brain signals corresponded to which desired movements.
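
To make the setup concrete, here is a minimal sketch of such a supervised decoder in PyTorch. The channel count, window size, movement classes and architecture are assumptions for illustration; they are not Battelle’s actual NeuroLife model.

```python
# A minimal sketch of a supervised neural decoder. Shapes, class count and
# architecture are assumed for illustration, not taken from NeuroLife.
import torch
import torch.nn as nn

N_CHANNELS = 96    # electrodes on the implanted array (assumed)
N_BINS = 10        # time bins of neural features per decoding window (assumed)
N_MOVEMENTS = 6    # e.g. hand open, hand close, wrist flexion... (assumed)

class NeuralDecoder(nn.Module):
    """Maps a window of binned neural activity to intended-movement logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                          # (batch, channels, bins) -> (batch, channels*bins)
            nn.Linear(N_CHANNELS * N_BINS, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, N_MOVEMENTS),
        )

    def forward(self, x):
        return self.net(x)

def train(decoder, loader, epochs=20, lr=1e-3, device="cuda"):
    """Supervised training on scripted-session data: each feature window is
    labeled with the movement the participant was cued to think about."""
    decoder.to(device).train()
    opt = torch.optim.Adam(decoder.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for features, labels in loader:            # features: (batch, channels, bins)
            features, labels = features.to(device), labels.to(device)
            opt.zero_grad()
            loss_fn(decoder(features), labels).backward()
            opt.step()
    return decoder
```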

However, a key challenge in creating robust neural decoding systems is that brain signals vary from day to day. “If you’re tired on one day, or distracted, that might influence the neural activity patterns that are meant to control the different movements,” said Michael Schwemmer, principal research statistician in Battelle’s advanced analytics group.

To recalibrate the neural network, Burkhart must think about moving his hand in specific ways. In this image from September 2018, he’s at work at Ohio State University’s Wexner Medical Center. Photo courtesy of Battelle.

So when Burkhart came into the lab twice a week, each session started with a 15- to 30-minute recalibration of the neural decoder, during which he would work through a scripted session, thinking in turn about moving different parts of his hand. These biweekly sessions generated new brain data, which was used to update two neural networks: one leveraged labeled data for supervised learning, and another used unsupervised learning.
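
A hedged sketch of what that session-start recalibration could look like for the supervised decoder: briefly fine-tune the existing model on the short, labeled scripted block recorded that day. The unsupervised variant adapts without these labels; the specifics below are assumptions built on the earlier sketch, not the published method.

```python
# A sketch of session-start recalibration, reusing the NeuralDecoder from the
# earlier snippet. A small learning rate and few epochs nudge the decoder
# toward that day's neural statistics rather than retraining it from scratch.
import torch

def recalibrate(decoder, todays_loader, epochs=3, lr=1e-4, device="cuda"):
    decoder.to(device).train()
    opt = torch.optim.Adam(decoder.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for features, labels in todays_loader:     # ~15-30 minutes of cued trials
            features, labels = features.to(device), labels.to(device)
            opt.zero_grad()
            loss_fn(decoder(features), labels).backward()
            opt.step()
    return decoder
```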

Together, these networks achieved over 90 percent accuracy in decoding Burkhart’s brain signals and predicting the motions he was thinking about. The unsupervised model sustained this accuracy level for more than a year and did not require explicit recalibration.
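
Accuracy here can be read as the fraction of feature windows for which the predicted movement matches the cued one. A minimal scoring sketch follows; the evaluation details are assumptions, not the paper’s exact protocol.

```python
# Scores decoding accuracy on held-out trials: the fraction of windows where
# the predicted movement matches the cued movement. Details are assumed.
import torch

def accuracy(decoder, loader, device="cuda"):
    decoder.to(device).eval()
    correct, total = 0, 0
    with torch.no_grad():
        for features, labels in loader:
            preds = decoder(features.to(device)).argmax(dim=-1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total
```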

Using deep learning also reduced the time it takes the NeuroLife system to process a user’s brain signals and send commands to the electrode sleeve. The current lag is 0.8 seconds, an 11 percent improvement over previous methods.
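
The 0.8-second figure covers the full pipeline, from signal acquisition through decoding to stimulation. As a rough illustration, the model-inference portion alone can be benchmarked as sketched below, with random data standing in for live neural features.

```python
# Benchmarks the decoder's inference latency with a synthetic feature window.
# This measures only the model step; the article's 0.8-second figure spans the
# whole acquisition-decode-stimulate loop.
import time
import torch

def benchmark_decoder(decoder, n_trials=100, device="cuda"):
    decoder.to(device).eval()
    window = torch.randn(1, 96, 10, device=device)   # one (channels, bins) window, shapes assumed
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(n_trials):
            decoder(window).argmax(dim=-1)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_trials  # average seconds per decode
```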

“If you’re trying to pick up a glass of water, you want to think about it and move. You don’t want a long lag,” said Friedenberg. “That’s something we measure pretty carefully.”

[This article was directly syndicated from NVIDIA’s blog site with approvals]
