Artificial Intelligence | News | Insights | AiThority

The Evolving Wave of Human-Computer Interfaces

Scientific history shows that every successful attempt to understand a facet of the human mind has been a strong step forward for human progress. Be it Sigmund Freud's methods for understanding ailing minds or Larson's polygraph test, such inventions remain in use and widely applauded.

Now, scientists at Columbia University have made a breakthrough. Their invention reconstructs speech by interpreting brain activity.


Methodology

The initial phase of the research analyzed individuals with normal speech. Scientists studied the brain patterns produced when a subject spoke and were able to reconstruct a blueprint of sorts, mapping sentences and words to brain activity.
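The idea of learning a mapping from brain activity to speech can be illustrated with a toy sketch. Everything below is a stand-in: the simulated "neural" features, the linear decoder, and the spectrogram targets are illustrative assumptions, not the actual Columbia pipeline or data.

```python
import numpy as np
from numpy.linalg import lstsq

# Illustrative stand-in data, not real recordings.
rng = np.random.default_rng(0)
n_frames, n_electrodes, n_spec_bins = 200, 16, 32

# Simulated neural features, one row per time frame.
neural = rng.normal(size=(n_frames, n_electrodes))

# Simulated speech spectrogram frames, time-aligned to the neural data
# via a hidden linear relationship plus a little noise.
true_map = rng.normal(size=(n_electrodes, n_spec_bins))
spectrogram = neural @ true_map + 0.01 * rng.normal(size=(n_frames, n_spec_bins))

# Fit a least-squares decoder: brain activity frame -> spectrogram frame.
decoder, *_ = lstsq(neural, spectrogram, rcond=None)
reconstructed = neural @ decoder

# On this near-noiseless toy problem the reconstruction error is tiny.
mse = float(np.mean((reconstructed - spectrogram) ** 2))
```

A linear decoder is the simplest possible choice here; the research described in the article relies on far more powerful Deep Learning models, but the train-a-mapping-then-decode structure is the same.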

The next phase applied technology used in intelligent assistants such as Amazon's Alexa and Apple's Siri to amalgamate these patterns and convert them into a robotic voice.


Phase three was the development of an algorithm pre-integrated with the output of the robotic voice learned previously. The software, christened the 'vocoder', was also infused with brain patterns recorded from individuals who can speak normally.

The last phase was the main challenge. Researchers had the vocoder learn the brain signals of patients being treated for chronic epilepsy. By applying advanced Machine Learning and Deep Learning techniques, the vocoder was then ready to transcribe brain signals into speech.
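The final synthesis step, turning decoded spectrogram-like frames back into an audible waveform, is the vocoder's job. The sketch below is a deliberately crude stand-in (one sinusoid per frequency bin, weighted by frame magnitudes); the actual vocoder described in the article uses much richer, learned models, and every name and parameter here is a made-up illustration.

```python
import numpy as np

def toy_vocoder(spec_frames, sr=8000, hop=80):
    """Crude vocoder sketch: resynthesize audio by summing one sinusoid
    per spectrogram bin, weighted by each frame's magnitudes."""
    n_frames, n_bins = spec_frames.shape
    freqs = np.linspace(100, sr / 2 - 100, n_bins)  # bin centre frequencies (Hz)
    t = np.arange(hop) / sr
    audio = np.zeros(n_frames * hop)
    phase = np.zeros(n_bins)
    for i, frame in enumerate(spec_frames):
        seg = np.zeros(hop)
        for b in range(n_bins):
            seg += frame[b] * np.sin(2 * np.pi * freqs[b] * t + phase[b])
        phase += 2 * np.pi * freqs * hop / sr  # keep phase continuous across frames
        audio[i * hop:(i + 1) * hop] = seg
    return audio

# Drive it with a made-up "decoded" spectrogram: 20 frames of 8 bins.
rng = np.random.default_rng(1)
frames = np.abs(rng.normal(size=(20, 8)))
wave = toy_vocoder(frames)  # 20 frames * 80 samples = 1600 samples
```

The point of the sketch is only the interface: a decoder produces frame-by-frame acoustic features, and a vocoder turns those features into a continuous waveform that can be played back as a voice.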


Results?

The results were beyond believable! The vocoder, used on people who could not speak or lacked clarity of speech, worked for 75% of the patients, which is outstanding for beta-level software. The technology is predicted to be a boon for verbally disabled people. In an official comment, the university said that this breakthrough paves a concrete way for the technology's future: the scientific community will be able to develop wearable products that directly translate brain activity into a virtual voice. For example, even an ordinary person could think, "I need a haircut," and the thought would be voiced automatically. Awesome, isn't it?

