IBM AI Provides Ultra-Modern Captioning for News Broadcasts

IBM researchers have devised a software architecture that achieves best-in-class results for captioning news broadcasts. About two years ago, the company achieved something similar with transcription, which is not as easy as it sounds: the machine-learning-driven initiative had to overcome a plethora of obstacles before reaching its goal. Now, researchers at the Armonk, New York-based software giant have achieved a breakthrough in captioning capabilities. They have detailed their findings in a paper, which they will present at an upcoming conference in Brighton.

IBM states the technology was hard to develop given background noise and news anchors speaking about a wide range of topics. The broadcasts also contained a large volume of disparate material, such as on-site interviews, multimedia, and TV show clips.

As IBM researcher Samuel Thomas explains in a blog post, the AI leverages a combination of long short-term memory (LSTM) networks, a type of model capable of learning long-term dependencies, and deep acoustic models, along with complementary language models. The acoustic models contained up to 25 layers of nodes (mathematical functions mimicking biological neurons) trained on speech spectrograms, or visual representations of signal spectra, while the six-layer LSTM networks learned a “rich” set of acoustic features to enhance language modeling.
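To make the architecture more concrete, here is a minimal sketch in PyTorch of the two kinds of components described above: a deep acoustic network that scores spectrogram frames, and a multi-layer LSTM language model. This is not IBM's code; the layer counts, vocabulary size, and feature dimensions are illustrative assumptions.

```python
# Minimal sketch (not IBM's system) of the two components described above.
# All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AcousticModel(nn.Module):
    """Stack of fully connected layers scoring spectrogram frames,
    standing in for the deep (up to 25-layer) acoustic networks."""
    def __init__(self, n_mels=40, n_layers=6, hidden=512, n_phones=42):
        super().__init__()
        layers, dim = [], n_mels
        for _ in range(n_layers):
            layers += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        layers.append(nn.Linear(dim, n_phones))  # per-frame phone scores
        self.net = nn.Sequential(*layers)

    def forward(self, frames):           # frames: (batch, time, n_mels)
        return self.net(frames)          # (batch, time, n_phones)

class LSTMLanguageModel(nn.Module):
    """Word-level LSTM language model, analogous to the six-layer
    LSTM networks used to enhance language modeling."""
    def __init__(self, vocab=10000, embed=256, hidden=512, n_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, num_layers=n_layers, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, tokens):           # tokens: (batch, seq_len)
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)               # next-word logits at each position

# Smoke test with random inputs
am, lm = AcousticModel(), LSTMLanguageModel()
print(am(torch.randn(2, 100, 40)).shape)            # torch.Size([2, 100, 42])
print(lm(torch.randint(0, 10000, (2, 20))).shape)   # torch.Size([2, 20, 10000])
```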

IBM researchers followed this procedure:

  • The system was trained on 1,300 hours of data from the Linguistic Data Consortium
  • The researchers evaluated the AI on a test set consisting of two hours of data from six shows, tied together by 100 overlapping speakers
  • A second test used four hours of data from 12 shows with 230 overlapping speakers
  • To measure the results, IBM worked with speech and search technology firm Appen
  • The system scored word error rates of 6.5% and 5.9% on the first and second tests, respectively (see the sketch after this list for how this metric is computed)
  • This fell slightly short of human performance (3.6% and 2.8% on the first and second tests, respectively)
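For context on those percentages: word error rate is the standard accuracy metric for speech recognition, computed as the word-level edit distance between the system's transcript and a reference transcript, normalized by the reference length. Below is a minimal generic implementation, not the scoring tool IBM and Appen used.

```python
# Generic word error rate (WER): edit distance between hypothesis and
# reference transcripts, divided by the number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(f"{wer('the news at nine', 'the news at five'):.1%}")  # 25.0%
```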
