
Unbabel Launches MT-Telescope To Deeply Understand Machine Translation Performance

New open source tool helps developers and customers analyze and understand machine translation performance, and helps researchers rigorously benchmark their advances

Unbabel, an AI-powered Language Operations platform that helps businesses deliver multilingual support at scale, announced the launch of MT-Telescope, a new tool that enables developers and users of Machine Translation (MT) systems to analyze and understand MT quality in depth. Building on COMET, Unbabel’s automated quality measurement framework, MT-Telescope is an open source tool that for the first time lifts the hood on MT quality analysis, providing uniquely granular, quantitative insights into the performance of MT systems.
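
For context, this is roughly what COMET-style scoring looks like in code. The sketch below assumes the open-source unbabel-comet Python package (pip install unbabel-comet); the checkpoint name and the shape of predict()'s return value follow recent releases of the package and may differ from the version current at the time of this announcement:

```python
# Minimal COMET scoring sketch (assumes: pip install unbabel-comet).
# The checkpoint name and predict() return type follow recent releases
# of the package; both have changed across versions.
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

data = [
    {
        "src": "Dem Feuer konnte Einhalt geboten werden",
        "mt": "The fire could be stopped",
        "ref": "They were able to control the fire.",
    }
]

output = model.predict(data, batch_size=8, gpus=0)  # gpus=0 -> run on CPU
print(output.system_score)  # single corpus-level quality score
print(output.scores)        # per-segment scores, one per input triple
```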


“At Unbabel, we constantly work on developing, training, maintaining, and deploying MT systems at a rapid pace and to high quality standards. This challenging need drives our research and development objectives, especially in the domain of quality analysis and evaluation,” said Alon Lavie, VP of Language Technologies at Unbabel. “MT-Telescope helps our LangOps specialists and development teams make smarter decisions for customers about which MT system better suits their needs, and enables the MT research community to easily use best practice analysis methods and tools to rigorously benchmark their advances.”

Typically, MT quality metrics such as COMET, BLEU, or METEOR provide a single overall quality score for an entire data set. MT-Telescope takes this quality scoring a step further by exposing the underlying factors behind performance, zooming in for a fine-grained analysis of translation accuracy down to individual words, terms, and sentences.
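
To make that distinction concrete, here is a small illustration of corpus-level versus segment-level scoring. It uses the widely available sacrebleu package as a stand-in for the metrics named above (an assumption for illustration; MT-Telescope itself builds on COMET):

```python
# Corpus-level vs. segment-level scoring (assumes: pip install sacrebleu).
import sacrebleu

hypotheses = ["The cat sat on the mat.", "He bought a new telephone."]
references = [["The cat sat on the mat.", "He purchased a new phone."]]

# A corpus-level metric collapses the whole test set into one number,
# which is exactly the limitation described above.
print(f"Corpus BLEU: {sacrebleu.corpus_bleu(hypotheses, references).score:.1f}")

# Segment-level scores are the raw material for finer-grained analysis:
# they can be bucketed by terminology, named entities, length, and so on.
for hyp, ref in zip(hypotheses, references[0]):
    print(f"{sacrebleu.sentence_bleu(hyp, [ref]).score:5.1f}  {hyp}")
```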


“Our research shows that one of the biggest needs in applying machine translation is insight into its usability, an area where current methods fall short,” comments Dr. Arle Lommel, senior analyst at CSA Research. “Guidance-focused evaluation that focuses on how well MT suits particular use cases will help extend the technology to new areas and increase acceptance of machine translation-based workflows.”

In addition to this greater degree of granularity, MT-Telescope has an intuitive visual browser interface that lets non-technical users compare two MT systems and assess which is the better fit for their objectives. MT-Telescope’s visualizations provide comparison across three key areas (a simplified sketch of the first follows the list):

  • A comparison of quality scores for subsets of the data, such as named entities (e.g., product or brand names), terminology (e.g., domain-specific terms and phrases), or segment length (the length of the translated sentence)
  • A side-by-side error analysis of each MT system, allowing for substantive contrastive comparisons
  • A visualization of the distribution of quality scores between the two systems
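
As a rough illustration of the first kind of comparison, the hypothetical sketch below buckets per-segment quality scores by segment length and averages them per system. The data, bucket boundaries, and helper names are invented for illustration and do not reflect MT-Telescope’s actual implementation:

```python
# Hypothetical bucketed comparison of two MT systems (illustrative only).
from statistics import mean

# Invented per-segment quality scores (e.g., from COMET) for two MT
# systems over the same test set, plus each source segment's length.
seg_lengths = [4, 9, 17, 25, 31, 42]
system_a = [0.71, 0.68, 0.55, 0.49, 0.52, 0.41]
system_b = [0.69, 0.70, 0.61, 0.58, 0.57, 0.50]

def bucket(length: int) -> str:
    """Assign a segment to a coarse length bucket (boundaries are arbitrary)."""
    if length < 10:
        return "short"
    if length < 30:
        return "medium"
    return "long"

# Group the paired scores by length bucket, then compare average
# quality per bucket, mirroring the subset comparison described above.
buckets: dict[str, list[tuple[float, float]]] = {}
for n, a, b in zip(seg_lengths, system_a, system_b):
    buckets.setdefault(bucket(n), []).append((a, b))

for name, pairs in buckets.items():
    a_scores, b_scores = zip(*pairs)
    print(f"{name:>6}: A={mean(a_scores):.2f}  B={mean(b_scores):.2f}")
```

A real analysis would bucket by named entities and terminology as well, but the principle is the same: aggregate scores are broken down by subset so that systematic weaknesses become visible.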

