Marketers Can Harness the Power of Machine Translation to Localise Their Websites and Multilingual Campaigns, Says Weglot
A new study on the accuracy of AI and Machine Translation (MT) software has shown the tools are more accurate at translating written text than people might think – in some cases requiring zero edits from professional linguists.
A global shortage of skilled interpreters continues to drive the development of AI and Machine Translation, with Meta recently announcing its own AI translator tool covering more than 200 languages. But there is still reluctance to use MT for marketing content because of often unproven prejudices that have built up over the years in the localisation industry.
The aim of the research was to debunk those common myths and prejudices. Conducted by Weglot and language consultants Nimdzi, the research evaluated and compared five of the leading Machine Translation providers – Amazon Translate, DeepL, Google Cloud, Microsoft Translator, and ModernMT.
Commenting on why Weglot set out to perform this research, Augustin Prot, CEO at Weglot, said: “We wanted to test the leading Machine Translation tools with marketing content and ‘exotic’ languages like Arabic and Chinese – which are often avoided because of the alleged lower translation quality.”
The MT tools were tested on their accuracy and reliability in translating 168 different segments containing more than 1,000 different words from American English into French, German, Italian, Spanish, Simplified Chinese, Arabic, and European Portuguese.
Reviewed by professional linguists, 85% of the translations across the 14 reviews were scored as ‘Very good’ or ‘Acceptable’, and none of the Machine Translated material was scored as ‘Very bad’.
Italian was the most difficult language to translate with an average acceptability score of 2.6, while German scored the highest at 3.4. The rest of the scores include Spanish (3.2), Portuguese (3), Arabic (3), French (2.9), and Simplified Chinese (2.8).
Of the 168 segments tested on the software, German again came out on top, with 145 segments requiring no edits from the professional linguists after being translated, compared with just 58 unedited segments for Portuguese.
However, the simple ampersand (&) proved a recurring problem for the MT tools, and there was also confusion between Brazilian and European Portuguese, as well as contextual and punctuation issues.
In 10 of the 14 reviews, the two professional linguists reported being “positively surprised” by the quality of the translations, with the MT output better than they had originally expected.