
Using Synthetic Voice to Expand the Scale and Reach of Content

Content in English has dominated the entertainment industry for decades. But as technology accelerates globalization, content creators will have to start thinking about how to serve their content in other languages. In doing so, they can not only become more inclusive of different audiences but also scale their reach across regions without jeopardizing the entertainment value of their content.

That risk is the very reason many have shied away from translating their audio-based content. Whether for TV, movies, or podcasts, translating audio and video content in the past meant forfeiting the unique voices of the program's talent, who typically speak only English. Content creators can now overcome these hurdles by using synthetic voice technology.


How to create an approved voice clone 

Voice cloning, a subset of synthetic voice, uses artificial intelligence to create an authentic-sounding artificial clone of a target speaker's voice. We're adamant about obtaining both written and verbal approval from the individual whose voice we are cloning, and we never accept approval given on their behalf by a third party, such as the production studio.

After receiving the necessary approvals, the process begins with acquiring three hours of clean audio of the target person's voice, known as training data. This audio is then translated and fed through a synthetic voice solution like MARVEL.ai, producing output in another language while maintaining the unique qualities of that person's voice.
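To make the workflow concrete, here is a minimal sketch of the steps just described: verify consent, check that enough clean training audio exists, then clone, transcribe, translate, and synthesize. The client object and its method names are hypothetical stand-ins for a commercial synthetic-voice service such as MARVEL.ai, not its actual API, and the three-hour minimum is taken directly from the process above.

```python
from dataclasses import dataclass
from pathlib import Path

# Hypothetical sketch of the cloning workflow described above. The `client`
# argument stands in for a synthetic-voice service such as MARVEL.ai; its
# method names are illustrative assumptions, not a real API.

MIN_TRAINING_HOURS = 3.0  # clean "training data" audio required per the process above


@dataclass
class ConsentRecord:
    speaker: str
    written_approval: bool
    verbal_approval: bool
    approved_by_speaker_directly: bool  # third-party (e.g. studio) approval is rejected

    def is_valid(self) -> bool:
        return (
            self.written_approval
            and self.verbal_approval
            and self.approved_by_speaker_directly
        )


def build_translated_voice_track(client, consent: ConsentRecord,
                                 audio_files: list[Path], total_hours: float,
                                 target_language: str):
    """Clone a speaker's voice and synthesize translated speech with it."""
    if not consent.is_valid():
        raise PermissionError(f"Missing direct approval from {consent.speaker}")
    if total_hours < MIN_TRAINING_HOURS:
        raise ValueError(f"Need at least {MIN_TRAINING_HOURS}h of clean training audio")

    voice_id = client.train_voice(consent.speaker, audio_files)      # hypothetical call
    transcript = client.transcribe(audio_files)                      # hypothetical call
    translated = client.translate(transcript, target_language)       # hypothetical call
    return client.synthesize(voice_id, translated, target_language)  # hypothetical call
```

The consent gate comes first on purpose: no audio is processed until the speaker's own written and verbal approval is on record.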



Because of the distinctive nature of languages, regions, and dialects, proper translation requires a native speaker to validate the content. Veritone approaches this as a managed service and significantly minimizes the need for human intervention by selecting the best AI engine for the project at the outset. Proper engine selection is the key to reducing additional translation costs and keeping projects on schedule.
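One simple way to picture that up-front engine selection and the native-speaker check is a locale-to-engine lookup combined with a confidence gate that routes only low-confidence segments to human review. The engine names, threshold, and data structures below are illustrative assumptions, not Veritone's actual selection logic.

```python
from dataclasses import dataclass

# Illustrative-only sketch: pick the engine before work starts, then send only
# low-confidence segments to a native speaker. Engine names and the threshold
# are assumptions; the article states only that choosing the right engine up
# front reduces translation cost and keeps timelines intact.

REVIEW_THRESHOLD = 0.85  # assumed confidence below which a native speaker reviews

# Hypothetical mapping of target locale -> best-performing translation engine
ENGINE_BY_LOCALE = {
    "es-MX": "engine-a",
    "es-ES": "engine-b",
    "pt-BR": "engine-c",
}


@dataclass
class TranslatedSegment:
    text: str
    confidence: float  # engine-reported confidence, 0.0 to 1.0


def select_engine(locale: str) -> str:
    """Pick the engine best suited to the target locale at project kickoff."""
    try:
        return ENGINE_BY_LOCALE[locale]
    except KeyError:
        raise ValueError(f"No vetted engine for locale {locale!r}") from None


def needs_native_review(segments: list[TranslatedSegment]) -> list[TranslatedSegment]:
    """Return only the segments a native speaker still needs to validate."""
    return [s for s in segments if s.confidence < REVIEW_THRESHOLD]
```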

What synthetic voice means for content creators

We recently worked with Bryan Barletta, the voice of Sounds Profitable, a podcast covering the latest adtech in the podcast space, to translate and transform his content into Spanish using a clone of his voice. With three voices mapped in two weeks, he can now expand the reach of his content to a growing audience of Spanish-speaking podcast listeners.

But that's just one use case for the technology. Recently, a popular Netflix show received backlash because the dubbed voices did not match the actors' actual voices. Popular TV shows and movies can avoid this by using synthetic voice technology to carry their talent's own voices into other target languages.

Not only does this satisfy the audience's desire to hear an actor's unique voice in other languages, it also opens the door for content creators to reach audiences outside their country of origin. When done ethically, the level of scale and reach that creators can achieve is something the world has never seen. While Veritone's mission has always been to democratize AI, we are now seeing how the technology will democratize content creation.

