
Neosapience Brings True Text-Driven Emotional Range and Style Control to AI for the First Time

Neosapience, the startup behind Typecast, an AI-powered virtual actor service that creates synthetic voices and videos, announced new, industry-first capabilities that bring an unprecedented level of depth and richness to AI actors. Its text-driven emotion style control feature lets Typecast users type any emotion imaginable into a script of dialogue, and Typecast's AI system will recognize and interpret the request using natural language processing.

Because Typecast's AI understands emotion descriptions written in natural language, a user can fine-tune emotions and virtual actors will deliver lines with natural cadence. For example, Typecast's AI can digest a line such as (sad but slightly mad) "I don't care" or (disappointed but seems okay) "I don't care," so that the AI actor applies the appropriate intonation, emphasis, style, and speed, delivering the line the way a human actor would. AI systems have not been able to display emotions in this way before, and the development opens up substantial new options for content creation.
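Typecast's exact script syntax is not published, but the examples above suggest a simple convention: an emotion description in parentheses, followed by the line itself. Below is a minimal Python sketch of a parser for that assumed format; the regex, function name, and quoting rules are hypothetical, not Typecast's actual implementation.

```python
import re

# Hypothetical parser for emotion-annotated dialogue lines such as:
#   (sad but slightly mad) "I don't care"
# The "(...)" directive and optional quotes are assumptions about the format.
DIRECTIVE = re.compile(r'^\s*\(([^)]+)\)\s*"?(.*?)"?\s*$')

def parse_line(script_line: str) -> tuple[str, str]:
    """Split a script line into (emotion description, dialogue text)."""
    match = DIRECTIVE.match(script_line)
    if match is None:
        return "", script_line.strip()  # no directive: neutral delivery
    emotion, dialogue = match.groups()
    return emotion.strip(), dialogue.strip()

print(parse_line('(sad but slightly mad) "I don\'t care"'))
# -> ('sad but slightly mad', "I don't care")
```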

"Our team has worked relentlessly to push the boundaries of what is possible with AI to create meaningful experiences," said Taesu Kim, co-founder and CEO of Neosapience. "We've taken a huge step forward in terms of the flexibility of AI to express sophisticated emotions by understanding written descriptions, in natural language, of those emotions, which is exactly what a human actor does. Now anyone hoping to use virtual actors in a project, be it a YouTube short, a company presentation, voiceover for a feature film, or countless other purposes, can do so for a fraction of the cost and time associated with using a human actor. We believe that with this latest innovation, Neosapience and our Typecast service will elevate creativity and spark new forms of content."


Traditionally, AI voice-over systems and the vendors that offer them give only a few options for character emotions: specific emotions are predefined by the provider, leaving creators with very limited choices to attach to their scripts. Nor have creators been able to scale or customize emotions in their virtual actors based on what a script calls for. These limitations often result in flat or incongruent dialogue, which has slowed widespread adoption of virtual actors to date. Now, with Typecast, users simply type in any emotion they can think of, and the AI recognizes and understands the meaning of the language, using it to inform and influence speech delivery.

Text-driven emotion style control frees virtual actors to express a full range of emotions appropriately and with nuance, rapidly responding in context as dialogue unfolds.


Neosapience's research team has spent years studying and developing AI systems that can replicate the voices, mannerisms, and other distinctive elements of human speech and behavior, opening new possibilities for creators and organizations.


Starting today, Typecast's speech synthesizer takes emotional style as an input embedding vector learned from a large speech corpus. The system learns the relationship between natural-language descriptions and the emotion styles present in that speech data; at inference time, it converts a natural-language input describing emotion and vocal style into a style embedding vector and passes that vector to the speech synthesizer.
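In code terms, this is a conditioning pipeline: encode the free-form description into a fixed-size style vector, then feed that vector to the synthesizer alongside the script text. The PyTorch sketch below illustrates the idea; the module names, sizes, and architecture are illustrative assumptions, not Neosapience's actual model.

```python
import torch
import torch.nn as nn

STYLE_DIM = 256  # assumed size of the style embedding

class StyleEncoder(nn.Module):
    """Maps a natural-language emotion description to a style vector."""
    def __init__(self, vocab_size: int = 10_000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, STYLE_DIM)
        self.rnn = nn.GRU(STYLE_DIM, STYLE_DIM, batch_first=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        _, hidden = self.rnn(self.embed(token_ids))
        return hidden[-1]  # (batch, STYLE_DIM) style embedding

class Synthesizer(nn.Module):
    """Stand-in TTS model conditioned on the style embedding."""
    def __init__(self, vocab_size: int = 10_000, mel_dim: int = 80):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, STYLE_DIM)
        self.decoder = nn.GRU(2 * STYLE_DIM, STYLE_DIM, batch_first=True)
        self.to_mel = nn.Linear(STYLE_DIM, mel_dim)

    def forward(self, text_ids: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        text = self.embed(text_ids)                          # (B, T, D)
        style = style.unsqueeze(1).expand(-1, text.size(1), -1)
        out, _ = self.decoder(torch.cat([text, style], dim=-1))
        return self.to_mel(out)                              # mel-spectrogram frames

# Usage: tokenization is elided; random ids stand in for real tokens.
style_vec = StyleEncoder()(torch.randint(0, 10_000, (1, 6)))   # "(sad but slightly mad)"
mel = Synthesizer()(torch.randint(0, 10_000, (1, 12)), style_vec)
print(mel.shape)  # torch.Size([1, 12, 80])
```

The key design point the article describes is that the style input is a learned embedding rather than a categorical label, which is what lets arbitrary free-text descriptions map onto a continuous space of delivery styles.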

Users can write any text to describe emotions in parentheses, such as "(so happy and loudly saying)" or "(sad, just like his dad had passed away)," rather than selecting from a list of simple emotions such as "happy" or "sad." And, as always in Typecast, they can control the speed of speech, rhythm, and emphasis on specific words through the same natural-language instructions. Depending on the emotional characteristics of the vocalization, the virtual actor's facial expression is adjusted as well.
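The article does not say how the facial-expression adjustment is coupled to the voice. One plausible design, sketched below with hypothetical names, is to reuse the same style embedding that conditions speech to drive a small head predicting facial blendshape weights, so face and voice stay consistent by construction.

```python
import torch
import torch.nn as nn

class ExpressionHead(nn.Module):
    """Maps a style embedding to facial blendshape weights in [0, 1].

    Illustrative assumption only; the article does not specify how
    Typecast couples vocal style to facial expression.
    """
    def __init__(self, style_dim: int = 256, num_blendshapes: int = 52):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(style_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_blendshapes),
            nn.Sigmoid(),  # per-blendshape weights, e.g. brow or mouth shapes
        )

    def forward(self, style: torch.Tensor) -> torch.Tensor:
        return self.proj(style)

weights = ExpressionHead()(torch.randn(1, 256))
print(weights.shape)  # torch.Size([1, 52])
```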

"Typecast's new emotion style control capabilities are unlike anything we've seen; we're so glad we can save so much money and time on casting real actors and shooting in a studio," said one Typecast user who runs a YouTube channel with more than 200,000 subscribers. "By inserting a few simple directions, our virtual actors can express our intent completely. Now we can use virtual actors in many more types of projects without compromising quality, which will save us significant time and money since we no longer have to work around the schedules and cost constraints associated with human actors."


