Evolving Speech Technologies: The System Behind the InnerVoice Unreal App
Elara Systems collaborated with iTherapy to develop a new mobile communication training tool for children with autism. The app, InnerVoice Unreal, creates an engaging and entertaining space for both neurotypical and neurodiverse children to find their voice. Powered by Unreal Engine, Elara's developers focused on bringing iTherapy's communication training to a mobile platform.
InnerVoice Unreal was designed as a mobile application dedicated to improving the quality of life for people who struggle with communication challenges. Children with autism can often find it difficult to learn the intricacies of speech through face-to-face interactions due to a reluctance to make eye contact or engage in interpersonal communication. To address this, InnerVoice aims to provide a communication and speech therapy application that bypasses those behavioral barriers. Through this tool, users can tap emojis and type out text to make 3D characters speak the selected words and emote back to them, helping to develop valuable communication skills. To bring the app to life, Elara's foundational solution required creating customizable 3D animated avatars that replicate human emotions, facial expressions, and speech movements with the accuracy and efficacy speech therapy demands.
Avatar Customization
The ability for users to customize their avatar's voice and character was integral to building a strong foundation for this speech therapy application. Stellar character modeling was at the root of the initial push for development, with creative minds focused on depicting a variety of friendly, child-like faces. Alongside this creative iteration, Elara's engineers worked to integrate Amazon's cloud services into Unreal Engine 4, giving them access to the latest in modulated voice-synthesis technology. The resulting framework proved to be an important first step toward success.
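The article does not name the specific Amazon service; a likely candidate is Amazon Polly, which can return "viseme" speech marks alongside synthesized audio in a documented newline-delimited JSON format. A minimal sketch of consuming that format, assuming Polly-style speech marks (this is illustrative, not Elara's actual integration):

```python
import json

def parse_viseme_marks(speech_marks: bytes) -> list:
    """Parse newline-delimited JSON speech marks into a
    (time_in_ms, viseme_code) timeline, keeping only viseme events."""
    timeline = []
    for line in speech_marks.splitlines():
        if not line.strip():
            continue
        mark = json.loads(line)
        if mark.get("type") == "viseme":
            timeline.append((mark["time"], mark["value"]))
    return timeline

# Example payload shaped like Polly's documented speech-mark output:
sample = b"\n".join([
    b'{"time":0,"type":"viseme","value":"p"}',
    b'{"time":125,"type":"viseme","value":"E"}',
    b'{"time":210,"type":"viseme","value":"t"}',
    b'{"time":302,"type":"viseme","value":"sil"}',
])
print(parse_viseme_marks(sample))
# [(0, 'p'), (125, 'E'), (210, 't'), (302, 'sil')]
```

Pairing synthesized audio with a timeline like this is what lets an engine drive mouth shapes in sync with the generated voice.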
Voice-Synthesis
When bringing these characters to life, our animators had to pay close attention to how the face and mouth move as each new sound is formed. Referred to as visemes, these distinct shapes are the key behind comprehensible lip reading. After extensive work to ensure the shapes read clearly, the visemes were paired with the phonetic enunciation of each character's speech. The result was a robust and reliable system allowing the procedural application of speech movements, maximizing accuracy and visual clarity.
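The "procedural application of speech movements" described above can be sketched as a timeline lookup: given viseme events with start times, find which mouth shape should be posed at any playback moment. The millisecond timeline format and function names here are illustrative assumptions, not Elara's implementation:

```python
import bisect

def active_viseme(timeline, t_ms, default="sil"):
    """Return the viseme code active at playback time t_ms.

    timeline: list of (start_time_ms, viseme_code), sorted by time.
    Each viseme holds until the next event begins; before the first
    event (or with an empty timeline) the default 'silence' shape is used.
    """
    if not timeline:
        return default
    times = [start for start, _ in timeline]
    i = bisect.bisect_right(times, t_ms) - 1
    if i < 0:
        return default
    return timeline[i][1]

timeline = [(0, "p"), (125, "E"), (210, "t"), (302, "sil")]
print(active_viseme(timeline, 150))  # 'E'
print(active_viseme(timeline, 400))  # 'sil'
```

In an engine, this lookup would run each frame and the returned code would select (or crossfade between) the corresponding mouth-shape poses.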
Dynamic Emotions
Phonetic enunciation, however, was only one portion of what the InnerVoice Unreal App aimed to depict. Many children with autism struggle to learn facial expressions and speech articulation independently. As such, our animators were tasked with giving the characters an extensive range of emotions, which could then be blended into the visemes.
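Blending emotions into visemes can be modeled as combining two sets of morph-target (blendshape) weights, with the viseme keeping priority over mouth shapes so speech stays readable. The target names and blend rule below are purely illustrative assumptions:

```python
def blend_expression(viseme_weights, emotion_weights, emotion_strength=0.6):
    """Combine viseme mouth-shape weights with emotion weights.

    Morph targets driven by the viseme keep their full value so the
    lip-sync stays legible; remaining emotion targets are scaled by
    emotion_strength and clamped to the valid [0, 1] range.
    """
    blended = dict(viseme_weights)
    for target, weight in emotion_weights.items():
        if target in blended:
            continue  # the viseme owns this morph target
        blended[target] = min(1.0, weight * emotion_strength)
    return blended

viseme = {"jaw_open": 0.8, "lips_wide": 0.4}                 # mouth shape for one viseme
happy = {"brow_raise": 0.7, "cheek_raise": 0.9, "lips_wide": 1.0}
print(blend_expression(viseme, happy))
```

Real pipelines typically blend with per-region masks and smoothing curves rather than a single scalar, but the priority idea is the same: emotion shapes the face while the viseme governs the mouth.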
When all of the application's various components are pieced together, the InnerVoice Unreal application provides a fun, simple system that is easy to learn and navigate. Its procedural lip-syncing and speech synthesis make for a powerful tool when combined with its approachable, stylized characters. Elara Systems made great use of Epic Games' Unreal Engine platform to deliver optimized visuals in a mobile-friendly application. With a strong background in VR technology and 3D animation, Elara is proud to support iTherapy in its progress toward evolving, evidence-based practices.