Sensory Unveils VoiceHub Portal for Fast, Free, Flexible Wake Word and Voice UI Model Development
VoiceHub’s intuitive interface allows users to quickly build vocabularies in dozens of languages with no coding experience required
Sensory, a Silicon Valley company pioneering AI at the edge and the de facto standard for enabling a voice UI on apps and devices, today unveiled VoiceHub, a convenient online portal that enables developers to quickly create wake word models and voice control command sets for prototyping and proof-of-concept purposes. Designed with flexibility in mind, VoiceHub allows users to easily select languages and model sizes through drop-down menus.
Sensory’s VoiceHub empowers developers with free tools to immediately create custom wake words and voice command sets for their applications. These projects take just moments to put together, and some models are trained and downloadable within an hour of submission. VoiceHub outputs wake word and voice command set models compatible with a companion Android application for quick prototyping, or as code for specific target DSPs for more advanced proof-of-concept testing. The tools offer vast flexibility, allowing developers to create wake word models, either custom branded or based on today’s most popular voice assistant platforms, and command set models targeting a desired memory footprint. This makes VoiceHub well suited to a wide range of applications, from ultra-low-power, resource-limited wearables to high-power, high-performance appliances on the edge.
Based on Sensory’s industry-leading TrulyHandsfree technology, the same technology powering the voice user experience on over 1 billion apps and devices, VoiceHub supports numerous languages, making it ideal for testing voice control across global product lines. Additionally, because VoiceHub trains voice models similarly to TrulyHandsfree, the wake word and voice control models created in VoiceHub are highly accurate and, in most cases, suitable for mass production. Testing conducted by Vocalize.ai indicates that VoiceHub-generated wake words deliver accuracy and performance equal to or better than industry benchmarks such as Amazon’s Alexa.
“Sensory applied decades of experience and lessons learned with shallow net technologies to create its highly accurate machine learning models, which are trained with a mix of real and probabilistically derived synthetic data,” said Todd Mozer, CEO at Sensory. “VoiceHub benefits from all of this work and removes any friction related to developing voice UIs for testing purposes. Furthermore, our VoiceHub models set the bar very high for accuracy and overall performance. We are excited to share these tools and capabilities with the speech tech community and beyond, and believe VoiceHub will serve as a catalyst for rapidly accelerated innovation of new voice-enabled experiences.”
Over the last few months, Sensory has been previewing VoiceHub and its capabilities with several high-profile speech technology companies, and the response has been overwhelmingly positive. With this announcement, interested parties may now request an invitation to access VoiceHub for their own prototyping and proof-of-concept projects.
Sensory will be presenting an overview of VoiceHub, complete with demos, at Voice 2020 later this month. VoiceHub users can expect a steady stream of updates and new features, including support for more languages, expanded DSP platform support, and the ability to quickly develop large-vocabulary natural language models.
“Many of the brands building on Audio Weaver have been using Sensory’s TrulyHandsfree technology to power voice UI for years, so we anticipate a lot of interest in VoiceHub,” said Chin Beckmann, CEO of DSP Concepts. “Adding VoiceHub to Audio Weaver gives brands the ability to integrate custom voice commands more efficiently than ever, instantly bringing the full vision of their branded customer experience to life from the prototype phase through to deployment. Voice UI and voice assistants are only beginning their takeover of all the devices that surround us and, with this integration, Audio Weaver and Sensory will be at the center of it all.”
“Sensory’s new VoiceHub significantly lowers the threshold for developers at brands and businesses to customize wake words,” explains Dan Miller, lead analyst at Opus Research. “These audio cues are becoming key to their customer retention and branding strategies.”
“Voice control is now established as an essential feature across Consumer Electronics, with the installed base of ‘built-in’ virtual assistants expected to rise 12% YoY to reach 3.8 billion devices by the end of 2020,” says Simon Forrest, principal technology analyst at Futuresource Consulting. “The audio processing elements of virtual assistants are maturing, and development focus is now upon enhancing language models to follow a pathway towards conversational assistants. With edge-AI performance increasing to allow language models to run locally, alongside emerging trends towards integrating domain-specific assistants, the market for built-in voice remains robust and is predicted to double over the next five years. VoiceHub intersects neatly with these trends, enabling the expansion of virtual assistants across languages and domains.”