Magic Data Launches Conversational AI Datasets for Machine Learning
Magic Data, a global AI data service provider, has released an accumulation of more than 200,000 hours of training datasets, comprising 140,000 hours of conversational AI speech data and 60,000 hours of read speech data. The datasets cover Asian languages, English dialects, and European languages, supporting the rapid development of human-computer interaction in artificial intelligence.
Why conversational AI datasets?
Experiments show that conversational data yields better performance in ASR machine learning. Magic Data's R&D Center compared conversational speech data with read speech data: 3,000 hours of each were used to train Automatic Speech Recognition (ASR) models for customer service, broadcasting, and navigation command scenarios. Compared with read speech data, conversational speech data improved word accuracy by up to 84% in relative terms.
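For readers unfamiliar with how a "relative" accuracy improvement is calculated, the sketch below shows the standard formula. The numeric values are hypothetical placeholders chosen to illustrate an 84% relative gain; they are not the actual figures from Magic Data's experiments.

```python
def relative_improvement(baseline_acc: float, new_acc: float) -> float:
    """Relative gain of new_acc over baseline_acc, as a fraction of the baseline."""
    return (new_acc - baseline_acc) / baseline_acc

# Hypothetical example: a model trained on read speech reaches 50% word
# accuracy on a test set, while a model trained on conversational speech
# reaches 92% on the same set.
gain = relative_improvement(0.50, 0.92)
print(f"relative word-accuracy improvement: {gain:.0%}")  # prints "relative word-accuracy improvement: 84%"
```

Note that a relative improvement is measured against the baseline score, so it can exceed the absolute percentage-point difference between the two models.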
In addition, Magic Data's R&D Center conducted experiments with 3,000 hours and 30,000 hours of conversational training data. The results show that word accuracy rises as more conversational data is used.
Magic Data applies a series of measures to ensure data compliance and transparency. Its internal processes follow industry security standards and are GDPR compliant as well as ISO 27001 and ISO/IEC 27701:2019 certified.
Magic Data Tech is a leading global multi-modal AI data services provider. The company provides professional data services to enterprises and academic institutions engaged in artificial intelligence R&D on ASR, TTS, and NLP, with service fields covering finance, automobiles, social networks, smart home devices and systems, and end-user devices.