Despite the growing presence of deepfake voice fraud, Americans believe they can detect a computer-generated voice pretending to be a human voice; research published by ID R&D as the biometric firm prepares to demo voice anti-spoofing solutions at CES
Despite the growing presence and impact of deepfake voice fraud, more than one-third of Americans (36%) were confident that they would be able to detect a computer-generated voice pretending to be a human voice, according to new survey results released today by ID R&D. Just 30% of Americans were not confident they would be able to detect the difference and prevent cybersecurity crimes.
27% of Americans Would Engage with Chatbots More If They Could Be Sure of a Secure Transaction
- Over a quarter of Americans (27%) would prefer to log into devices and accounts using their voice instead of having to enter alphanumeric passwords. When asked if they’d prefer a highly secure but easy-to-use method (voice or face biometrics) of logging in to access sensitive applications and accounts instead of an alphanumeric password or PIN, the number jumps to 40%.
- A third of Americans (33%) would use Alexa or another home voice assistant to access account information if they were confident the transaction was secure.
- Over a quarter of Americans (27%) would engage with chatbots more if they could be sure of a secure transaction.
- Nearly a third (32%) would use chatbots more if they knew they could conduct an entire transaction without being transferred to a customer service representative.
Commissioned by ID R&D, a provider of AI-based biometrics and voice and face anti-spoofing technologies, the research underscores the need for better and more effective consumer education on the threat of deepfake voices and the technology capable of combating it. The research comes at a time of increased frequency of voice deepfakes: computer-generated or synthetic speech used to impersonate a human. ID R&D will be demonstrating its biometric solutions designed to prevent spoofing attacks involving deepfakes and recordings at the Consumer Electronics Show (CES) from January 7-10, 2020 at Booth #51112, in Tech West at the Sands Expo, Hall G.
Fraudsters are deploying voice deepfakes at an increasing rate to impersonate authorized human voices. Company executives have already been cheated out of hundreds of thousands of dollars by voice deepfakes created to impersonate other executives.
Bad actors utilizing artificial intelligence (AI) and machine learning (ML) are able to generate deepfake voices that are indistinguishable from authentic speech to the human ear, and, as recent research suggests, to the human brain as well. Researchers from the University of California Riverside, the University of Alabama at Birmingham, and Syracuse University recently presented findings at the 2019 Network and Distributed System Security Symposium (NDSS) suggesting there may be no significant differences in how the brain processes authentic human speech and morphed speech (2*). Synthetic speech is not only largely undetectable to the human ear at a conscious level, but it may also be indistinguishable at the neural level.
The Best Way to Protect Against the Threat Is the Development and Deployment of AI-Based Anti-Spoofing Countermeasures
As criminal use of voice deepfakes increases, the best way to protect against the threat is the development and deployment of AI-based anti-spoofing countermeasures that can detect what humans cannot. Founded in 2016, ID R&D develops AI and ML biometric authentication solutions and anti-spoofing capabilities for mobile, web, and IoT applications. With a research and development team that has decades of biometric experience, the company aims to make authentication simple yet highly secure, through a process that confirms not just the identity of the user, but that the user isn’t being impersonated by a deepfake voice or face. ID R&D recently finished first in the 2019 ASVspoof Challenge, a global competition to distinguish synthetic speech from human speech. It’s the second time the company’s technology has earned the #1 distinction.
ID R&D’s research also found that despite the confidence that users have in being able to detect a deepfake voice, 66% of US adults agree they would be concerned that voice biometric technology could allow someone to impersonate their voice and gain access to their accounts.
“This research shows that the biometric industry has a lot of work to do to educate consumers around legitimate security issues in voice technology,” said Alexey Khitrov, ID R&D President. “Juniper Research estimates that there will be 8 billion digital voice assistants in use by 2023, yet our research suggests that despite such strong adoption, concerns about security are curbing what could be even stronger growth. Those of us in the biometric industry have a responsibility to educate consumers about the risks of deepfakes and synthetic voice, but also a real opportunity to educate consumers about the many benefits of biometrics, including improved security. Just as consumers in the early 1990s were suspicious about online commerce but now can’t imagine life without it, we believe that once users learn how biometrics can better protect their data and accounts while delivering an all-around better experience across all applications, voice technology will see exponential growth.”
ID R&D’s research was conducted online by YouGov from December 4-5, 2019. The figures have been weighted and are representative of all US adults (aged 18+). The survey aimed to evaluate consumers’ attitudes, preferences, and behavior regarding voice technology.
ID R&D is a provider of multimodal biometric security solutions headquartered in New York, NY. With extensive experience in biometrics, ID R&D combines science-driven technological capabilities with leading research and development to deliver seamless authentication experiences. ID R&D’s solutions are available for easy integration with mobile, web, and IoT applications, as well as in smart speakers, set-top boxes, and other IoT devices.