
Resemble AI Launches Deepfake Detector and Announces $8 Million Series A Round Led by Javelin Venture Partners

Resemble AI, a leader in generative voice artificial intelligence, announced the release of its deepfake audio detector, Resemble Detect, and an $8 million Series A round led by Javelin Venture Partners, with participation from Comcast Ventures and follow-on investment from Craft Ventures, Ubiquity Ventures and Qasar Younis. The funds will be used to continue delivering safe deployment of generative AI to the enterprise. Resemble AI's Speech-to-Speech voice conversion is also now generally available.


“We’re proud to support Resemble AI’s safe deployment of generative AI for the enterprise,” says Alex Gurevich, managing director at Javelin Venture Partners. “Their track record of enabling global entertainment and technology companies’ ethical use of voice AI is unparalleled across the industry.”

Now with more than 1 million users and 35 years' worth of audio generated in the last 12 months, Resemble AI is focused on addressing AI fraud and safety concerns. Resemble Detect validates the authenticity of audio data to expose deepfake audio in real time with up to 98% accuracy. The new offering analyzes audio across all forms of media and against all modern generative AI speech synthesis solutions.

Founded in 2019 with ethics as a core pillar, Resemble AI's Ethical Statement prioritizes safety in building generative AI. In addition to requiring explicit user consent to clone voices, strict usage guidelines are enforced to prevent malicious use. Earlier this year, the company released another safeguard for enterprise customers, the PerTh Watermarker, which can detect whether a Resemble AI-produced audio file has been manipulated. Now, Resemble Detect adds a further layer of security for those consuming AI-generated content.

“Our mission is to make interactions with digital products as human and natural as possible,” said Zohaib Ahmed, Co-Founder and CEO of Resemble AI. “We want to make AI tools available to more people, but we also acknowledge that we can’t talk about AI without talking about ethics. We’re proud to release Resemble Detect, another solution in our toolkit to ensure legitimacy.”

Resemble Detect uses a state-of-the-art deep neural network trained to distinguish fake audio from real audio. Here's how it works:

  • Deepfake audio clips can contain subtle differences that are inaudible to the human ear. The sonic material that results from editing or manipulating sound, referred to as artifacts, resides in the audio data. Resemble Detect identifies these artifacts, so any amount of inserted or altered audio can be detected with 98% accuracy.
  • Resemble Detect has learned its own version of a spectrogram, allowing it to see the different frequencies where artifacts could be contained. This embedding contains specific time and frequency information that allows it to make an optimal prediction of whether an audio file is real or fake.
  • The audio data is then processed and run through a deep learning model, which outputs the likelihood that the audio is fake on a scale from 0 to 1, as sketched in the example below.
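For a rough sense of the pipeline the bullets describe, here is a minimal, illustrative sketch. It is not Resemble AI's implementation: the classifier file detector.pt is hypothetical, and a fixed mel spectrogram stands in for the learned, spectrogram-like embedding mentioned above.

```python
# Illustrative sketch only -- not Resemble AI's implementation.
# Assumes a hypothetical pretrained binary classifier saved as "detector.pt".
import torch
import torchaudio


def detect_deepfake(path: str, model_path: str = "detector.pt") -> float:
    """Return an estimated probability (0..1) that the audio file is fake."""
    waveform, sample_rate = torchaudio.load(path)      # load the audio data
    if waveform.shape[0] > 1:                          # mix down to mono
        waveform = waveform.mean(dim=0, keepdim=True)

    # A time-frequency representation; here a mel spectrogram stands in for
    # the learned embedding that Detect is described as using.
    spectrogram = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_mels=80
    )(waveform)

    model = torch.jit.load(model_path)                 # hypothetical model file
    model.eval()
    with torch.no_grad():
        logit = model(spectrogram.unsqueeze(0))        # assumed single-logit output
    return torch.sigmoid(logit).item()                 # likelihood of being fake


print(f"Probability of deepfake: {detect_deepfake('clip.wav'):.2f}")
```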

Resemble Detect is a market-ready solution for those in need of robust authentication, strong performance and low-complexity integration. The seamless Resemble APIs make it simple for developers to integrate Detect.
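For developers calling the service over HTTP, an integration could look roughly like the sketch below. The endpoint URL, authentication header and response fields shown here are assumptions for illustration only; the actual contract is defined by Resemble AI's API documentation.

```python
# Hypothetical integration sketch -- endpoint, auth scheme and response shape
# are assumptions, not the documented Resemble API.
import requests

API_KEY = "YOUR_RESEMBLE_API_KEY"                       # placeholder credential
DETECT_URL = "https://app.resemble.ai/api/detect"       # assumed endpoint


def check_audio(file_path: str) -> dict:
    """Submit an audio file and return the service's authenticity verdict."""
    with open(file_path, "rb") as audio_file:
        response = requests.post(
            DETECT_URL,
            headers={"Authorization": f"Token {API_KEY}"},
            files={"audio": audio_file},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()           # e.g. {"label": "fake", "score": 0.97} (assumed)


print(check_audio("suspect_clip.wav"))
```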


In addition to Detect, enterprises can now protect their data from copyright infringement with the PerTh Watermarker. Resemble AI's PerTh neural speech watermarker is designed to remain detectable even after the watermarked audio is used to train another speech synthesis model, providing an additional layer of security and allowing modifications to the audio file to be detected. This ensures that enterprises retain control over their data and can protect their IP from copyright infringement.
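As a purely conceptual illustration of why an embedded watermark lets modifications be flagged (PerTh's actual algorithm is proprietary and not shown here), the toy sketch below adds a key-derived pseudorandom signature to a signal and verifies it block by block, so an edited region fails its check. The signature strength is exaggerated to keep the toy demo statistically clear.

```python
# Toy watermark illustration -- not PerTh's algorithm. A key-derived signature
# is embedded in the signal and verified per block; an edited block fails.
import numpy as np

N, BLOCK, STRENGTH, KEY = 48_000, 8_000, 0.08, 1234


def signature(key: int, n: int) -> np.ndarray:
    return np.random.default_rng(key).standard_normal(n)


def embed(audio: np.ndarray, key: int) -> np.ndarray:
    return audio + STRENGTH * signature(key, audio.size)


def verify(audio: np.ndarray, key: int) -> list[bool]:
    """Return a pass/fail flag per block: False marks a modified region."""
    sig = signature(key, audio.size)
    flags = []
    for start in range(0, audio.size, BLOCK):
        a, s = audio[start:start + BLOCK], sig[start:start + BLOCK]
        score = float(np.dot(a, s)) / s.size    # ~STRENGTH if intact, ~0 if edited
        flags.append(score > STRENGTH / 2)
    return flags


rng = np.random.default_rng(0)
marked = embed(rng.standard_normal(N), KEY)              # stand-in for generated speech
tampered = marked.copy()
tampered[2 * BLOCK:3 * BLOCK] = rng.standard_normal(BLOCK)  # splice in new audio

print("intact:  ", verify(marked, KEY))                  # all blocks pass
print("tampered:", verify(tampered, KEY))                # the edited block is flagged
```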

Resemble AI has also made significant improvements to its Speech-to-Speech model, increasing accuracy and robustness, and it is now available to all customers. With real-time voice conversion, natural-sounding AI voices can be created instantly by developers and creators across gaming, entertainment, interactive voice response (IVR), and more. AI voices can perform a wide range of emotions, speaking styles, and languages.


