
AiThority Interview with Zohaib Ahmed, co-founder and CEO at Resemble AI

Zohaib Ahmed, co-founder and CEO at Resemble AI, discusses the benefits of adopting AI-powered fraud prevention strategies in this AiThority interview:

_______

Take us through Resemble AI's journey so far, and tell us more about your recent funding.

Resemble AI was founded in 2019 by Zohaib Ahmed and Saqib Muhammad, who recognized the need for a scalable, high-quality way to generate and manage AI-generated voice content. The company initially focused on creative industries such as gaming, media, and entertainment and launched tools that allowed users to create lifelike voices with just minutes of audio. As generative voice technology advanced, Resemble expanded into multilingual voice generation, speech-to-speech conversion, and real-time voice agents. At the same time, the team anticipated growing misuse risks and began building deepfake-detection, watermarking, and verification tools, ultimately evolving into a dual "create and protect" platform. Following an $8M Series A round in 2023, the company accelerated its shift toward enterprise-grade products, offering end-to-end solutions that combine generative voice creation with multimodal deepfake detection. Today, Resemble is the only platform securing enterprise generative AI from creation to distribution. The company has raised $25M in total from strategic investors such as Google's AI Futures Fund, KDDI Open Innovation Fund, Okta Ventures, and Sony Innovation Fund.

What near-future innovations can end users of the tool look forward to?

Earlier this month, Resemble AI launched DETECT-3B Omni, an enterprise-grade deepfake detection model that the company says achieves 98% accuracy across more than 38 languages. According to Resemble, the model currently ranks first on industry leaderboards for both image and speech deepfake detection, with 66% lower average error rates than competing systems. Next week, the company will open-source Chatterbox Turbo, a voice AI model designed for real-time applications. The model includes "paralinguistic prompting," meaning the ability to generate non-verbal sounds like sighs or laughter. What is really exciting about this new model, though, is that it runs at speeds up to 6x faster than real time on a GPU.
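
For developers planning to try Chatterbox Turbo once it is open-sourced, the sketch below shows how the existing open-source Chatterbox model is invoked today; the Turbo variant's API may differ, so treat the class name, the `generate` signature, and the package layout as assumptions based on the current release.

```python
# Minimal sketch based on the existing open-source Chatterbox release
# (https://github.com/resemble-ai/chatterbox); the Turbo variant's API
# may differ, so treat these names as assumptions.
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS

# Load the pretrained model onto a GPU for faster-than-real-time synthesis.
model = ChatterboxTTS.from_pretrained(device="cuda")

# Generate speech from text; the current release also accepts a reference
# clip for voice cloning via the audio_prompt_path argument.
wav = model.generate("Real-time voice agents need low-latency synthesis.")
ta.save("output.wav", wav, model.sr)
```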

How and why are more businesses opening up to generative AI in fraud prevention and security?

Deepfake-related fraud caused $1.56B in losses in 2025 alone, and generative AI is predicted to enable US fraud losses of $40B by 2027. The question is no longer whether generative AI deepfakes will impact your organization; it's whether you'll be ready when they do. The organizations that thrive in 2026 and beyond won't be those with the most sophisticated AI, but those that combine technological capability with human judgment, strong processes, and cultural vigilance. They'll view readiness, and adopting AI to fight AI threats, not as a burden but as a strategic advantage. They'll understand that in an age where seeing is no longer believing, trust must be systematically built and continuously verified.

Could you share a few thoughts on how leading businesses have effectively used AI-powered tools to mitigate serious threats in the recent past, and the top learnings for tech teams?

Real-time detection stops incidents before they spread. Large financial services firms have deployed multimodal deepfake detection to monitor inbound voice and video communications. Catching impersonation attempts in real time prevented executives from approving fraudulent transactions and blocked social-engineering attempts aimed at customer-service teams.
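
To make the real-time monitoring idea concrete, here is a minimal sketch that scores inbound call audio in short windows and raises an alert when a detector's synthetic-speech probability crosses a threshold. The `score_clip` stub and the 0.9 cutoff are hypothetical placeholders, not Resemble's actual API.

```python
import numpy as np

SAMPLE_RATE = 16_000
WINDOW_SEC = 2.0          # score the stream in 2-second windows
ALERT_THRESHOLD = 0.9     # hypothetical cutoff for "likely synthetic"

def score_clip(samples: np.ndarray) -> float:
    """Placeholder for a deepfake detector returning P(synthetic).
    A real deployment would call a model such as DETECT-3B Omni here."""
    return 0.0  # stub

def monitor_stream(stream):
    """Consume raw audio chunks and flag suspicious windows as they arrive."""
    window = int(SAMPLE_RATE * WINDOW_SEC)
    buffer = np.empty(0, dtype=np.float32)
    for chunk in stream:
        buffer = np.concatenate([buffer, chunk])
        while len(buffer) >= window:
            clip, buffer = buffer[:window], buffer[window:]
            if score_clip(clip) >= ALERT_THRESHOLD:
                yield "ALERT: possible synthetic speech on live call"
```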

Identity verification is becoming part of every high-risk workflow. Companies in sectors like telecom, insurance, and logistics now pair biometric verification or voice-print checks with AI detection models during sensitive interactions such as account recovery or high-value customer support calls. This has sharply reduced account-takeover attempts.
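
As an illustration of pairing voice-print checks with deepfake detection, the sketch below gates a high-risk action on both signals at once. The function, thresholds, and score sources are hypothetical, chosen only to show the shape of such a policy.

```python
from dataclasses import dataclass

# Hypothetical policy thresholds; real values would be tuned per workflow.
MATCH_MIN = 0.85      # minimum similarity to the enrolled voiceprint
DEEPFAKE_MAX = 0.10   # maximum tolerated P(synthetic) from the detector

@dataclass
class VerificationResult:
    voiceprint_match: float
    deepfake_score: float
    approved: bool

def verify_caller(voiceprint_match: float, deepfake_score: float) -> VerificationResult:
    """Approve a high-risk action only if the caller matches the enrolled
    voiceprint AND the audio does not look synthetic."""
    approved = voiceprint_match >= MATCH_MIN and deepfake_score <= DEEPFAKE_MAX
    return VerificationResult(voiceprint_match, deepfake_score, approved)

# A strong voiceprint match can still be a cloned voice: escalate instead.
print(verify_caller(0.93, 0.42).approved)  # False -> route to manual review
```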

AI improves incident response by catching subtle anomalies humans miss. Global enterprises have used AI-driven monitoring to uncover compromised video streams, altered audio snippets, or manipulated internal training content. These signals often reveal early-stage intrusion or misinformation campaigns that would otherwise go unnoticed.



Key Learnings:

  • Assume humans can’t reliably spot deepfakes. Verification must be automated and built into systems, not left to manual review.
  • Focus on multimodality, not single-signal detection. Threats now span audio, video, and text; models that analyze only one input are easy to bypass (see the fusion sketch after this list).
  • Treat verification as infrastructure. The most resilient businesses integrate AI verification into conferencing tools, identity workflows, and content pipelines rather than treating it as a one-off security add-on.
  • Continuously retrain and test models. Attackers evolve quickly; detection systems must update at the same pace to stay effective.
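
To illustrate the multimodality point from the list above, here is a small sketch that fuses per-modality detector scores rather than trusting a single signal. The weights and the fusion rule are illustrative assumptions, not a documented algorithm.

```python
def fuse_scores(scores: dict[str, float]) -> float:
    """Combine audio/video/text detector outputs (each a P(synthetic)).

    Uses a weighted average, but also keeps a scaled per-modality maximum
    so a single strongly-flagged channel cannot be averaged away."""
    weights = {"audio": 0.4, "video": 0.4, "text": 0.2}
    weighted = sum(weights[m] * s for m, s in scores.items())
    return max(weighted, 0.8 * max(scores.values()))

# A convincing video cannot hide a strongly flagged cloned voice.
print(fuse_scores({"audio": 0.95, "video": 0.10, "text": 0.05}))  # ~0.76
```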

Can you break down some myths surrounding generative AI backed verification for our readers?

"Deepfakes are already easy to spot; people just need more training." The truth is that human perception is no longer a reliable defense. A study found that individuals are generally unable to accurately determine the source of an image, which in turn affects their assessment of its credibility. Verification now requires machine-speed, machine-precision detection, not human judgment.

"Verification tools only work on one type of deepfake, like images or audio." In fact, modern attacks are multimodal. The same fraud attempt might involve a cloned voice, a fake video call, and falsified documents. The strongest verification systems, like DETECT-3B Omni, analyze audio, video, images, and text together, since deepfakes increasingly blend formats to bypass isolated defenses.

"Detection models can be fooled because attackers improve faster than defenders." This used to be true but is no longer accurate. The newest generation of multimodal detectors learns from the same generative advances attackers use. Today's leading models operate in real time, detect subtle artifacts across 30+ languages, and outperform typical adversarial techniques. Attackers iterate quickly, but defenders can now detect and respond faster.

Five thoughts on Artificial Intelligence and the future of business before we wrap up?

Here are a few ideas, some of which are included in our recent blog post about deepfake predictions for 2026.

Criminal groups are beginning to operate like fully optimized tech startups, assembling modular deepfake operations that include identity harvesting, voice cloning, video fabrication, and automated outreach. By 2026, we expect these capabilities to evolve into industrialized supply chains, enabling threat actors with no technical expertise to purchase end-to-end deepfake attack kits on demand. This shift will dramatically increase both the scale and accessibility of deepfake-enabled fraud, making enterprise-grade detection technology essential for every organization, not just high-risk targets.

Deepfake attacks targeting government officials in 2025 have pushed nations toward a point where real-time detection on official video calls becomes inevitable. We expect deepfake verification to shift from a recommended practice to a mandatory compliance standard, much like past transitions toward encryption and MFA. This regulatory shift could establish "governments" as the fastest-growing "industry" for AI detection tools. Once governments adopt mandatory verification, regulated industries such as healthcare and financial services are likely to follow.

In the enterprise world, generative AI has rendered traditional security awareness training ineffective, as employees can no longer reliably discern what is real. By 2026, the defining security gap will be between companies that modernize around verification, automation, and governance and those that continue relying on outdated training models.

Identity has become the frontline of AI security, as most modern attacks, from voice-authorized wire fraud to deepfake video impersonations, exploit identity systems not built for AI-era threats. Organizations that apply zero-trust principles to every human and machine identity will be the ones that remain resilient and compliant.

Finally, corporate deepfake insurance premiums will likely increase. As deepfake incidents accelerate, insurers are recalibrating risk models, and companies that invest in detection technology could reduce premiums enough to offset the cost of the tools themselves.


[To share your insights with us, please write to psen@itechseries.com]

Resemble AI is a leading generative voice AI company offering realistic text-to-speech (TTS), voice cloning, and real-time voice conversion. It enables users to create, customize, and control AI voices for applications like gaming, e-learning, and content creation, while also providing enterprise-grade security through deepfake detection and AI watermarking for ethical AI use.

Zohaib Ahmed is co-founder and CEO at Resemble AI.

 
