AiThority Interview with Dan Brahmy, CEO & Co-Founder of Cyabra
Dan Brahmy, CEO & Co-Founder of Cyabra, discusses Generative AI, online disinformation, the intersection of AI and cybersecurity, and more in this quick chat:
———-
Hi Dan, can you share your career journey and what led you to co-found Cyabra? What was the driving force behind starting a company focused on social threat intelligence?
I’m here because I deeply admire my co-founders. I had known Yossef Daar for a few years, trusted him, and believed in his vision. When the opportunity came to build something meaningful together, I didn’t hesitate. I wanted to be part of something with real purpose—something that could make a difference. Looking back, I realize how lucky I was to take that leap.
The driving force behind Cyabra was the urgent need for a solution that could cut through the noise of online narratives, detect harmful digital manipulation, and protect brands, governments, and individuals from disinformation. Our mission was—and still is—to bring transparency to the digital world by identifying threats before they escalate.
Which industries are most vulnerable to online disinformation today, and how does Cyabra help them stay protected?
No industry is immune to disinformation, but some sectors are particularly vulnerable. Corporate brands, national security, financial markets, and the political sphere are frequent targets. Disinformation campaigns can sway elections, crash stock prices, or damage reputations overnight.
Cyabra helps by using AI-powered technology to detect, analyze, and monitor these risks. Our platform identifies fake accounts, AI-generated content, bot networks, and coordinated influence campaigns, allowing organizations to take preemptive action. Whether it’s protecting a company from a viral smear campaign or preventing foreign interference in elections, our mission is to restore trust in digital conversations.
The rise of Generative AI has revolutionized many industries, but it also poses new threats. What are the most pressing dangers of GenAI in your view?
Generative AI has accelerated the speed, scale, and sophistication of disinformation. We are now seeing highly convincing deepfake videos, synthetic voice clones, and AI-generated propaganda that can mislead audiences at an unprecedented level.
The biggest danger lies in the ability of bad actors to create hyper-personalized disinformation, targeting individuals and communities with precision. This can lead to financial fraud, political manipulation, and reputational damage. As AI evolves, so must our defense mechanisms to counter these emerging threats in real time.
How does Cyabra’s AI adapt to emerging threats, particularly with the rapid evolution of GenAI-generated disinformation?
The key to combating evolving threats is adaptability. Cyabra’s AI continuously learns from new data, identifying emerging patterns in GenAI-generated content. We leverage deep learning, behavioral analysis, and real-time monitoring to differentiate between authentic and manipulated narratives.
Our system also integrates human expertise with AI detection, ensuring that as bad actors develop more sophisticated techniques, we stay one step ahead. The goal is not just to detect threats but to predict and prevent them before they cause harm.
We’d love to hear your thoughts on the intersection of AI and cybersecurity over the next five years, and what businesses must do to prepare for the next wave of AI-driven threats.
AI and cybersecurity will become inseparable in the coming years. As cyberattacks become more automated and AI-powered, businesses must embrace AI-driven defense mechanisms. Companies should invest in real-time threat intelligence, prioritize digital literacy, and develop proactive strategies to identify and neutralize threats before they escalate.
The next wave of AI-driven threats will be stealthier and more adaptive. Organizations that fail to leverage AI for security will find themselves at a disadvantage. It’s not just about responding to threats—it’s about predicting and preventing them.
Before we close, if you could debunk one common myth about AI-generated content or disinformation, what would it be?
One major myth is that AI-generated disinformation is easy to spot. While early deepfakes and AI-generated texts had clear telltale signs, today’s synthetic content is nearly indistinguishable from authentic material. Believing that “I can tell what’s fake” is dangerous and often leads to complacency.
The reality is that bad actors are always innovating. This is why companies, governments, and individuals need sophisticated tools—like Cyabra—to uncover and counter disinformation before it spreads.
Dan Brahmy is the CEO of Cyabra, bringing a wealth of experience from his previous roles as a senior strategy consultant at Deloitte Digital and an SMB Sales representative at Google EMEA.
Cyabra is leading the fight against disinformation. Our AI shields companies and governments by uncovering fake profiles, harmful narratives, and GenAI content.