AiThority Interview with Aaron Fulkerson, CEO, OPAQUE
Aaron Fulkerson, CEO, OPAQUE chats about the latest trends influencing AI adoption and deployment across organizations in this AiThority.com interview:
________
Hi Aaron, tell us about yourself and about OPAQUE.
I grew up on a little farm south of San Jose, CA, dialing into BBSs and building MUDs before most people had heard of the internet. That early fascination with connected systems led me to distributed systems research at Microsoft, then to founding MindTouch in 2005, a platform that still powers tens of millions of visitors a month, and later to ServiceNow, where I helped build one of their fastest-growing products.
The throughline’s always been the same: technology scales when people trust it.
OPAQUE is a Confidential AI company, born from UC Berkeley’s RISELab and founded by Ion Stoica (co-founder of Databricks), Rishabh Poddar, and Raluca Ada Popa, one of the world’s leading researchers in privacy and security, who also leads research at Google Gemini and DeepMind. We give enterprises cryptographic proof that their data stays private before, during, and after every AI workflow. Not promises. Not policies. Proof. That’s the piece that’s been missing, and it’s the reason most AI initiatives stall at the pilot stage.
What highlights would you like to share around your recent funding round and the near-term goals for the platform?
What we’re seeing across the enterprise landscape is a pattern I call the three stages of AI adoption: Sandbox, Plateau, Powerhouse. Most organizations have proven that AI works. But they’re stuck at the Plateau, unable to move proprietary data into production AI because they can’t verify it’s protected. That trust gap’s widening as AI becomes more autonomous.
This Series B signals that the market recognizes Confidential AI as essential infrastructure. We’re using the capital to extend our cryptographic guarantees across training and inference, build post-quantum security into the stack, and enable sovereign cloud deployments for organizations that need to keep data within specific jurisdictions.
The near-term goal is straightforward: help enterprises move from Plateau to Powerhouse — from AI running on sanitized data to AI running on the proprietary data that actually creates competitive advantage, with cryptographic proof that nothing leaks.
Based on your recent observations, what are some of the top factors affecting development and tech teams looking to deploy and scale AI workflows, and what challenges do they face when putting AI innovations into production?
The bottleneck isn’t the models. The models are extraordinary. The bottleneck is that the most valuable enterprise data, the proprietary, regulated, and competitively sensitive kind, sits behind a wall because security and legal teams can’t verify what happens to it during processing.
Here’s the gap. Encryption protects data at rest and in transit, but AI systems constantly process data, reason over it, generate outputs, and take actions. The moment data is “in use,” traditional encryption steps aside. That gap becomes enormous when you’re running agents across interconnected systems.
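To see that gap in miniature, here’s an illustrative Python sketch. It assumes the open-source `cryptography` package, and `run_inference` is a hypothetical stand-in for a model call, not OPAQUE’s API:

```python
# Illustrative only: encryption covers the record at rest and in transit,
# but any workload that reasons over it must first decrypt it in memory.
from cryptography.fernet import Fernet

def run_inference(data: bytes) -> str:
    # Hypothetical stand-in for a model or agent call; the point is
    # that whatever runs here sees raw plaintext.
    return f"processed {len(data)} plaintext bytes"

key = Fernet.generate_key()
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"patient_id=4821, diagnosis=...")  # protected at rest

plaintext = fernet.decrypt(ciphertext)  # "in use" begins: protection ends here
result = run_inference(plaintext)       # the model operates on exposed plaintext
```

Confidential computing closes that window by keeping the decrypted data inside attested hardware, which is the layer Confidential AI builds on.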
The math makes it concrete. If there’s even a 1% probability that a single agent leaks data, across 100 agents, you’re at a 63% probability of breach. At a thousand agents, 99.99%. You can’t manage that with policies and permissions alone. You need cryptographic enforcement at runtime, with verifiable proof of which code executed, which data was accessed, and whether every action stayed within approved bounds.
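For readers who want to check the compounding, here is a quick sketch, assuming independent agents with identical per-agent leak probability:

```python
# Probability that at least one of n independent agents leaks,
# given a per-agent leak probability p.
def breach_probability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 100, 1000):
    print(f"{n:>4} agents -> {breach_probability(0.01, n):.4%}")
# Output:
#    1 agents -> 1.0000%
#  100 agents -> 63.3968%
# 1000 agents -> 99.9957%
```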
What security and privacy pointers should tech teams and innovators keep in mind as they deploy more AI for their business goals and needs?
The most important shift for security teams: stop thinking about AI security the way you think about application security. For decades, we’ve built security around the assumption that humans are the actors. Humans are slow, contextual, and intentional. Agentic AI erases those assumptions. Agents operate at machine speed, across systems and tools, and can be manipulated by adversarial inputs in ways humans can’t. Picture a hospital network where an AI agent triaging patient records gets compromised; it’s not stealing one file, it’s silently reclassifying thousands of diagnoses before anyone notices. Or a bank where an agentic workflow with delegated trading authority starts executing decisions based on poisoned data. These aren’t hypotheticals; they’re the attack surfaces that exist the moment you give AI systems autonomy over sensitive operations.
There’s a dimension most teams underestimate: data exhaust. What used to be ephemeral noise (API logs, process metadata, interaction patterns) can now be analyzed to reconstruct proprietary logic and business intent. Think about a performance-car manufacturer whose assembly-line telemetry gets ingested by a vendor’s AI. That data exhaust alone could be enough to reverse-engineer trade secrets.
Three things to get right. First, build cryptographic policy enforcement into the architecture from day one, not as a bolt-on. Second, demand immutable audit trails of what every agent did, when, and under what constraints. Third, treat governance as a scaling enabler. The organizations that embed verification into their AI stack will move faster than those that treat it as a gate; when trust is built into the infrastructure, security and innovation stop competing.
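As a rough illustration of the first two points, here is a minimal sketch (hypothetical names and policy, not OPAQUE’s API) of a runtime policy check paired with a hash-chained, tamper-evident audit trail:

```python
import hashlib
import json
import time

# Hypothetical policy: which tools and datasets this workload may touch.
POLICY = {"allowed_tools": {"search", "summarize"},
          "allowed_data": {"claims_db"}}

audit_log = []  # append-only; each entry commits to the hash of the previous one

def _entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def record(agent: str, tool: str, dataset: str, allowed: bool) -> None:
    prev = audit_log[-1]["hash"] if audit_log else "genesis"
    entry = {"ts": time.time(), "agent": agent, "tool": tool,
             "dataset": dataset, "allowed": allowed, "prev": prev}
    entry["hash"] = _entry_hash(entry)  # altering any past entry breaks the chain
    audit_log.append(entry)

def enforce(agent: str, tool: str, dataset: str) -> None:
    allowed = tool in POLICY["allowed_tools"] and dataset in POLICY["allowed_data"]
    record(agent, tool, dataset, allowed)  # every attempt is logged, allowed or not
    if not allowed:
        raise PermissionError(f"{agent}: {tool} on {dataset} denied by policy")

enforce("triage-agent", "summarize", "claims_db")   # permitted and logged
# enforce("triage-agent", "export", "claims_db")    # would raise, and still be logged
```

In a real confidential-computing deployment, checks like this run inside attested hardware so the host can’t bypass them; the sketch only shows the shape of the idea.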
What are some of the most exciting AI innovations from the global landscape that you are most looking forward to?
The thing I’m most watching isn’t any single model; it’s the emergence of the agentic web. We’re building a second internet, and its primary actors won’t be humans clicking links but autonomous agents transacting and reasoning at machine speed. Everything new is something old moved up the stack: agentic architectures are microservices made intelligent, and AI gateways are load balancers for a new era. The patterns are familiar, but the implications are transformational.
What excites me most is the convergence of confidential computing and AI infrastructure. Just as HTTPS became the invisible default for the human web, confidential computing will become the default trust layer for the agentic web. We’re at the Netscape moment for AI trust. The technology exists. The standards are forming. And the regulatory signal’s unmistakable: DORA’s already live for EU financial services, the EU AI Act’s enforcement provisions are ramping up, and the SEC isn’t far behind. They’re all converging on the same demand: prove your AI handles data the way you say it does.
I’m also paying close attention to post-quantum cryptography. Quantum computing will eventually break current encryption, and the organizations building quantum-resilient protections into their AI infrastructure now are the ones who won’t be scrambling later—the window to get ahead of that curve is narrowing.
Five AI innovators (people/tech companies) you’d like to highlight in this conversation before we wrap up?
Najwa Aaraj — CEO of the Technology Innovation Institute and one of the most important figures in global AI research infrastructure. TII’s work on open-source models like Falcon and its investment in post-quantum cryptography are laying the foundations that the entire industry will build on. The commitment to open research at that scale, from Abu Dhabi, is reshaping where breakthrough AI work happens.
Ion Stoica — Co-founded Databricks, co-founded OPAQUE, and keeps turning Berkeley research into category-defining companies. His ability to see where infrastructure needs to go before the market catches up is extraordinary. The throughline from Spark to confidential AI is clearer than most people realize.
Raluca Ada Popa — Co-founder of OPAQUE and one of the world’s foremost researchers in cryptography and security, who also leads research at Google Gemini and DeepMind. The foundational research she’s led at Berkeley is what makes confidential AI possible. Full stop.
Lex Fridman — I love his podcast. What Lex does better than anyone is make deeply technical conversations accessible without dumbing them down. He’s created one of the most important platforms for AI discourse, long-form, rigorous, and genuinely curious. That kind of bridge between research and public understanding matters more than people give it credit for.
Rishabh Poddar — Co-founder of OPAQUE and a prolific researcher whose published work at UC Berkeley’s RISELab laid the cryptographic and systems foundations for confidential AI. His academic contributions span secure computation, encrypted databases, and privacy-preserving analytics, the kind of foundational research that doesn’t make headlines but makes entire product categories possible. What Rishabh built in the lab is now protecting enterprise data in production.
OPAQUE is a Confidential AI company. Born from UC Berkeley’s RISELab and founded by Ion Stoica, Rishabh Poddar, and Raluca Ada Popa, OPAQUE enables enterprises to safely run models, agents, and workflows on their most sensitive data. Its Confidential AI platform delivers verifiable runtime governance—cryptographic proof that data, models, and agent actions remain private and policy-compliant throughout every AI workflow. Customers and partners include ServiceNow, Anthropic, Accenture, and Encore Capital.
Aaron Fulkerson is CEO of OPAQUE, the Confidential AI company. He previously founded MindTouch, an enterprise knowledge platform powering tens of millions of visitors monthly, and served at ServiceNow, where he helped build two of the company’s fastest-growing products and business units. His career spans two decades of building enterprise platforms at the intersection of trust and technology.