AiThority Interview with Dr. Petar Tsankov, CEO and Co-Founder at LatticeFlow AI
Dr. Petar Tsankov, CEO and Co-Founder at LatticeFlow AI, chats about the importance of transparent AI governance in this AiThority interview:
___________
Could you briefly describe what LatticeFlow AI does and the core problem it solves in the enterprise AI lifecycle?
At LatticeFlow AI, we help enterprises build and deploy AI systems that are trustworthy, safe, and compliant, producing the technical evidence needed to prove it. We turn abstract governance and regulatory requirements into concrete, actionable technical controls.
As AI becomes embedded in critical business processes, companies need more than performance metrics: they need clear, independent validation that their models are also safe, secure, compliant, and ready for real-world use.
As the first company to publish a technical framework for the EU AI Act, LatticeFlow AI is recognized as a leader in enabling trustworthy, responsible, and compliant AI adoption at scale.
How do you define “trustworthy AI” in today’s enterprise context, and why is it a critical priority for CIOs and CTOs?
Trustworthy AI goes beyond ethical principles or mission statements. It’s about whether an AI system can be trusted to perform accurately and securely in the real world, under messy, shifting, high-stakes conditions.
The problem is that most enterprises still lack clear, evidence-based insights into how their AI systems behave outside of controlled environments. This creates a gap between governance principles and actual AI deployments. As a result, many models never make it into production because teams can’t prove they’re safe, robust, or reliable enough in practice.
For CIOs and CTOs, this isn’t just a compliance concern, but a major blocker to AI adoption and innovation. Without trust, backed by hard evidence, AI remains a prototype, not a business asset. That’s why enterprises are now prioritizing systematic AI validation: to understand where their models fall short, how to fix them, and how to demonstrate accountability to regulators, customers, and internal stakeholders.
In your view, what are the biggest gaps or risks organizations face today when deploying AI models in critical business functions? How does LatticeFlow AI address these specific challenges?
The core challenge is how to operationalize AI governance. Most organizations don’t have the deep technical controls needed to assess how an AI system performs under real-world conditions, how resilient it is to data shifts, or whether it introduces unacceptable risks like security vulnerabilities or biased outputs. As a result, AI deployment gets delayed, slowing down innovation.
LatticeFlow AI bridges this gap. We deliver use-case-specific evaluations that make AI performance and risk measurable, actionable, and governable. Our technology uncovers hidden failure modes, recommends effective mitigations, and provides the technical evidence needed to align with internal risk controls and regulatory requirements. That’s how we enable enterprises to move from uncertainty to trust, and scale AI adoption responsibly.
With increasing regulations around AI ethics and explainability, how is LatticeFlow helping enterprises ensure compliance and transparent AI governance?
As AI regulations like the EU AI Act gain traction, building accurate models is not enough: you need to prove they’re safe, secure, fair, and compliant.
At LatticeFlow AI, we help enterprises close this gap through three complementary solutions:
- COMPL-AI, co-developed with ETH Zurich and INSAIT and welcomed by the EU AI Office, is the first compliance-centered framework for evaluating GenAI models against the EU AI Act. It translates regulatory principles into actionable, technical evaluations.
- AI Insights builds on COMPL-AI to provide independent, evidence-based insights into the business readiness of GenAI models. Enterprises use AI Insights to understand where a model falls short and how to mitigate risks before deployment.
- AI GO! is the trusted AI governance and operations suite to accelerate AI adoption and innovation at scale. It connects business risks to deep technical controls, helping teams automate risk evaluation, monitoring, and mitigation across both internal and third-party AI systems.
Together, these solutions allow enterprises to move from high-level governance principles to measurable, auditable controls, so they can adopt and scale AI responsibly.
Where do you see the biggest innovation opportunities in AI trustworthiness over the next 3–5 years? How is LatticeFlow positioning itself to lead in that future?
The biggest opportunity lies in making AI trust actionable. Today, most organizations lack the technical evidence needed to validate whether models are high performing, safe, or compliant. This gap prevents many from deploying AI in critical business functions.
In the next few years, innovation will focus on:
- Translating governance into technical controls tailored to specific use cases.
- Automating risk assessment and mitigation across the AI lifecycle, from development to deployment.
At LatticeFlow AI, we’re addressing this by building the infrastructure to operationalize trust:
- COMPL-AI, the first framework to evaluate GenAI models against the EU AI Act.
- AI Insights, which delivers independent business-readiness evaluations of GenAI models.
- AI GO!, an AI governance and operations suite helping enterprises define and scale AI controls linked to performance, safety, and compliance.
The goal isn’t just to define trustworthy AI, but to enable it, at scale.
For CIOs and CTOs just beginning their AI adoption journeys, what critical steps would you recommend to ensure their AI initiatives are safe, reliable, and deliver business value?
Many AI initiatives don’t move forward, not because of technical limitations, but because organizations lack the evidence and controls needed to build trust. They get stuck at the point where internal or regulatory stakeholders ask: Can we really rely on this model in production?
To avoid that, I recommend three key steps:
- Define use-case-specific risk and performance criteria from the start.
- Generate independent, evidence-based insights to evaluate models against those criteria.
- Implement targeted controls and monitoring mechanisms to close gaps and prepare for safe scaling.
At LatticeFlow AI, we help teams move beyond uncertainty, turning abstract governance principles into concrete technical actions that unlock safe and responsible AI adoption.
Dr. Petar Tsankov is a researcher and entrepreneur in the field of Computer Science and Artificial Intelligence (AI). He graduated with a BSc degree in computer science from Georgia Tech in the United States, winning the Best Student in Computer Science Award. He is currently the co-founder and CEO of LatticeFlow AI, which has won several awards, including the Swiss AI Award in 2022 and the US Army Global AI Award in 2021, and was featured on the prestigious CB Insights AI100 List of Most Innovative AI Companies in 2022 and 2023.
LatticeFlow AI empowers enterprises to deploy AI systems that are high-performing, trustworthy, and compliant, bridging the gap between AI governance frameworks and technical operations.