
AIThority Interview With Rohit Agarwal, Founder & CEO of Portkey

Rohit Agarwal, Founder and CEO of Portkey
chats about the benefits of unified control planes in this catch-up with AIThority:

________

Hi Rohit, tell us about Portkey and the journey around your recent funding.

When we started Portkey three years ago, the AI conversation was almost entirely about models — which one was smarter, faster, less prone to hallucinations. We were thinking about something less glamorous: what happens after a company decides to actually run AI in production? Who handles the API failures, the rate limits, the spend? We built Portkey for the moment when a company realizes its entire operation now depends on a system it can’t fully see or control.

Since then, our team has been hard at work building an AI control plane for the enterprise. Today we process over 1 trillion tokens and 150 million+ AI requests daily and manage over $1 million in daily AI spend. We support 24,000+ organizations worldwide, including Fortune 500 enterprises across finance, pharma, and technology.

Our $15 million Series A funding was a big milestone for us, really solidifying the value of that initial vision and helping us to continue our momentum. This capital will help us scale our infrastructure and continue strengthening the governance layers necessary for the next frontier: autonomous agentic workflows that require even stricter reliability and budget guardrails.

What product goals and innovations would you like to discuss in this chat, and what can end users look forward to in the tool through 2026?

Our north star for 2026 is straightforward: make production-grade AI governance accessible to every team, not just the ones with big enough budgets to afford it.

We’ve taken a major step this year to make production AI infrastructure accessible: we open-sourced our enterprise-grade Gateway, bringing governance, observability, authentication, and cost controls into a single, unified layer available to every team. This release also includes the MCP Gateway, extending that same foundation to how AI agents interact with external tools and systems.

The idea is simple: as agents begin to query databases, trigger workflows, and take action across enterprise environments, those interactions need to be governed, observable, and secure by default — with one gateway that sits in the path of every request.
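That in-path pattern can be sketched in a few lines. This is an illustrative toy, not Portkey's implementation; the `ToolGateway` class and its allowlist are assumptions made for the example. The point it shows is structural: every agent tool call passes through one chokepoint that enforces permissions and records the call, whether or not it is allowed.

```python
# Hypothetical sketch of a tool-call gateway. Every agent action goes
# through one chokepoint that enforces an allowlist and records the call.
# Names (ToolGateway, allowed_tools) are illustrative, not a real API.

class ToolGateway:
    def __init__(self, allowed_tools):
        self.allowed_tools = set(allowed_tools)
        self.audit_log = []  # every attempted call, allowed or not, is recorded

    def call(self, tool_name, handler, **kwargs):
        allowed = tool_name in self.allowed_tools
        self.audit_log.append({"tool": tool_name, "args": kwargs, "allowed": allowed})
        if not allowed:
            raise PermissionError(f"tool '{tool_name}' is not permitted")
        return handler(**kwargs)

gateway = ToolGateway(allowed_tools=["query_db"])
result = gateway.call("query_db", lambda sql: f"ran: {sql}", sql="SELECT 1")
```

Because the audit entry is written before the permission check raises, even denied calls leave a trace, which is exactly the "observable by default" property the gateway is meant to provide.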


The bigger bet is on agentic governance. The industry is rapidly moving past AI-as-chatbot toward autonomous agents that actually do things — access data, execute transactions, make decisions with real consequences. That multiplies the risk surface in ways most organizations aren’t prepared for. We’re building real-time governance & resiliency — so agents can operate with full autonomy precisely because the infrastructure around them is inherently secure, observable, and accountable. The goal isn’t to slow agents down. It’s to make them safe enough to truly let go.

How are unified control planes more beneficial for enterprise-grade AI?

Here’s the pattern we see over and over: a company starts with one team running one model for one use case. Within a year, there are forty teams, six model providers, no shared security posture, and nobody who can answer the question “how much are we spending on AI?” without a two-week audit. It’s a natural consequence of decentralized innovation, but it becomes ungovernable fast.

A unified control plane solves this by sitting directly in the path of all AI traffic — not as a bolt-on dashboard, but as an active layer through which every request flows. Engineering gets reliability: intelligent routing, automatic failover, latency optimization. Finance gets accountability: real-time spend tracking, budget enforcement, chargeback attribution. And leadership gets answers — which is often the hardest thing to come by.

The real power is in what “in-path” means operationally. It means you can enforce data policies in real-time — preventing proprietary information from leaving the organization — without relying on humans to remember the rules. It means that when a provider goes down or changes their pricing overnight, the control plane handles the failover or cost optimization automatically, and business-critical functions keep running without anyone having to scramble. AI stops being a collection of fragile, disconnected API calls and starts behaving like a managed utility — which is what it needs to be if it’s going to be load-bearing infrastructure.
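The automatic-failover behavior described above can be illustrated with a minimal sketch. This is not Portkey's routing logic; the `route_with_failover` function and the toy providers are assumptions for the example, showing only the shape of in-path failover: try providers in priority order and fall through when one fails.

```python
# Illustrative in-path failover sketch (not a real implementation):
# the control plane tries providers in priority order and fails over
# automatically when one raises an error.

def route_with_failover(providers, prompt):
    """providers: ordered list of (name, callable) pairs."""
    errors = {}
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # provider outage, rate limit, timeout, etc.
            errors[name] = str(exc)
    raise RuntimeError(f"all providers failed: {errors}")

def flaky(prompt):
    raise TimeoutError("primary provider down")

def stable(prompt):
    return f"answer to: {prompt}"

used, answer = route_with_failover([("primary", flaky), ("backup", stable)], "hello")
```

The caller never sees the primary's outage; the request simply completes via the backup, which is what "business-critical functions keep running" means in practice.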

As companies look to integrate an AI layer across most functions, what tips and best practices should they keep in mind?

First — establish governance and budget guardrails before you scale, not after. It is significantly harder to retroactively claw back AI access or secure data once it has already permeated an organization. Centralized LLM provider management and automated cost controls should be table stakes from the start, not an afterthought.
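A budget guardrail of this kind is simple to put in the request path. The sketch below is a toy, with the `BudgetGuard` name and the flat per-request cost as assumptions; real systems estimate cost from token counts, but the enforcement point is the same: check before forwarding, not after the bill arrives.

```python
# Hedged sketch of a per-team budget guardrail: the gateway tracks spend
# and rejects requests once a daily cap would be exceeded. Illustrative only.

class BudgetGuard:
    def __init__(self, daily_cap_usd):
        self.daily_cap_usd = daily_cap_usd
        self.spent_usd = 0.0

    def charge(self, estimated_cost_usd):
        # Reject before the spend happens, rather than reconciling afterwards.
        if self.spent_usd + estimated_cost_usd > self.daily_cap_usd:
            raise RuntimeError("daily AI budget exceeded")
        self.spent_usd += estimated_cost_usd

guard = BudgetGuard(daily_cap_usd=1.00)
guard.charge(0.40)
guard.charge(0.40)  # fine: total 0.80 is under the cap
```

A third charge of the same size would push the total past the cap and be refused, which is the "automated cost control" behavior the tip calls table stakes.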

Second — plan for multi-model complexity. The best model today may be obsolete in six months, and the reality is that most enterprises are already running multiple models across multiple teams for multiple use cases — image generation, code completion, customer-facing chat. Each one you add increases operational complexity and the surface area for failure. Building on a flexible, provider-agnostic layer ensures you’re not locked into any single ecosystem and that reliability doesn’t degrade as you scale.
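What "provider-agnostic" means in code is that callers use one interface and each provider adapter translates to its own response shape. The sketch below is an assumption-laden toy (the two fake providers loosely mimic common response layouts, and `complete` is a made-up function), but it shows why swapping models becomes a configuration change rather than a code change.

```python
# Minimal provider-agnostic layer sketch. The two "providers" are fakes
# that loosely imitate common response shapes; names are illustrative.

def openai_style(prompt):
    return {"choices": [{"message": {"content": prompt.upper()}}]}

def anthropic_style(prompt):
    return {"content": [{"text": prompt.upper()}]}

# Each adapter hides its provider's response shape behind a plain string.
ADAPTERS = {
    "openai": lambda prompt: openai_style(prompt)["choices"][0]["message"]["content"],
    "anthropic": lambda prompt: anthropic_style(prompt)["content"][0]["text"],
}

def complete(provider, prompt):
    return ADAPTERS[provider](prompt)  # same call shape for every provider
```

Callers that only ever touch `complete` are insulated from provider churn: adding a new model means adding one adapter, not touching every call site.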

Finally, think about access at scale early. One thing we see consistently is that AI starts in pockets: one team, one use case. Then suddenly the question becomes, how do you roll this out to a thousand engineers, or ten thousand employees, without chaos? That transition from individual experimentation to org-wide deployment is where a lot of companies get caught flat-footed.

Can you talk about some of the most innovative AI developments from around the world that have piqued your interest and why?

What fascinates me most right now is the emergence of AI agent patterns that actually work — not as demos, not as research previews, but as tools people use every day and build real workflows around. We’ve spent years talking about autonomous agents in the abstract. In 2026, they arrived.


Take Claude Code, Anthropic’s agentic coding tool. It doesn’t just suggest code — it reads your entire codebase, plans multi-step changes across files, runs tests, opens PRs, and iterates on its own output. Apple just integrated it into Xcode. Developers are using it to ship entire features while they sleep. That’s not autocomplete. That’s an agent with genuine autonomy operating inside a professional workflow, with the guardrails to make it safe. It represents a new pattern: AI that is deeply embedded in a developer’s existing toolchain, operating with full context of the project, and taking consequential actions — not just generating text.

Then there’s OpenClaw — the open-source personal AI agent that went from an Austrian developer’s side project to 247,000 GitHub stars and adoption from Silicon Valley to Shenzhen in a matter of weeks. OpenClaw runs locally, connects to your messaging apps, calendars, and file systems, and executes tasks autonomously with persistent memory. People are using it to manage emails, automate workflows, and even build applications through natural language conversation on WhatsApp. Jensen Huang called it one of the most important software releases ever. Chinese tech giants are building entire product suites on top of it. A local government in Shenzhen has drafted policy to support its adoption.

But here’s why these developments are so interesting from an infrastructure perspective: both Claude Code and OpenClaw expose exactly the governance gap we’ve been talking about. These agents have system-level access — they can read files, execute commands, spend money, and interact with external services. OpenClaw has already faced security incidents, prompt injection vulnerabilities, and consent controversies. The more capable and autonomous agents become, the more critical it is to have an infrastructure layer that governs what they can access, what they can do, and what happens when something goes wrong. The agent era isn’t coming — it’s here. And the infrastructure to make it safe at enterprise scale is the urgent, unsolved problem.

For teams to make the AI they use or the AI workflows they build more truthful and accountable: what should be kept in mind?

Building truthful and accountable AI requires treating every AI interaction as something worth auditing.

The first thing teams should put in place is comprehensive logging and a systematic audit trail. Every decision an AI system makes should be traceable back to the specific request, the model version used, and the data it accessed. When something drifts or hallucinates — and it will — you shouldn’t have to guess why. That level of granular visibility transforms failures from embarrassing surprises into predictable data points you can actually learn from and fix.
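Wrapping every AI call so that the request, model version, and response are recorded before anything is returned is the core of such an audit trail. The sketch below is illustrative; `audited_call` and the record fields are assumptions, not a prescribed schema.

```python
# Illustrative audit-trail sketch: every AI call is wrapped so the request,
# model version, and response are recorded before anything is returned.

import time

AUDIT_TRAIL = []

def audited_call(model, version, prompt, call_fn):
    record = {"ts": time.time(), "model": model, "version": version, "prompt": prompt}
    response = call_fn(prompt)
    record["response"] = response
    AUDIT_TRAIL.append(record)  # persisted before the caller sees the response
    return response

out = audited_call("demo-model", "2026-01", "why is the sky blue?",
                   lambda p: "rayleigh scattering")
```

With the model version pinned in each record, a drifting answer can be traced to the exact model revision that produced it instead of being guessed at after the fact.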

The second is real-time guardrails in the critical path of your traffic. Run automated checks to scan outputs for sensitive data, toxicity, or factual inconsistencies before they reach an end user. A governance layer that sits between your models and your users creates a safety net that operates continuously.
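The placement of such a check matters more than its sophistication: it sits between the model and the user. The sketch below uses a deliberately toy PII pattern (an SSN-like regex) as a stand-in; real deployments use richer classifiers, but the in-path structure is the same.

```python
# Hedged sketch of an in-path output guardrail: responses are scanned
# before reaching the end user. The SSN-like regex is a toy stand-in
# for a real PII/toxicity classifier.

import re

SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy PII pattern

def guard_output(text):
    if SENSITIVE.search(text):
        return "[response blocked: sensitive data detected]"
    return text

safe = guard_output("The capital of France is Paris.")
blocked = guard_output("The customer's SSN is 123-45-6789.")
```

Because the guard runs on every response, the safety net operates continuously rather than depending on spot checks or on users reporting leaks after the fact.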

The third — and this one gets overlooked — is access governance. Accountability isn’t just about what the AI outputs. It’s about who has access to what, under what conditions, and whether you can revoke that access instantly if something goes wrong. As AI rolls out across an entire organization, the question of who authorized what becomes just as important as what the model actually said.

Ultimately, accountability is an infrastructure problem as much as it is a model problem. You can have the most accurate model in the world and still have no accountability if you can’t see what it’s doing.

Five last thoughts on AI before we wrap up?

  • AI is now the load-bearing wall:

AI is no longer a side project or a shiny demo; it has become a load-bearing wall for modern enterprise operations. When your customer support, underwriting, and dev tools run on LLMs, organizations can’t afford to dismiss failures as “bugs” when they can cause business-threatening outages.

  • The biggest AI challenges are actually engineering challenges:

While the media focuses on hallucinations and bias, the real-world killers of AI projects are basic infrastructure gaps like rate limits, silent API failures, provider volatility, and runaway spend. Success isn’t about having the best model; it’s about having the most resilient system to run it.

  • Governance is the secret to velocity:

There’s an assumption baked into most engineering cultures that governance slows you down — that security reviews, access controls, and budget oversight are the tax you pay for doing things responsibly. In production AI, that logic inverts. When teams have a unified control plane that handles security, cost accountability, and data privacy automatically, they ship faster — because they’re not afraid of breaking the bank or the law.

  • The “agentic frontier” requires a new command center:

With autonomous agents that can spend budgets and make decisions, the need for control becomes non-negotiable. Organizations need a single layer that not only observes what AI is doing but also governs what it can and can’t do in real-time.

  • The winners will be the companies that stop chasing the chaos:

The AI landscape changes weekly. New models, fluctuating prices, sudden deprecations — that’s the only constant. The companies that thrive won’t be the ones building custom fixes for every update. They’ll be the ones with a single, stable infrastructure layer that absorbs the chaos so their teams don’t have to.


[To share your insights with us, please write to psen@itechseries.com ]

Portkey is the production control plane for AI that never breaks: a unified platform that sits in the path of every model request and agent action to provide governance, observability, reliability, and cost control. As AI becomes critical infrastructure, Portkey gives engineering teams operational reliability while giving finance teams real-time visibility and accountability. Trusted by Fortune 500 enterprises across finance, pharma, and technology, Portkey manages more than $1 million in LLM spend and governs more than 1 trillion tokens every day. Backed by Elevation Capital and Lightspeed, Portkey is headquartered in San Francisco, CA.

Rohit Agarwal is the Founder and CEO of Portkey. A two-time founder and product leader, he started his first company straight out of school; it was later acquired by Freshworks. He went on to lead product and platform at Freshworks, helping build Freshdesk and several AI-driven products at global scale. Rohit later headed product at Pepper Content, where he built an AI content platform that reached over 500,000 users. The tooling challenges he encountered building AI applications at scale directly inspired the creation of Portkey in 2023. Portkey recently raised a $15 million Series A led by Elevation Capital with participation from Lightspeed.
