AiThority Interview Featuring: Pranav Nambiar, Senior Vice President of AI/ML and PaaS at DigitalOcean
Pranav Nambiar, Senior Vice President of AI/ML and PaaS at DigitalOcean chats about the current trends dominating the AI and Cloud tech market in this AiThority interview.
________
Take us through your journey in SaaS and at DigitalOcean?
I work as the Senior Vice President of AI/ML and PaaS at DigitalOcean, leading the strategy and execution of DigitalOcean’s portfolio of PaaS services (including Kubernetes, Databases, and Data and application platforms) and AI/ML initiatives (the Gradient AI stack). I recently completed my first year at DigitalOcean, and it has been an exciting journey so far. In the past year alone, we have launched over 150 features and strengthened DigitalOcean’s position as one of the largest cloud vendors, offering innovative experiences with 30%+ savings on the cost of ownership.
Prior to DigitalOcean, I served in leadership roles across Google Cloud (driving their Data Cloud and Generative AI platforms), AWS (leading their most strategic services, such as DynamoDB, Elasticsearch, and SageMaker, and growing them into some of the largest services in their portfolio), and Microsoft (driving multiple products across Windows, Office, Bing search, and mobile). Overall, I would classify my journey into four phases:
- 1) PC wave (Windows and Office),
- 2) Mobile and Social wave (Bing Search, Microsoft mobile apps),
- 3) Cloud wave (AWS, GCP, DO),
- 4) AI wave (AWS, GCP, and DO).
I think I have been fortunate to have had a rewarding journey so far, being part of all four critical waves our industry has experienced. As we navigate the latest AI wave, I am really excited to be part of DigitalOcean and am certainly looking forward to playing a key role in shaping our future with AI.
What are some of the current trends dominating the AI and Cloud market?
1. AI infrastructure efficiency race
Model vendors, cloud providers, and hardware manufacturers are in an arms race to cut costs and compute through techniques such as optimized kernels, quantization, sparse models, custom silicon, and disaggregated memory. These innovations are greatly reducing training and inference cost curves, widening access for mid-market adopters.
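To make one of these techniques concrete, here is a minimal, illustrative sketch of post-training int8 quantization, one of the cost-reduction methods mentioned above. It is not any particular vendor's implementation; it simply shows how representing weights in 8 bits instead of 32 cuts memory (and, on supporting hardware, compute cost) roughly 4x at the price of a small, bounded error.

```python
# Illustrative sketch: symmetric per-tensor int8 quantization.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a single scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from int8 + scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 1024)).astype(np.float32)
q, scale = quantize_int8(w)

print(f"fp32 size: {w.nbytes // 1024} KiB")  # 4096 KiB
print(f"int8 size: {q.nbytes // 1024} KiB")  # 1024 KiB, a 4x reduction
print(f"max abs error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```

Production systems typically use finer-grained (per-channel or per-block) scales and calibration data, but the memory arithmetic is the same.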
2. Shift towards open models
Open-weight and smaller task-tuned models are closing the quality gap with frontier LLMs while inference costs plunge 50-90%+. This is encouraging more companies to adopt open models, which gives them better control over costs, distribution, data workflows, and trust.
3. Agentic automation is becoming the new focus
We’re moving beyond chatbots to autonomous agents that reason, chain tools, and execute end-to-end processes. This unlocks a new class of AI-native SaaS that replaces or layers over ERP, CRM, and service-desk tools, shifting value from UI-driven software to workflow-driven agents. Protocols such as MCP and A2A will expedite this transition.
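The "reason, chain tools, execute" loop described above can be sketched in a few lines. In this toy example the tools and the rule-based planner are hypothetical stand-ins; in a real agent, an LLM would choose the next tool at each step based on the goal and the results so far.

```python
# Hedged sketch of an agent loop that chains tools toward a goal.
from typing import Callable, Optional

# Hypothetical tools a service-desk agent might call.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_order": lambda order_id: f"order {order_id}: status=delayed",
    "issue_refund": lambda order_id: f"refund issued for order {order_id}",
}

def plan_next_step(goal: str, history: list[str]) -> Optional[tuple[str, str]]:
    """Toy planner: a real agent would use an LLM to pick the next tool."""
    if not history:
        return ("lookup_order", "A-1001")
    if "delayed" in history[-1] and "refund" not in " ".join(history):
        return ("issue_refund", "A-1001")
    return None  # goal reached, stop the loop

def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    while (step := plan_next_step(goal, history)) is not None:
        tool, arg = step
        history.append(TOOLS[tool](arg))  # execute the chosen tool
    return history

print(run_agent("resolve complaint about order A-1001"))
# ['order A-1001: status=delayed', 'refund issued for order A-1001']
```

Protocols like MCP standardize how the tool registry and tool calls in a loop like this are exposed to the model, which is why they accelerate the transition the answer describes.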
4. AI-driven development is becoming the norm
Most developers are embracing AI-based tools to enhance their speed of development and drive efficiencies. This includes both new developers who want low-code/no-code experiences and seasoned developers who are more hands-on with code.
Can you share a few thoughts on how you feel these segments will evolve in the future?
1. Commoditization of AI infrastructure and models
AI infrastructure, including API-level model access, is going to become a low-margin utility. Companies are experimenting with various models to pick the one that best suits their cost and quality requirements.
2. Transition from infrastructure to platform and application layers of the stack
When the cloud came into being, the initial focus was on infrastructure. People looked at the cloud as a means to run their IT. With time, infrastructure became less of a differentiator, and companies shifted to platforms and SaaS applications to create durable differentiation. The same pattern is likely to play out with AI. The focus will soon shift toward agentic platforms and services, which will be even more crucial for companies to drive innovation.
3. Verticalization of foundational models
Generalized models are going to give way to vertical-specific models. Industry-specific SaaS leaders in sectors like fintech, healthcare, and martech will develop proprietary fine-tuned foundation models optimized for their domain’s unique workflows, compliance requirements, and data patterns. These models will surpass general-purpose models by embedding specialized reasoning, regulatory guardrails, and deep industry context directly into their architectures, unlocking higher accuracy, trust, and differentiated business value.
4. AI everywhere
Companies are transitioning away from a centralized model of AI innovation, where a central team drives AI execution, to a decentralized model where every team or product in the company looks to embed AI. Even internal workflows, such as HR processes, will start to adopt AI. Essentially, we will move toward a world where AI is everywhere.
5. Agentic Cloud
AI is going beyond just coding assistance to end-to-end application development and management. Today, developers are using AI to help build applications, but once they are deployed, there is limited AI support. We are moving to a world where the cloud as a whole will be reinvented with AI, where agents help with all phases of app development, right from design and implementation, to management and optimization.
Take us through some of the highlights of your Gradient AI platform?
DigitalOcean Gradient AI is an all-inclusive AI cloud that provides Infrastructure, Platform, and Agentic components to support companies as they realize the full potential of agents on the cloud. What makes Gradient AI stand out is that it is not only extremely cost-effective compared to alternatives, but also drastically simplifies the developer experience, enabling companies to create production-ready agents with ease.
On the infrastructure front, not only does Gradient AI offer a wide variety of GPUs (from both NVIDIA and AMD), but it also provides up to a 143% increase in throughput and a 40% reduction in time to first token compared to generic setups (based on our internal benchmarks). On top of that, Gradient AI infrastructure can save up to 75% in costs compared to other cloud providers.
On the platform front, Gradient AI offers a one-stop shop for the entire agent development lifecycle for customers. The platform offers models, tools, orchestration, guardrails, evaluations, observability, and easy deployments – all in one place. Some salient aspects to note are:
- The platform offers serverless inferencing for models with up to 70% better price-performance compared to options such as AWS Bedrock. Customers are not locked into any single model provider and can easily switch between models and try out various options for their agents.
- The managed knowledge base service is up to 16% more accurate and covers 14% more content types than AWS Bedrock.
- Built-in guardrails and evaluations make transitioning from pilot to production much faster. In our internal tests, we have seen up to a 30% increase in accuracy and a 50% reduction in time to production with our evaluations.
- To troubleshoot issues, an area where most developers get stuck, the platform not only enables developers to view traces but also provides recommendations on changes that need to be made to improve the agents.
- On the deployments front, the platform seamlessly integrates with Git and handles all aspects such as CI/CD pipelines, auto-scaling, and VPC with ease.
- With our upcoming Agent Development Kit, the platform will provide an easy code-first experience for developers to access all these features via code.
- We are also introducing subscription-based pricing that will provide not only price transparency but also up to 70% cost savings for building agents.
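The model-portability point above (no lock-in to a single model provider) can be sketched as a thin routing layer: application code calls one function, and the model is just a configuration value. The provider names and models below are hypothetical stand-ins, not actual Gradient AI identifiers.

```python
# Hedged sketch of provider-agnostic model routing. The registry entries
# are fake local functions standing in for real hosted model endpoints.
from typing import Callable

def fake_open_model(prompt: str) -> str:      # stand-in for an open-weight model
    return f"[open-8b] {prompt}"

def fake_frontier_model(prompt: str) -> str:  # stand-in for a frontier model
    return f"[frontier-xl] {prompt}"

REGISTRY: dict[str, Callable[[str], str]] = {
    "open/llama-8b": fake_open_model,
    "hosted/frontier-xl": fake_frontier_model,
}

def complete(model: str, prompt: str) -> str:
    """Application code stays the same; only the model string changes."""
    return REGISTRY[model](prompt)

# Switching models is a one-line config change, not a code change:
for model in ("open/llama-8b", "hosted/frontier-xl"):
    print(complete(model, "Summarize the ticket backlog"))
```

This is the design property that lets teams benchmark several models against their cost and quality requirements and swap the winner in without rewriting agent code.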
A few misconceptions around AI you’d like to bust before we wrap up?
- Piloting agents and bringing them to production are equally easy
Many developers start by piloting agents with simple use cases and examples. The ease of getting started makes them assume they can just as easily bring the agent to production, but that is far from the truth. Most developers struggle to transition from pilot to production because they underestimate the unpredictability of agents. To address this, one needs unified platforms to build agents quickly and to raise the bar on accuracy and reliability before taking them to production.
- The agent bubble may burst, so it is better to delay agent investments
Some developers are giving up on agents too easily, thinking they are just a bubble that will not last. That is far from the truth: agents are here to stay. If one really wants to differentiate, one needs to invest in agents and iterate continuously. If not, one risks being left behind by competitors who continue to invest in agents.
- AI’s business impact is far away
Generative AI is already reshaping every function, including code, design, marketing, support, and R&D. The question is not if but how fast organizations adapt their workflows. AI can deliver an all-around impact for businesses, right from internal processes and utilities to external products and services.
5 AI innovators (companies/people) you’d like to shout out to in this Q&A?
1. Google
Google has played a foundational role in paving the way for the Generative AI wave through its development of Transformer models, which revolutionized how machines understand and generate language.
2. NVIDIA and AMD
Our entire AI revolution is built on the processing power delivered by GPUs. NVIDIA created the backbone infrastructure for this with their GPUs, and was soon followed by AMD. Without these two companies building GPUs at scale, AI would not be at the level it is today.
3. OpenAI – ChatGPT
Generative AI was brought into the mainstream by OpenAI with ChatGPT, which became the fastest-growing consumer app in history, and played a key role in showcasing the art of the possible to the world. OpenAI models have since been powering several applications and are continuously evolving at a rapid pace.
4. Anthropic
Brought in a new focus towards AI safety and transparency, pioneering the AI Safety Levels (ASL) framework and the MCP protocol to make secure, interoperable AI integration widely accessible.
5. DeepSeek
Emerged as a disruptive force by driving accessibility and open innovation in large language models (LLMs). With their DeepSeek models, they not only democratized access to high-performance AI but also challenged several existing LLM paradigms, bringing the whole world’s attention to the potential of innovative open weight models.
Pranav Nambiar is Senior Vice President of AI/ML and PaaS at DigitalOcean.
DigitalOcean simplifies cloud computing so businesses can spend more time creating software that changes the world.