AiThority Interview With Dmitry Zakharchenko, Chief Software Officer at Blaize
Dmitry Zakharchenko, Chief Software Officer at Blaize, chats more about the growing use and benefits of Edge AI in this AiThority.com interview:
_________
Take us through your SaaS/AI journey and role at Blaize
My background is in executing cloud, mobile, and AI capabilities, as well as inventing first-to-market products and features. I have extensive experience connecting advanced technology to revenue growth by continuously innovating and rapidly transforming prototypes into customer-ready solutions.
Prior to joining Blaize, I worked at Intel, managing specialized data science and product engineering teams in cognitive computing, developing AI products for financial, manufacturing, health, and defense customers. Before Intel, I held a range of product management positions at Motorola Mobility, delivering differentiated mobility offerings and desktop software solutions.
I joined Blaize in 2018 as VP, Head of Product Management and R&D AI Solutions, and in 2024 became the company’s Chief Software Officer. I was drawn to Blaize’s innovative technology and its application to real-world problems, and we are fulfilling that mission today by leveraging AI at the edge to support smart cities, defense, healthcare, and more.
We’d love to hear more about Blaize’s latest AI enhancements
In August, Blaize launched its AI Platform, a purpose-built, edge-native solution engineered to deliver multimodal intelligence efficiently and flexibly for mission-critical workloads across a wide range of industries. With a focus on programmability and ease of deployment, the platform is designed to accelerate AI implementation without the usual complexity.
The platform replaces passive data collection with proactive intelligence at scale. Key features include:
- Multi-modal sensor fusion: Consolidates video, audio, telemetry, and more into unified insights.
- Simultaneous model execution: Processes multiple models concurrently without batching or bottlenecks (see the sketch after this list).
- Comprehensive support for compact and scalable systems: Enables hybrid AI deployment across diverse environments and workloads, from rugged edge devices to workstations and rack-mount servers.
- Collaborative software stack: Pre-integrated with domain-specific AI pipelines.
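To make the simultaneous-execution point a little more concrete, here is a minimal, hypothetical Python sketch of running several independent models concurrently on one device without batching their inputs together. The model functions, names, and latencies are illustrative placeholders and are not drawn from Blaize's actual software stack.

```python
# Generic illustration of concurrent, per-model execution at the edge.
# Dummy "models" stand in for real inference engines; all names and
# latencies are hypothetical, not Blaize-specific.
import concurrent.futures
import time

def detector(frame):            # placeholder vision model
    time.sleep(0.02)            # simulate per-frame inference latency
    return {"model": "detector", "objects": ["car", "pedestrian"]}

def audio_classifier(clip):     # placeholder audio model
    time.sleep(0.01)
    return {"model": "audio", "event": "horn"}

def anomaly_model(telemetry):   # placeholder telemetry model
    time.sleep(0.005)
    return {"model": "telemetry", "anomaly": False}

def process_sample(frame, clip, telemetry):
    # Each model gets its own worker, so a slower model never blocks the
    # others and no cross-stream batching is required.
    with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
        futures = [
            pool.submit(detector, frame),
            pool.submit(audio_classifier, clip),
            pool.submit(anomaly_model, telemetry),
        ]
        return [f.result() for f in futures]

if __name__ == "__main__":
    print(process_sample(frame=b"...", clip=b"...", telemetry={"speed": 42}))
```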
What do most organizations get wrong when they try to scale AI deployments and impact across their teams and business units?
Some organizations focus heavily on large general-purpose AI models like ChatGPT while overlooking the value of specialized AI. Although these larger models are malleable, they come with a “learning gap” that can prevent teams from unlocking their full potential. Teams should recognize those limitations and acknowledge when they need support; doing so saves time and valuable resources.
This is where specialized models can deliver tremendous business value. By identifying how and where purpose-built models can be deployed within real-world applications, teams can expect these models to deliver relevant outputs, resulting in greater ROI.
Can you share more about hybrid AI infrastructure and why the need for it is greater today?
As data becomes more complex and multi-modal, scalable, hybrid AI infrastructure has never been more necessary. Hybrid AI combines data-driven machine learning with rules-based, symbolic reasoning, enabling AI systems that not only think but also interact safely and intelligently with the physical world in real time. This is the foundation of “physical AI,” which is poised to transform industries. These specialized applications are the most effective way to extract value from AI, promising material improvements to the physical world.
Smart cities are a perfect example. Everyone—from drivers to pedestrians to law enforcement—wants safer roads, but when risky behaviors go unseen and emergency response depends on fast-acting bystanders, safety is an inexact science. Physical AI-enabled road cameras can change that, pre-empting accidents and traffic violations by identifying infractions, collisions, and other hazardous conditions and triggering the appropriate response.
This is just one use case; untapped potential also exists in areas such as healthcare and retail. Hybrid AI is transforming decision-making where it matters most, powering faster, smarter responses that enhance safety, boost efficiency, and improve everyday experiences for customers, patients, commuters, and beyond.
What should businesses keep in mind as they invest in or evaluate ready-to-use AI solutions at the edge?
As businesses consider AI solutions, it’s important to understand the differences between edge and hybrid AI. Edge AI runs AI models directly on devices such as cellphones or cameras, enabling them to instantly process data without relying on a central cloud or data center. Hybrid AI, on the other hand, combines different approaches—such as machine learning with rules-based reasoning, or edge processing with cloud computing—to deliver more flexible and powerful real-world applications.
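To illustrate the edge-plus-cloud combination described above, the hypothetical sketch below keeps latency-sensitive inference on the device and escalates only low-confidence cases to a larger cloud model. The function names and the 0.8 threshold are assumptions made for illustration, not a specific vendor API.

```python
# Hedged sketch of a hybrid edge/cloud decision path: run a small model
# on-device for immediate answers, escalate low-confidence cases to the cloud.
# All functions and the 0.8 threshold are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

def run_edge_model(sensor_data: bytes) -> Prediction:
    # Stand-in for an on-device model; returns a label with a confidence score.
    return Prediction(label="pedestrian", confidence=0.62)

def run_cloud_model(sensor_data: bytes) -> Prediction:
    # Stand-in for a larger cloud-hosted model used only when needed.
    return Prediction(label="pedestrian", confidence=0.97)

def classify(sensor_data: bytes, escalation_threshold: float = 0.8) -> Prediction:
    local = run_edge_model(sensor_data)
    if local.confidence >= escalation_threshold:
        return local                      # fast path: the answer stays at the edge
    return run_cloud_model(sensor_data)   # slow path: spend bandwidth for accuracy

if __name__ == "__main__":
    print(classify(b"raw-camera-frame"))
```

The design choice to put the threshold at the edge is what keeps routine decisions fast and local while reserving cloud round trips for the cases that genuinely need them.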
Edge AI transforms decision-making where it matters most by enabling faster and smarter responses that enhance safety, boost efficiency, and improve everyday experiences. With ready-to-use AI solutions at the edge, businesses can unlock these benefits, achieve impactful results, and maintain a competitive edge in their industries.
Five thoughts on the current state of AI and infrastructure you’d leave us with before we wrap up?
- As data becomes more complex and multi-modal, the need for scalable, hybrid AI infrastructure grows increasingly critical.
- Many industries are missing out on AI’s full potential. However, Edge AI is transforming decision-making where it matters most, enabling faster and smarter responses that enhance safety, boost efficiency, and improve everyday experiences.
- AI models are becoming smaller, faster, and smarter—perfectly suited for the edge. As the demand and necessity for immediate feedback grows, more organizations will begin to recognize the value of edge AI.
- AI is entering its next evolution—just as computing moved from mainframes to personal devices, AI is now transitioning from the cloud to the edge.
- Traditionally, AI has relied on cloud data centers to process information; data from devices like cameras or sensors was sent to large, centralized data centers for processing. However, this approach often introduces latency and high bandwidth consumption, making it unsuitable for immediate, real-time decisions. Individuals need AI where they operate, not where cloud providers dictate.
Blaize is a leading provider of a purpose-built, full-stack hardware architecture and no-code software platform that reduces dependency on data scientists.
Dmitry Zakharchenko is Chief Software Officer at Blaize, with a proven record of executing cloud, mobile, and AI capabilities and inventing first-to-market products and features. Dmitry’s expertise is in connecting advanced technology to revenue by delivering a predictable cadence of releases and a clear roadmap that ensures a smooth flow of product from rapid prototypes to final commercialization. Prior to joining Blaize, Dmitry was at Intel, where he managed specialized data science and product engineering teams in the field of cognitive computing to deliver AI products for financial, manufacturing, health, and defense customers. Before Intel, he held a range of product management positions at Motorola Mobility, delivering differentiated mobility offerings and desktop software solutions.