Mirantis Brings Enterprise-Grade Controls to AI Infrastructure
k0rdent adds capabilities enabling enterprises and GPU cloud operators to govern, scale, and monetize sovereign AI services
Mirantis, a provider of Kubernetes-native infrastructure for AI, announced additional capabilities for k0rdent AI, further expanding the platform beyond infrastructure management to help enterprises, neoclouds, and GPU cloud operators monetize their AI infrastructure investments.
The new k0rdent AI Model Registry and k0rdent AI Inference Mesh enable organizations to securely host, govern, route, and meter AI models and inference services across federated computing resources. Together, the two new products help organizations transform raw GPU infrastructure into governed, revenue-generating AI platforms.
Mirantis also introduced k0rdent AI Inference Runtime, designed to maximize tokens per GPU-second for improved infrastructure efficiency and utilization.
“As organizations move AI projects from experimentation into production, infrastructure teams are increasingly confronting operational and governance challenges around model distribution, inference visibility, compliance enforcement, and GPU economics,” said Kevin Kamel, vice president of product development at Mirantis. “Enterprises and GPU operators have largely been forced to stitch together fragile workflows and disconnected tools to operationalize AI. Models cannot be treated the same as containers because they have their own governance, sovereignty, compliance, and lifecycle requirements. The capabilities we’re providing today are validated and benchmarked for users.”
k0rdent AI Model Registry
k0rdent AI Model Registry is optimized for AI model storage and distribution workflows. It provides a secure, OCI-native registry for managing large language models (LLMs), fine-tuned variants, quantized builds, and related AI artifacts across distributed infrastructure.
The registry reduces the operational complexity often associated with secure AI model distribution.
k0rdent AI Inference Mesh
k0rdent AI Inference Mesh routes, meters, audits, and enforces policy on every inference request across models, regions, clusters, and providers. It provides a full view of where AI requests are going, what they cost, and where compliance gaps exist.
The new products build on Mirantis’ k0rdent AI platform, which focuses on Kubernetes-native AI infrastructure spanning bare metal, virtual machines, managed Kubernetes, and sovereign clouds.