
Latest Mirantis OpenStack for Kubernetes Powers AI-Ready Private and Sovereign Clouds at Scale

Smarter networking, air-gapped support, and bare-metal enhancements help enterprises meet the growing demands of AI and secure infrastructure

Mirantis, the Kubernetes-native AI infrastructure company enabling enterprises to build and operate scalable, secure, and sovereign AI infrastructure across any environment, announced the availability of Mirantis OpenStack for Kubernetes (MOSK) 25.2, which simplifies cloud operations and strengthens support for GPU-intensive AI workloads as well as traditional enterprise applications.

As AI adoption accelerates, organizations face growing pressure to scale infrastructure that can handle high-throughput training workloads, ensure data control, and streamline orchestration. According to Deloitte, organizations are rapidly evolving their infrastructure to meet the scale and performance requirements of AI, including high-throughput training and data locality. These demands are prompting enterprises to reevaluate how they manage compute, networking, and storage across hybrid, sovereign, and private clouds. MOSK 25.2 addresses these emerging needs with support for disconnected operations, simplified scale-out networking, and updates tailored for GPU-intensive and hybrid deployments.

“AI workloads mean big changes to general-purpose compute infrastructure,” said Artem Andreev, Senior Engineering Manager, Mirantis. “With the latest MOSK, organizations can scale GPU-powered workloads, along with the ability to support secure, disconnected operations that don’t sacrifice openness or flexibility.”

MOSK 25.2 makes it possible to run OpenStack clouds entirely offline where Internet access is prohibited. This supports sectors such as finance, government, and defense, where every artifact must be scanned and approved before entering the datacenter. Disconnected operations enable organizations to align with upstream innovation over time while preserving complete control of sensitive data—a key requirement for AI model training and sovereignty.
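To make the "every artifact must be scanned and approved" idea concrete, here is a minimal, hypothetical sketch of the kind of check an air-gapped site might run: verifying that every container image reference resolves to an internal mirror rather than a public registry. The registry host name and image list are illustrative assumptions, not part of MOSK's actual tooling.

```python
# Hypothetical sketch: in a disconnected deployment, every container image
# should come from an approved internal mirror, never the public Internet.
# INTERNAL_MIRROR and the sample image references are illustrative only.
from urllib.parse import urlparse

INTERNAL_MIRROR = "registry.airgap.example.local"  # assumed internal registry host

def uses_internal_mirror(image_ref: str) -> bool:
    """Return True if the image reference is served from the internal mirror."""
    # Image refs look like host[:port]/path:tag; add "//" so urlparse
    # treats the leading component as a network location.
    host = urlparse("//" + image_ref).hostname
    return host == INTERNAL_MIRROR

def audit_images(image_refs):
    """Return the image references that would require Internet access."""
    return [ref for ref in image_refs if not uses_internal_mirror(ref)]

images = [
    "registry.airgap.example.local/openstack/nova:25.2",
    "docker.io/library/nginx:latest",  # would be flagged in an air-gapped site
]
print(audit_images(images))  # → ['docker.io/library/nginx:latest']
```

In practice, a disconnected installation would pair a check like this with a locally mirrored artifact repository, so upgrades can be staged, scanned, and approved before anything enters the datacenter.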

The release also delivers significant advances in networking, core platform components, observability, and user features, including:

  • OpenStack 2025.1 “Epoxy” is supported for new deployments and upgrades from 2024.1 “Caracal”.
  • Open Virtual Network (OVN) 24.03 delivers performance improvements and the latest security patches, plus a clear, validated path to move off Open vSwitch (OvS)—the long-used networking backend in OpenStack—toward a more modern and scalable model. As an alternative, OpenSDN 24.1 is available with a modernized codebase and expanded IPv6 capabilities.
  • Scale-out networking & proactive network health — full L3 networking on bare metal to scale across racks without VLAN stretch, plus network infrastructure monitoring with connectivity checks and alerts to catch switch/routing issues early.
  • Hybrid AI infrastructure (VMs + Bare Metal) — features that make AI clouds easier to run: the ability to recover bare-metal GPU servers even if networking breaks, and to connect them to the right project networks alongside VMs for high-performance training.
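The scale-out networking item above describes routing a dedicated subnet to each rack instead of stretching one VLAN across the fabric. The following hypothetical sketch (names and subnet sizes are illustrative, not MOSK configuration) shows the addressing side of that design: carving one routed subnet per rack out of a single supernet.

```python
# Hypothetical sketch of per-rack L3 addressing: each rack gets its own
# routed subnet carved from one supernet, so racks interconnect via routing
# rather than a VLAN stretched across the fabric. Sizes are illustrative.
import ipaddress

def rack_subnets(supernet: str, racks: int, prefixlen: int = 24):
    """Carve one routed /prefixlen subnet per rack out of the supernet."""
    net = ipaddress.ip_network(supernet)
    subnets = list(net.subnets(new_prefix=prefixlen))
    if racks > len(subnets):
        raise ValueError("supernet too small for the requested rack count")
    # Each rack advertises its subnet to the fabric; no VLAN spans racks.
    return {f"rack-{i + 1}": str(subnets[i]) for i in range(racks)}

print(rack_subnets("10.128.0.0/16", racks=3))
# → {'rack-1': '10.128.0.0/24', 'rack-2': '10.128.1.0/24', 'rack-3': '10.128.2.0/24'}
```

Because each rack's subnet is routed, adding a rack means advertising one more prefix rather than extending an L2 domain, which is what makes this model scale across rows of a datacenter.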

MOSK continues to enable enterprises to run on-premises private clouds for both cloud-native and traditional workloads with reliability, automation, and complete control over application data. The platform manages the full lifecycle of infrastructure—from bare-metal provisioning to software configuration—while providing centralized logging, monitoring, and alerting.
