Run:AI Achieves Red Hat Certification For OpenShift
Users of Red Hat OpenShift can now more easily install Run:AI’s Compute Management Platform by using a Red Hat Certified Operator built for their Kubernetes clusters, improving the speed and efficiency of AI development
Run:AI, creator of the first cloud-native compute orchestration platform for AI, announced that its Red Hat OpenShift Operator has completed certification. The certification enables companies using Graphics Processing Units (GPUs) on OpenShift-managed Kubernetes clusters to install Run:AI’s software more quickly and simply. The Operator is available from the Red Hat Container Catalog.
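On OpenShift, certified Operators are typically installed through the Operator Lifecycle Manager (OLM), either from the OperatorHub console or by applying a Subscription object. The sketch below shows the Subscription route using the official Python kubernetes client; the package name, channel, and target namespace are illustrative assumptions, not values taken from the Red Hat Container Catalog listing.

```python
# A minimal sketch of subscribing to a certified Operator through the
# Operator Lifecycle Manager (OLM), using the official Python `kubernetes`
# client. The package name "runai-operator", the channel, and the target
# namespace are assumptions for illustration only.
from kubernetes import client, config

config.load_kube_config()  # reuse local kubeconfig / `oc login` credentials

subscription = {
    "apiVersion": "operators.coreos.com/v1alpha1",
    "kind": "Subscription",
    "metadata": {"name": "runai-operator", "namespace": "openshift-operators"},
    "spec": {
        "name": "runai-operator",         # hypothetical package name
        "channel": "stable",              # hypothetical channel
        "source": "certified-operators",  # Red Hat certified catalog source
        "sourceNamespace": "openshift-marketplace",
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="operators.coreos.com",
    version="v1alpha1",
    namespace="openshift-operators",
    plural="subscriptions",
    body=subscription,
)
```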
Using Run:AI’s OpenShift Operator, companies can more easily install Run:AI and pool GPU resources so that compute-intensive Deep Learning workloads can access the processing power they need. This ultimately leads to faster experimentation, more accurate models, and increased business value from AI. Data science teams build and train AI models and then use them for inferencing; Run:AI ensures researchers can access GPUs dynamically, based on the compute resource needs of each job.
Run:AI’s platform includes a batch scheduler for Kubernetes, now seamlessly integrated with OpenShift, which brings advanced queuing, quotas, priority management, policy creation, automatic pause/resume, multi-node training, and fractional GPU capabilities (more than one job can share the same GPU) to Red Hat OpenShift. The platform maximizes resource utilization, reduces wasteful GPU idle time, and ultimately brings AI initiatives to market faster. A sketch of what submitting a workload to such a scheduler can look like follows below.
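The example below creates a training pod that targets a custom batch scheduler and requests a fraction of a GPU via an annotation, again using the Python kubernetes client. The scheduler name, annotation key, namespace, and container image are assumptions made for this sketch, not details confirmed by the article.

```python
# Illustrative only: submitting a training pod to a shared GPU pool through
# a custom Kubernetes scheduler. The scheduler name "runai-scheduler" and
# the "gpu-fraction" annotation are assumptions for this sketch.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="train-resnet",
        annotations={"gpu-fraction": "0.5"},  # assumed fractional-GPU request
    ),
    spec=client.V1PodSpec(
        scheduler_name="runai-scheduler",     # assumed batch scheduler name
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:23.10-py3",  # example image
                command=["python", "train.py"],
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="team-a", body=pod)
```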
A container-based approach helps applications run and behave the same way regardless of the underlying infrastructure. This gives companies the flexibility to run their workloads on-premises or in any public or private cloud, with improved portability and confidence that their applications and data are running in a more efficient, cost-effective way.
Because of their intense computational and data throughput needs, AI workloads are often run in on-premises Kubernetes clusters, making OpenShift one of the best platform choices to manage a private cloud for an organization’s entire Data Science function. As an OpenShift Operator, Run:AI is officially tested, verified, and supported as enterprise-grade Red Hat certified software.
“We’re already working with customers who use Red Hat OpenShift as the platform to manage their Kubernetes clusters for Deep Learning,” said Omri Geller, Run:AI’s CEO and co-founder. “With our OpenShift Operator certification, it’s never been easier to get started with Run:AI and pool GPU resources across an organization to ensure that expensive resources are fully maximized. Efficient use of GPU resources means faster training, faster development and ultimately, more accurate AI.”
“We’re pleased to have Run:AI as a Red Hat OpenShift Certified Operator available via the Red Hat container catalog,” said Mike Werner, senior director, Technology Partnerships, Red Hat. “Certification indicates that partner Operators work as intended on Red Hat OpenShift deployments, and we’re pleased to further customer choice when it comes to powering AI deployments on Red Hat OpenShift with the addition of Run:AI.”