Run:ai Completes Proof of Concept with NVIDIA to Maximize GPU Workload Flexibility on Any Cloud
Run:ai deployed on NVIDIA VMIs enables multi-cloud scaling as well as ‘lift & shift’ cloud deployments
Run:ai, the company simplifying AI infrastructure orchestration and management, announced details of a completed proof of concept (POC) that enables multi-cloud GPU flexibility for companies using NVIDIA GPUs in the cloud. NVIDIA’s software suite includes virtual machine images (VMIs) optimized for NVIDIA GPUs running in clouds such as Amazon Web Services, Microsoft Azure, Google Cloud, and Oracle Cloud. Run:ai software deployed on NVIDIA VMIs enables cloud customers to move AI workloads from one cloud to another, and to use multiple clouds simultaneously for different AI workloads, with zero code changes.
Run:ai’s workload-aware orchestration ensures that every type of AI workload gets the right amount of compute resources when needed, and provides deep integration into NVIDIA GPUs to achieve optimal utilization of these resources. Run:ai’s Kubernetes-based Atlas platform and NVIDIA VMIs were used together in the POC to support ‘lift & shift’ as well as multi-node scaling in the cloud. NVIDIA customers and partners can de-risk their AI cloud deployments with a streamlined and portable solution for cloud AI infrastructure from Run:ai. Customers looking to cost-optimize their cloud computing resources can choose among supported cloud providers for the best-fit configuration. They can also manage AI workloads on multiple clouds with a single control plane.
NVIDIA VMIs are available on each of the major public cloud providers. NVIDIA publishes these with regular updates to both OS and drivers. The VMIs are optimized for performance on the latest generations of NVIDIA GPUs and allow for easy and fast deployment of GPU-accelerated instances on the public cloud.
“By combining accelerated computing power from NVIDIA with Run:ai’s Atlas platform, organizations have a stellar AI foundation that enables them to successfully deliver on their AI initiatives,” said Omri Geller, CEO and co-founder of Run:ai. “We appreciate the close relationship we have with the NVIDIA cloud team and their commitment to support NVIDIA accelerated computing customers everywhere.”
“From innovative startups to world-leading enterprises, NVIDIA-accelerated cloud computing provides customers with flexible options for powering their most demanding workloads,” said Paresh Kharya, senior director, Accelerated Computing at NVIDIA. “Paired with NVIDIA-accelerated instances from leading cloud service providers, the Run:ai Atlas platform helps customers maximize the efficiency and value of AI workload operations.”
The Run:ai Atlas platform brings simplicity to GPU management by providing researchers with on-demand access to pooled resources for any AI workload. It also offers built-in integration with NVIDIA Triton Inference Server, NVIDIA’s open-source inference serving software that lets teams deploy trained AI models from any framework on GPU or CPU infrastructure.