
Run:ai Certified to Run NVIDIA AI Enterprise Software Suite

Run:ai, the leader in compute orchestration for AI workloads, announced that its Atlas Platform is certified to run NVIDIA AI Enterprise, an end-to-end, cloud-native suite of AI and data analytics software that is optimized to enable any organization to use AI.

“The certification of Run:ai Atlas for NVIDIA AI Enterprise will help data scientists run their AI workloads most efficiently,” said Omri Geller, CEO and co-founder of Run:ai. “Our mission is to speed up AI and get more models into production, and NVIDIA has been working closely with us to help achieve that goal.”

With many companies now deploying advanced machine learning and running ever-bigger models on more hardware, demand for AI compute continues to grow. GPUs are indispensable for running AI applications, and companies are turning to software to get the most out of their AI infrastructure and bring models to market faster.


The Run:ai Atlas Platform uses a smart Kubernetes Scheduler and software-based Fractional GPU technology to give AI practitioners seamless access to multiple GPUs, multiple GPU nodes, or fractions of a single GPU. This lets teams match the amount of compute to the needs of each AI workload, so they can get more done on the same chips. With these capabilities, the Atlas Platform helps enterprises maximize the efficiency of their infrastructure, avoiding scenarios where GPUs sit idle or run at only a fraction of their capacity.
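For a concrete picture of what requesting “a fraction of a single GPU” can look like in a Kubernetes environment, here is a minimal, hypothetical sketch using the official Kubernetes Python client. The annotation key gpu-fraction, the scheduler name runai-scheduler, and the container image are illustrative assumptions, not confirmed details of the Run:ai platform.

```python
# Hypothetical sketch: submit a pod that asks for half of one GPU via an
# annotation. The "gpu-fraction" key and "runai-scheduler" name are assumed
# for illustration; they are not confirmed Run:ai API details.
from kubernetes import client, config


def submit_fractional_gpu_pod(namespace: str = "default") -> None:
    config.load_kube_config()  # authenticate with the local kubeconfig

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(
            name="train-job",
            annotations={"gpu-fraction": "0.5"},  # assumed: request half a GPU
        ),
        spec=client.V1PodSpec(
            scheduler_name="runai-scheduler",  # assumed custom scheduler name
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="trainer",
                    image="nvcr.io/nvidia/pytorch:23.05-py3",  # example image
                    command=["python", "train.py"],
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)


if __name__ == "__main__":
    submit_fractional_gpu_pod()
```

In a setup like this, the platform’s scheduler, rather than the default Kubernetes scheduler, would be responsible for packing several such half-GPU pods onto the same physical device.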


“Enterprises across industries are turning to AI to power the breakthroughs that will help improve customer service, boost sales and optimize operations,” said Justin Boitano, vice president of enterprise and edge computing at NVIDIA. “Run:ai’s certification for NVIDIA AI Enterprise provides customers with an integrated, cloud-native platform for deploying AI workflows with MLOps management capabilities.”


Run:ai carves fractional GPUs out of a physical GPU’s available framebuffer memory and compute capacity and exposes them as virtual GPUs. Containers can each be assigned one of these fractional GPUs, so different workloads run in parallel on the same physical GPU. Run:ai runs on VMware vSphere and bare-metal servers, and supports various distributions of Kubernetes.
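As a rough illustration of the memory-partitioning idea, the sketch below caps a single process to half of a GPU’s framebuffer memory using PyTorch’s per-process memory fraction. This is not Run:ai’s mechanism, which enforces the split at the platform level, but it shows how two workloads bounded this way could coexist on one device.

```python
# Illustrative only: bound one process to a slice of a GPU's framebuffer
# memory, the general idea behind fractional GPUs. This uses PyTorch's
# per-process cap; Run:ai enforces the fraction at the platform level.
import torch


def cap_gpu_memory(fraction: float = 0.5, device: int = 0) -> None:
    assert torch.cuda.is_available(), "requires an NVIDIA GPU with CUDA"
    # Limit this process's CUDA caching allocator to `fraction` of the device.
    torch.cuda.set_per_process_memory_fraction(fraction, device=device)
    total_bytes = torch.cuda.get_device_properties(device).total_memory
    print(f"Capped to ~{fraction * total_bytes / 2**30:.1f} GiB "
          f"of GPU {device}'s framebuffer memory")


if __name__ == "__main__":
    cap_gpu_memory()
```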

This certification is the latest in a series of Run:ai collaborations with NVIDIA. In March, Run:ai completed a proof of concept that enabled multi-cloud GPU flexibility for companies using NVIDIA GPUs in the cloud; the company then fully integrated NVIDIA Triton Inference Server. And in June, Run:ai worked with Weights & Biases and NVIDIA to provide access to NVIDIA-accelerated computing resources orchestrated by Run:ai’s Atlas Platform.


