Domino Data Lab Extends Enterprise MLOps to the Edge with New NVIDIA Fleet Command Support

Domino Data Lab, provider of the leading Enterprise MLOps platform trusted by over 20% of the Fortune 100, announced new integrations with NVIDIA that extend fast and flexible deployment of GPU-accelerated machine learning models across modern tech stacks – from data centers to dash cams.

Domino is the first MLOps platform integrated with NVIDIA Fleet Command, enabling seamless deployment of models across edge devices; the integration follows Domino’s recent qualification for the NVIDIA AI Enterprise software suite. New curated MLOps trial availability through NVIDIA LaunchPad fast-tracks AI projects from prototype to production, while new support for on-demand Message Passing Interface (MPI) clusters and NVIDIA NGC streamlines access to GPU-accelerated tooling and infrastructure, furthering Domino’s market-leading openness.

“Streamlined deployment and management of GPU-accelerated models bring a true competitive advantage,” said Thomas Robinson, VP of Strategic Partnerships & Corporate Development at Domino. “We led the charge as the first Enterprise MLOps platform to integrate with NVIDIA AI Enterprise, NVIDIA Fleet Command, and NVIDIA LaunchPad. We are excited to help more customers develop innovative use cases to solve the world’s most important challenges.”

Edge Device Support Streamlines Model Deployment across Modern Tech Stacks through MLOps

Domino’s new support for the Fleet Command cloud service for edge AI management further reduces infrastructure friction and extends key enterprise MLOps benefits — collaboration, reproducibility, and model lifecycle management — to NVIDIA-Certified Systems in retail stores, warehouses, hospitals, and city street intersections.

Available now, this integration relieves data scientists of IT and DevOps burdens as they build, deploy, manage, and monitor GPU-accelerated models at the edge. Data scientists can quickly iterate on models using Domino’s Enterprise MLOps Platform, then use Fleet Command, a turnkey solution for orchestrating the edge AI lifecycle, to streamline deployments, manage over-the-air updates, and monitor models with a minimal infrastructure footprint.
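
By way of illustration, the sketch below shows the kind of lightweight, containerized inference service a data scientist might package for Fleet Command to roll out to an edge device. It is a hedged, minimal example using generic Flask and PyTorch calls; the model path, routes, and port are hypothetical and not part of Domino’s or NVIDIA’s APIs.

```python
# Hypothetical sketch: a containerized inference endpoint of the kind an edge
# orchestrator such as Fleet Command could deploy to an NVIDIA-Certified system.
# Model path and routes are illustrative, not Domino or Fleet Command APIs.
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load a TorchScript model exported from a Domino workspace (path is hypothetical).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.jit.load("/models/detector.pt", map_location=device).eval()

@app.route("/healthz")
def healthz():
    # Lightweight liveness probe for the edge orchestrator.
    return jsonify(status="ok", device=device)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON payload with a batch of input feature vectors.
    features = torch.tensor(request.json["inputs"], dtype=torch.float32, device=device)
    with torch.no_grad():
        scores = model(features)
    return jsonify(scores=scores.cpu().tolist())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```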

Accelerated Proofs of Concept with the First MLOps Platform on NVIDIA LaunchPad

Further deepening Domino’s collaboration with NVIDIA to accelerate model-driven business, the company’s Enterprise MLOps platform is also now the first available through the NVIDIA LaunchPad program. LaunchPad gives enterprises immediate, short-term access to NVIDIA AI Enterprise on VMware vSphere with Tanzu, running on private accelerated compute infrastructure, along with curated labs.

Teams can use LaunchPad to quickly test AI initiatives on the complete stack underpinning joint Domino and NVIDIA AI solutions, and can get hands-on experience from a lab that demonstrates how to scale data science workloads with Domino’s Enterprise MLOps platform. This experience instantly delivers MLOps benefits — collaboration and reproducibility — optimized and pre-configured for purpose-built AI infrastructure. With proofs of concept in LaunchPad validated by Domino and NVIDIA, teams gain the confidence to deploy at production scale on the same complete stack they can purchase.

“Enterprise AI requires fast iteration with seamless, flexible model deployment to deliver results that make an impact for businesses,” said Manuvir Das, VP of Enterprise Computing at NVIDIA. “NVIDIA’s collaboration with Domino helps customers accelerate time-to-value for their AI investments, with deployment options for data scientists and developers across every stage of their AI journey.”

Support for On-Demand MPI Clusters and NVIDIA NGC Streamlines MLOps with GPU-Optimized Software

Additional new integrations bring the Enterprise MLOps benefits of interactive workspaces, collaboration, reproducibility, and democratized GPU access to NVIDIA’s expanding portfolio of GPU-optimized solutions.

New support for on-demand MPI clusters allows data scientists to use NVIDIA DGX nodes in the same Kubernetes cluster as Domino. Available today for Domino environments and NGC images, this new integration eliminates time wasted by data scientists on administrative DevOps tasks so they can start innovating on deep learning models.
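
To make the workflow concrete, the following is a minimal, hedged sketch of a data-parallel training step that such an on-demand MPI cluster could run. It uses only generic mpi4py and PyTorch calls (nothing here is a Domino-specific API), assumes a launch along the lines of mpirun -np 8 python train.py, and pins each MPI rank to one GPU on a DGX node before averaging gradients across ranks.

```python
# Hypothetical sketch of one data-parallel training step on an on-demand MPI cluster.
# Plain mpi4py + PyTorch; no Domino-specific APIs are used.
import torch
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, world_size = comm.Get_rank(), comm.Get_size()

# Pin each MPI rank to one GPU on the node (DGX systems expose multiple GPUs).
torch.cuda.set_device(rank % torch.cuda.device_count())
device = torch.device("cuda")

model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One illustrative step on synthetic data local to this rank.
inputs = torch.randn(64, 128, device=device)
labels = torch.randint(0, 10, (64,), device=device)
loss = torch.nn.functional.cross_entropy(model(inputs), labels)
loss.backward()

# Average gradients across ranks so every worker applies the same update.
for param in model.parameters():
    summed = comm.allreduce(param.grad.cpu().numpy(), op=MPI.SUM)
    param.grad.copy_(torch.from_numpy(summed / world_size).to(device))

optimizer.step()
if rank == 0:
    print(f"step complete across {world_size} ranks, loss={loss.item():.4f}")
```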

Domino also now natively supports NVIDIA’s NGC catalog and the NVIDIA AI platform. The NGC catalog is a hub of GPU-optimized AI software, including AI frameworks (such as PyTorch and TensorFlow), industry-specific SDKs, and pre-trained models, that simplifies and accelerates end-to-end workflows. Data science teams can now run NGC containers in Domino while maintaining two-way code interoperability with raw NGC containers. Domino will continue to expand support for the NVIDIA AI platform through the new NVIDIA AI Accelerated program.
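
As a point of reference, the short snippet below sketches the sort of code a team might run unchanged either in an NGC PyTorch container launched from a Domino workspace or in the raw NGC container, illustrating the two-way interoperability described above. It relies only on standard PyTorch and torchvision calls; the choice of pre-trained model is purely illustrative.

```python
# Hypothetical sketch: code that runs identically in an NGC PyTorch container
# pulled into Domino or run standalone. The pre-trained model is just an example.
import torch
from torchvision import models

# Confirm the container sees the GPU-accelerated stack.
print("CUDA available:", torch.cuda.is_available())
print("PyTorch:", torch.__version__)

# Load a pre-trained model and run one batch of dummy inference.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
batch = torch.randn(4, 3, 224, 224)
if torch.cuda.is_available():
    model, batch = model.cuda(), batch.cuda()

with torch.no_grad():
    logits = model(batch)
print("Output shape:", tuple(logits.shape))
```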
