cnvrg.io Collaborates With Red Hat to Deliver an Accelerated Production ML Workflow With MLOps
cnvrg.io, the data science platform simplifying model management and introducing advanced MLOps to the industry, today announced a collaboration with Red Hat on Red Hat OpenShift to accelerate ML workflows and provide data scientists and DevOps teams with everything they need out of the box. cnvrg.io is now part of the Red Hat OperatorHub as a certified platform, delivering AI lifecycle management and simplified MLOps to enterprise DevOps and data science teams across industries. This announcement follows cnvrg.io’s integration with NVIDIA NGC’s registry of GPU-optimized AI software, providing IT teams, data scientists and engineers with a complete MLOps and model management solution.
Today’s enterprise ML development is fragmented and broken. Between the many tools, scripts, plug-ins and disconnected stacks, ML developers and data scientists spend over 65% of their time on DevOps, managing infrastructure resource requests and hybrid cloud compute. This manual, labor-intensive work pulls them away from what they were hired to do – deliver high-impact ML models. Organizations are increasingly operationalizing containers and Kubernetes to accelerate ML, giving data scientists and developers the agility, flexibility and scalability they need to manage their ML workflow from research to production. Together, cnvrg.io and Red Hat OpenShift empower data scientists, IT and DevOps teams to better manage infrastructure in the hybrid cloud and accelerate the ML workflow in one automated, unified platform.
The OpenShift and cnvrg.io optimized solution provides one command center for all ML/AI infrastructure, from research to deployment. OpenShift serves as the control plane for the infrastructure, while cnvrg.io provides the command center for all machine learning assets, model management and production ML. OpenShift helps provide agility, flexibility, portability and scalability across the hybrid cloud, from cloud infrastructure to edge computing deployments. Paired with cnvrg.io’s end-to-end MLOps solution, data science teams will be able to develop and deploy ML models and intelligent applications into production faster.
Together with Red Hat OpenShift, cnvrg.io provides the tools data scientists and DevOps teams need, out of the box:
- Managed Kubernetes deployment on any cloud or on-premises environment
- Fully automated installation and life cycle management
- All tools data scientists need for ML/AI development: from research to deployment
- An open and flexible, code-first data science platform that integrates open source tools
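Since cnvrg.io is listed as a certified operator on OperatorHub, installation on an OpenShift cluster would typically go through the Operator Lifecycle Manager. The sketch below shows the general pattern; the package name `cnvrg-operator`, channel `stable`, and namespace `cnvrg` are assumptions for illustration and should be checked against the actual OperatorHub listing.

```shell
# Hedged sketch: subscribing to a certified operator from OperatorHub via OLM.
# The package name "cnvrg-operator", channel "stable", and namespace "cnvrg"
# are assumptions; verify them against the real OperatorHub catalog entry.
oc new-project cnvrg

cat <<'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cnvrg-operatorgroup
  namespace: cnvrg
spec:
  targetNamespaces:
    - cnvrg
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cnvrg-operator
  namespace: cnvrg
spec:
  channel: stable
  name: cnvrg-operator
  source: certified-operators
  sourceNamespace: openshift-marketplace
EOF

# Watch the Operator Lifecycle Manager roll the operator out:
oc get csv -n cnvrg -w
```

The `certified-operators` catalog source in `openshift-marketplace` is where OpenShift surfaces Red Hat-certified partner operators; OLM then handles the "fully automated installation and life cycle management" described above.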
“The AI infrastructure ecosystem is growing rapidly. We’re collaborating with cnvrg.io as part of our OperatorHub to help provide an end-to-end MLOps solution that data scientists, IT and DevOps engineers need to effectively manage, build and deploy machine learning at the enterprise level,” says Tushar Katarki, product manager, Red Hat OpenShift. “cnvrg.io is a great choice for OpenShift and GPUs to run ML workloads, and enables enhanced collaboration between data scientists and engineers for accelerated ML deployment across the hybrid cloud.”
“Red Hat OpenShift is an excellent foundational technology for cnvrg.io, as it provides powerful automation for Kubernetes clusters and infrastructure life cycle management, while cnvrg.io adds advanced AI/ML capabilities from model management to training to production ML, enhancing the power of OpenShift Kubernetes infrastructure with a strong and native integration,” says Yochay Ettun, CEO and Co-founder of cnvrg.io. “This collaboration can help data scientists and IT organizations operationalize their machine learning and dramatically accelerate time to production.”