Iterative Launches Open Source Tool, the First to Train Machine Learning Models on Any Cloud Using HashiCorp’s Terraform
The Terraform Provider Iterative (TPI) simplifies training on any cloud and saves significant time and money in maintaining and configuring compute resources.
Iterative, the MLOps company dedicated to streamlining the workflow of data scientists and machine learning (ML) engineers, announced a new open source compute orchestration tool using Terraform, a solution by HashiCorp, Inc., the leader in multi-cloud infrastructure automation software.
Terraform Provider Iterative (TPI) is the first product on HashiCorp’s Terraform technology stack to simplify ML training on any cloud while helping infrastructure and ML teams to save significant time and money in maintaining and configuring their training resources.
Built on Terraform by HashiCorp, an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services, TPI allows data scientists to deploy workloads without having to figure out the infrastructure.
Data scientists often need substantial computational resources when training ML models, including expensive GPU instances that must be provisioned for a training experiment and then de-provisioned to save on costs. Terraform helps teams specify and manage compute resources. TPI complements Terraform with additional functionality customized for machine learning use cases:
- Just-in-time compute management – TPI provisions compute resources when an experiment starts and automatically de-provisions them once it finishes, helping to reduce costs by up to 90%.
- Automated spot instance recovery – ML teams can use spot instances to train experiments without worrying about losing all their progress if a spot instance terminates. TPI automatically migrates training jobs to a new spot instance when the existing instance terminates so that the workload can pick up where it left off.
- Consistent tooling for both data scientists and DevOps engineers – TPI lets data science and software development teams collaborate using the same language and tooling. This simplifies compute management and gets ML models into production faster.
With TPI, data scientists configure the resources they need once and can then deploy anywhere in minutes. Once it is configured as part of an ML experiment pipeline, users can deploy on AWS, GCP, Azure, on-prem, or with Kubernetes.
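As an illustrative sketch, a TPI task declaration in a Terraform configuration might look like the following. The resource name, machine type, script, and storage paths here are placeholder assumptions for illustration; consult the TPI documentation for the exact attributes supported by your provider version.

```hcl
terraform {
  required_providers {
    iterative = {
      source = "iterative/iterative"
    }
  }
}

provider "iterative" {}

# Hypothetical training task: placeholder names and values.
resource "iterative_task" "example" {
  cloud   = "aws"   # could also be gcp, az, or k8s
  machine = "m+t4"  # a medium machine with an NVIDIA T4 GPU
  spot    = 0       # request a spot instance at automatic pricing

  storage {
    workdir = "."        # local directory uploaded to the instance
    output  = "results"  # directory synced back when the task ends
  }

  script = <<-END
    #!/bin/bash
    pip install -r requirements.txt
    python train.py
  END
}
```

Running `terraform apply` against such a configuration would provision the instance, execute the script, and retrieve the outputs; `terraform destroy` de-provisions the resources, matching the just-in-time compute model described above.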
“We chose Terraform as the de facto standard for defining the infrastructure-as-code approach,” said Dmitry Petrov, co-founder and CEO of Iterative. “TPI extends Terraform to fit with machine learning workloads and use cases. It can handle spot instance recovery and lets ML jobs continue running on another instance when one is terminated.”