Outerbounds Unleashes the Power of Custom Generative AI and LLM Solutions for All Enterprises
- Outerbounds integrates Ray with Metaflow, creating one open-source, human-centric machine learning (ML) framework that can handle massive-scale AI workloads.
- Outerbounds removes the biggest obstacles to scaling custom foundation models, unlocking boundless possibilities to create leading AI-powered companies across any industry.
Outerbounds, the company behind Metaflow, the modern, human-centric, open-source ML infrastructure framework, announced that it has significantly expanded its ability to handle massive-scale AI workloads and has made large amounts of compute capacity easily accessible and cost-effective for its users. As a result, Outerbounds enables organizations to take large-scale custom generative AI and LLM solutions from prototype to production. This advancement makes it possible for enterprises of any size to customize foundation models like GPT to meet their unique needs.

In recent years, the computing demands of modern businesses have grown substantially in both scale and complexity, a trend accentuated by the emergence of generative AI. Forward-looking companies that customize foundation models often grapple with three primary obstacles:
- Development of unique, customer-centric AI-powered product experiences beyond off-the-shelf solutions
- Affordable access to extensive computing power
- Developer-friendly integrations with existing software stacks
Development of unique, customer-centric AI-powered product experiences beyond off-the-shelf solutions
Companies that develop custom ML and AI applications have massive-scale compute needs that require dedicated training infrastructure. With the help of engineers on Autodesk’s Machine Learning Platform team, Outerbounds was able to integrate Ray, an open-source distributed computing framework, with Metaflow to help address this obstacle. The integration allows Outerbounds to enhance the compute layer of Metaflow and expand its ability to handle massive-scale AI workloads, better serving the needs of ML and data teams.
“By making distributed training and high-performance computing more readily available to the ML community, we open the doors to more innovative AI and ML applications,” said Rex Lam, Director of Engineering – Machine Learning at Autodesk. “We’re excited to have contributed to Outerbounds’ enhancement of Metaflow, as this gives teams the tools they need to move faster and more efficiently.”
Affordable access to extensive computing power
Obtaining cost-effective computing power, which typically relies on graphics processing units (GPUs), presents a significant challenge due to high costs and GPU scarcity. For example, according to an analyst at SemiAnalysis, as of April 2023 it cost OpenAI nearly $700,000 per day to run ChatGPT on almost 29,000 GPUs.
Outerbounds makes GPUs easily accessible and cost-effective for all users of the platform, including smaller companies. In addition to supporting GPUs offered by AWS, GCP, and Azure, the company has collaborated with CoreWeave, a cloud provider that offers GPU resources that outperform large public clouds at a fraction of the cost.
Developer-friendly integrations with existing software stacks
The Outerbounds founders developed Metaflow at Netflix in 2017 and know firsthand how to make tech stacks developer-friendly. With Outerbounds' latest enhancements, companies of any size can use its single, developer-friendly, open-source API to build modern AI and ML applications and execute them flexibly on hyperscale clouds, on-prem hardware, or specialized hardware providers.
“We have been helping companies integrate ML into their products and business processes long before the current Gen AI boom,” said Ville Tuulos, CEO and Co-founder of Outerbounds. “Our team’s deep and broad experience in the industry makes us uniquely attuned to the diverse needs of our customers. Lately, compute capacity has been a bottleneck for so many organizations wanting to build custom AI-powered experiences in terms of ease of access, productivity, and cost, and we are thrilled to be able to remove these obstacles and help unlock their full potential.”