Anyscale Unveils Ray 2.0 and Anyscale Innovations at Ray Summit 2022; Adds an Additional $99 Million in Funding from Existing Investors Addition, Intel Capital, and Foundation Capital
Anyscale, the company behind Ray, the unified framework for scalable computing, announced Ray 2.0 and the enterprise-ready capabilities and roadmap for Anyscale’s managed Ray platform at the Ray Summit. This year’s Summit features dozens of organizations scaling their AI initiatives with Ray, including Uber, IBM, Meta, Riot Games, Instacart, and more. Anyscale also announced today that it has secured an additional $99 million in Series C funding, co-led by existing investors Addition and Intel Capital with participation from Foundation Capital. The funding follows the $100 million Series C round announced in December 2021.
The accelerated adoption of Ray is driven by the growing gap between the demands of machine learning (ML) applications and the limitations of a single processor or a single server. In the past few years alone, the computational requirements for ML training have grown by 10 to 35 times every 18 months. This gap, combined with the engineering complexity of scaling these workloads, has led to over 85 percent of AI projects failing in production. Ray tackles the cost and complexity of scaling head-on and is the fastest-growing open-source, unified distributed framework for scaling AI and Python applications.
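For readers new to Ray, its core API turns ordinary Python functions into tasks that can run in parallel across a laptop’s cores or a whole cluster. The sketch below is a minimal illustration, not code from the announcement; the function and workload are hypothetical.

```python
import ray

ray.init()  # connect to an existing cluster, or start a local one

# Hypothetical CPU-bound work; the @ray.remote decorator lets Ray
# schedule calls across every core or node it can see.
@ray.remote
def score_batch(batch_id: int) -> int:
    return sum(i * i for i in range(batch_id * 1_000))

# Launch 100 tasks in parallel, then block until all results arrive.
futures = [score_batch.remote(i) for i in range(100)]
results = ray.get(futures)
print(f"Completed {len(results)} batches")
```

The same decorator-based pattern extends to stateful actors and to the higher-level ML libraries described below.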
“Ray and the Anyscale platform have made tremendous progress in advancing the scaling of machine learning and Python workloads,” said Anyscale CEO Robert Nishihara. “Thousands of organizations already rely on Ray for their AI initiatives and dozens of them are showcasing their use cases and breakthroughs at this year’s Ray Summit. With new innovations in Ray 2.0 and Anyscale’s platform, we are further accelerating our efforts to ensure Ray is easily accessible to any developer and to organizations of all sizes.”
“Ray is quickly becoming the industry standard for scaling machine learning, Python and AI workloads, solving one of the biggest obstacles today to realizing AI’s full potential,” said Nick Washburn, Senior Managing Director at Intel Capital. “The rapid adoption of Ray positions Anyscale to unlock the growing market opportunity in AI.”
On the first day of Ray Summit, the company unveiled major new developments in Ray 2.0. Highlights include:
- Ray AI Runtime (AIR)
- Ray AIR adds a unified runtime layer for ML applications and services.
- The runtime simplifies ML application development, increases developer velocity, and is cloud- and framework-agnostic, enabling interoperability with popular frameworks such as TensorFlow, PyTorch, Hugging Face, and more.
- Ray AIR overcomes the hurdles of proprietary cloud and vendor solutions and is available now in beta; a minimal usage sketch follows this list.
- KubeRay
- KubeRay is a joint collaboration between Anyscale, Microsoft, and ByteDance.
- The KubeRay toolkit enables robust execution of Ray applications on Kubernetes, an open-source system for managing containerized applications across multiple hosts.
- Integration with the ML Ecosystem
- Seamless interoperability with ML ecosystem tools such as Weights & Biases and Hugging Face Transformers allows developers to combine Ray’s scaling advantages with the broader ML ecosystem, including ML libraries, data platforms, and MLOps platforms; an illustrative logging sketch follows this list.
- Ease of Development & Increased Scalability
- New Ray 2.0 optimizations support petabyte-scale, compute-intensive workloads.
- New observability tooling eases development and debugging, enabling faster iteration cycles for developers and faster time-to-market.
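To make the Ray AIR highlights above concrete, the sketch below trains a distributed XGBoost model through AIR’s Trainer API. It assumes Ray 2.x with the XGBoost integration installed; the public sample dataset path and the “target” label column follow Ray’s documentation examples and are assumptions here, not details from the announcement.

```python
import ray
from ray.air.config import ScalingConfig
from ray.train.xgboost import XGBoostTrainer

# Read a tabular dataset into a distributed Ray Dataset
# (public sample file from Ray's docs; the path is an assumption).
dataset = ray.data.read_csv("s3://anonymous@air-example-data/breast_cancer.csv")

# Spread training across two workers; AIR handles data sharding and
# worker coordination the same way regardless of the backing framework.
trainer = XGBoostTrainer(
    scaling_config=ScalingConfig(num_workers=2),
    label_column="target",
    params={"objective": "binary:logistic", "eval_metric": ["logloss"]},
    datasets={"train": dataset},
)

result = trainer.fit()
print(result.metrics)
```

Swapping in a PyTorch, TensorFlow, or Hugging Face trainer changes the Trainer class but not the surrounding scaling and configuration code, which is the framework-agnostic behavior described above.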
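Similarly, the ecosystem interoperability noted above can be sketched with a Ray Tune sweep that streams results to Weights & Biases through an AIR callback. The import path reflects the Ray 2.x AIR callbacks module and may differ in other releases; the objective function, search space, and project name are hypothetical, and a configured Weights & Biases account is assumed.

```python
from ray import tune
from ray.air import RunConfig, session
from ray.air.callbacks.wandb import WandbLoggerCallback  # Ray 2.x path (assumption)

# Hypothetical objective: each trial reports one metric back to Tune.
def objective(config):
    loss = (config["lr"] - 0.01) ** 2
    session.report({"loss": loss})

tuner = tune.Tuner(
    objective,
    param_space={"lr": tune.loguniform(1e-4, 1e-1)},
    tune_config=tune.TuneConfig(num_samples=8),
    run_config=RunConfig(
        # Every trial's metrics are mirrored to the named W&B project.
        callbacks=[WandbLoggerCallback(project="ray-demo")]  # hypothetical project name
    ),
)
results = tuner.fit()
print(results.get_best_result(metric="loss", mode="min").config)
```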
Anyscale today also introduced its enterprise-ready Ray platform, a unified compute platform that makes it easy to build, manage, and bring to market scalable AI and Python applications using Ray. Anyscale highlights include:
- Next Generation ML Workspace
- The Anyscale ML Workspace enables users to develop and scale ML applications from prototype to production and back for debugging. This experience makes moving AI applications and ML pipelines to production easier and reduces context switching.
- The Anyscale ML Workspace eases deployments by integrating with best-in-class ML tools like Weights & Biases and Arize AI. This allows users to scale compute for machine learning while leveraging the best MLOps tools.
- New Enterprise Platform
- The Anyscale Enterprise Platform gives IT and security teams secure cluster connectivity and customer-managed virtual private clouds.
- The Enterprise Platform also offers users activity auditing, operational monitoring and cost management capabilities to enable organizations to track spending. This functionality is currently in preview.