Arrcus Unveils Groundbreaking ACE-AI Networking Solution at MWC Las Vegas
Empowering Distributed AI with the industry’s most flexible, scalable, lossless, and resilient networking platform
Arrcus, the hyperscale networking software company and a leader in core, edge, and multi-cloud routing and switching infrastructure, is proud to introduce its trailblazing networking solution, Arrcus Connected Edge for AI (ACE-AI), designed to revolutionize the networking industry for AI/ML workloads.
In the heart of today’s digitally driven world, a monumental transformation is underway. Generative Artificial Intelligence (GenAI) has emerged as a powerful force, reshaping entire industries and redefining our digital landscape. At the epicenter of this transformative wave is the exponential growth of datacenter network traffic, driven by GenAI applications. AI/ML workloads will be increasingly distributed across the edge, colocation facilities, telco PoPs/datacenters, and public clouds. This growth underscores the urgent need for networks to become distributed, open, lossless, and predictable to support this revolutionary paradigm shift.
“AI networking bandwidth is going to grow over 100% Y/Y in the second half of 2023 and throughout 2024 based on the remarkable growth in vendor revenue associated with AI/ML, and this class of networking bandwidth growth is nearly three times that of traditional data center networking,” said Alan Weckel, Founder and Technology Analyst at 650 Group. “We see AI/ML as a major catalyst for the growth of Data Center switching over the next five years, and we expect it to be distributed in nature. Arrcus’ adaptable ACE-AI platform effectively addresses these demands with its open and flexible network fabric, designed to unify the distributed AI/ML workloads wherever they may reside.”
Arrcus’ innovative ACE-AI networking solution, based on ArcOS, delivers a modern, unified network fabric that optimizes GPUs and other distributed compute resources for maximum performance on AI/ML workloads. Ethernet, as the underlying technology, is well suited to these needs with its inherent scalability, reliability, flexibility, and low latency; ACE-AI builds on Ethernet to seamlessly weave together the entire network infrastructure, spanning from the edge to the core to multi-cloud and encompassing both switching and routing. Communication Service Providers (CSPs), enterprises, and datacenter operators can now harness the potential of 5G and GenAI by pooling compute resources wherever they reside across the network to drive business outcomes.
“AI/ML workloads, Large Language Models (LLMs), and compute-intensive applications like GenAI need to be delivered in a distributed fashion, to enable pooling of scarce, expensive compute resources and to ensure low latency at the point of consumption,” said Shekar Ayyar, Chairman and CEO, Arrcus. “With Arrcus ACE-AI, enterprises, CSPs, and hyperscalers now have the ability to transition from legacy, single-vendor networks to a more modern, scalable, and software-defined network with lower TCO in support of their AI expansion plans.”
For next-generation datacenters, ACE-AI supports traditional Clos and Virtualized Distributed Routing (VDR) architectures at massive scale and performance, providing lossless, predictable connectivity for GPU clusters with high resiliency, availability, and visibility. Features such as Priority Flow Control (PFC), intelligent congestion detection, and buffering at ingress points prevent packet drops, ensuring lower Job Completion Times (JCT) and reduced tail latency. In the always-on world of GenAI, network high availability is crucial; it is supported with features like Hitless Upgrade, which reduces software maintenance upgrade times to under 20ms.
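The link between drop-free transport and tail latency can be illustrated with a short, self-contained simulation. The sketch below is not ArcOS functionality or Arrcus code; it simply assumes a synchronized collective step that finishes only when its slowest flow finishes, and compares a hypothetical lossy fabric (where a drop costs a retransmission timeout) with a lossless one (where PFC adds only a brief pause). All numbers are illustrative assumptions.

import random

def job_completion_time(num_flows, base_ms=10.0, drop_prob=0.0,
                        rto_ms=200.0, pause_ms=0.5):
    """One synchronized collective step: the step finishes only when the
    slowest flow finishes, so JCT tracks tail latency."""
    flow_times = []
    for _ in range(num_flows):
        t = base_ms + random.uniform(0.0, 2.0)   # nominal transfer time
        if random.random() < drop_prob:
            t += rto_ms                           # drop -> retransmission timeout
        else:
            t += pause_ms                         # lossless: brief PFC pause instead
        flow_times.append(t)
    return max(flow_times)                        # slowest flow gates the step

def p99(xs):
    return sorted(xs)[int(0.99 * len(xs)) - 1]

random.seed(7)
lossy    = [job_completion_time(512, drop_prob=0.001, pause_ms=0.0) for _ in range(1000)]
lossless = [job_completion_time(512, drop_prob=0.0)                 for _ in range(1000)]
print(f"lossy fabric:    p99 step time ~ {p99(lossy):.1f} ms")
print(f"lossless fabric: p99 step time ~ {p99(lossless):.1f} ms")

Under these assumed numbers, even a 0.1% per-flow drop rate pushes the 99th-percentile step time out by roughly the retransmission timeout, which is why preventing drops at ingress matters so much for JCT.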
“High-performing, reliable, and low-latency networks that are energy efficient are fundamental to bringing the AI/ML disruption to reality. Ethernet, with its standards-based approach, is perfectly suited to deliver this critical connectivity for the distributed AI paradigm,” said Ram Velaga, senior vice president and general manager, Core Switching Group, Broadcom Inc. “We share Arrcus’ vision and are excited to collaborate with them to create a network fabric that is open and programmatic, with first-rate performance.”
To leverage 5G for delivering GenAI applications at the edge, ACE-AI also draws on innovations like SRv6 Mobile User Plane (MUP) to enable automated network slicing simply, efficiently, and cost-effectively. Network visibility is paramount for GenAI workloads; ArcIQ is a modern network visibility and analytics platform that gives networking professionals real-time, deep views of networks and devices, with actionable insights for proactive incident management and faster troubleshooting.
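To make the slicing idea concrete, the fragment below sketches how an SRv6 segment list can steer a slice’s traffic through per-slice waypoints. It uses the open-source scapy library; the segment identifiers (SIDs) and slice names are hypothetical placeholders, and it illustrates SRv6 source routing in general rather than Arrcus’ MUP implementation.

# Illustrative SRv6 sketch (hypothetical SIDs, not Arrcus configuration).
from scapy.layers.inet6 import IPv6, IPv6ExtHdrSegmentRouting
from scapy.layers.inet import UDP

# Hypothetical per-slice waypoint segments (SIDs)
SLICE_SEGMENTS = {
    "embb-slice":  ["fd00:a::1", "fd00:a::2"],
    "urllc-slice": ["fd00:b::1", "fd00:b::2"],
}

def build_sliced_packet(slice_name, src, dst):
    segs = SLICE_SEGMENTS[slice_name]
    # Per RFC 8754 the segment list is encoded in reverse order and
    # Segments Left points at the next segment to visit.
    srh = IPv6ExtHdrSegmentRouting(
        addresses=list(reversed(segs + [dst])),
        segleft=len(segs),
    )
    # The outer destination is the first waypoint on the slice's path.
    return IPv6(src=src, dst=segs[0]) / srh / UDP(sport=40000, dport=9000)

pkt = build_sliced_packet("urllc-slice", "fd00:100::10", "fd00:200::20")
pkt.show()

In a real deployment the per-slice segment lists would be programmed by the network, not the application; the sketch only shows how a segment list expresses the slice’s path.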
Open networks serve as the cornerstone of adaptability and rapid innovation, and Arrcus supports a wide range of energy-conserving merchant silicon and hardware options from multiple ODMs, ranging in speed from 1G to 400G and spanning shallow- and deep-buffer switches, granting customers unprecedented choice.
“The AI revolution is underway and demands a comprehensive reevaluation of network infrastructure development to effectively address the requirements of distributed AI,” said Heimdall Siao, President of Edgecore Networks. “We are enthusiastic about our collaboration with Arrcus to build the next-generation infrastructure, combining Edgecore’s Open Networking platform with Arrcus’ versatile ACE-AI solution.”
“To adequately address the high-performance, low-latency needs of applications like GenAI combined with 5G transport, it is critical to transition from legacy networking to an open, disaggregated stack that pairs hardware platforms built on the latest merchant silicon with a scalable, programmable networking stack,” said Vincent Ho, CEO of UfiSpace. “UfiSpace and Arrcus deliver a highly innovative solution to address the 5G and AI revolution.”
As GenAI reshapes industries, Arrcus matches this dynamism by providing open, lossless, predictable, and distributed networks that lay the foundation for datacenters to flourish in this transformative era.