Marvell Announces Cloud-Optimized 51.2 Tbps Networking Platform for AI/ML and Data Center Networks
Marvell Technology, a leader in data infrastructure semiconductor solutions, announced a cloud-optimized 51.2 Tbps networking platform to relieve bottlenecks in artificial intelligence (AI)/machine learning (ML) and cloud data center networks. The new platform, which quadruples the bandwidth of widely deployed 12.8 Tbps networking solutions, comprises the ultra-low-latency Marvell Teralynx 10 51.2 Tbps switch chip and the industry’s first PAM4 1.6 Tbps electro-optics platform, Marvell Nova™. The technology allows cloud data center operators to reduce time spent in networking, maximize compute utilization and meet the growing bandwidth demands of AI and ML.
To meet exploding bandwidth requirements, operators need to upgrade to higher-performance networking solutions that accelerate time to market while reducing cost and power per bit. Marvell’s cloud-optimized Teralynx 10 and Nova networking platform ensures interoperability between switch and optics, reducing the burden of validation and interoperability testing for customers and accelerating the deployment of these next-generation technologies. The Nova 1.6 Tbps platform enables 51.2 Tbps switching in a single rack unit (1RU), improving bandwidth density in the cloud data center.
“Data center operators are challenged to meet the networking demands that applications such as artificial intelligence and machine learning are driving in the cloud,” said Nariman Yousefi, executive vice president, Automotive, Coherent DSP and Switch Group at Marvell. “To address this demand, leading operators are planning to upgrade directly from 12.8 Tbps to 51.2 Tbps. Utilizing the industry’s lowest latency programmable switch, Teralynx 10, with the industry’s first 1.6 Tbps optical platform, Nova, offers data center operators a cloud-optimized platform to scale and address the growing demands of AI/ML applications.”
“With bandwidth demand growing at more than 50% per year, cloud data center operators are locked in a never-ending effort to dramatically increase the performance and capabilities of their operations while keeping equipment costs, rack space and power to a minimum. Basing infrastructure around 51.2 Tbps switches and 200 Gbps per lambda optical PAM4-based modules will become the gold standard for the next era of networking,” said Alan Weckel, co-founder of 650 Group. “Marvell is paving a path forward that will benefit both clouds and their customers.”
Teralynx 10 Ultra-Low Latency Programmable 51.2 Tbps Switch Chip
Based on a proven architecture, Teralynx 10 is a programmable 51.2 Tbps switch chip designed to handle high-bandwidth workloads. The Teralynx architecture has demonstrated a 1.7x latency advantage, enabling operators to reduce time spent in networking and speed workload processing. In addition, Teralynx 10 and Nova utilize the industry’s best-in-class 112G SerDes IP, enabling low-cost, low-power system designs. Key features of Teralynx 10 include:
- Up to 512 SerDes lanes supporting 25 Gbps, 50 Gbps and 100 Gbps I/O speeds for a wide range of switch systems and connectivity options (see the sketch after this list)
- Congestion-aware routing to minimize network bottlenecks and congestion
- Permutable flex-forwarding to enable operators to program new packet forwarding protocols as networks evolve
- Teralynx® Flashlight™ telemetry to deliver comprehensive and advanced capabilities including support for P4 in-band network telemetry
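As a rough illustration of how the lane count relates to the headline figure, the sketch below multiplies the 512 SerDes lanes by the supported lane speeds to get aggregate bandwidth, and groups lanes into example port configurations. The 512-lane and 25/50/100 Gbps figures come from the release; the lanes-per-port groupings are assumptions for illustration only.

```python
# Back-of-the-envelope check of how SerDes lane count and lane speed
# translate into aggregate switch bandwidth and front-panel port counts.
# 512 lanes and the 25/50/100 Gbps speeds are from the release; the
# lanes-per-port groupings below are illustrative assumptions.

LANES = 512  # SerDes lanes on Teralynx 10 (per the release)

def aggregate_tbps(lanes: int, gbps_per_lane: int) -> float:
    """Aggregate switching bandwidth in Tbps for a given lane speed."""
    return lanes * gbps_per_lane / 1000

def port_count(lanes: int, lanes_per_port: int) -> int:
    """Number of front-panel ports when lanes are grouped into ports."""
    return lanes // lanes_per_port

if __name__ == "__main__":
    for speed in (25, 50, 100):
        print(f"{LANES} lanes x {speed} Gbps = {aggregate_tbps(LANES, speed):.1f} Tbps")
    # Example (assumed) port groupings at 100 Gbps per lane:
    print("800G ports (8 lanes each):", port_count(LANES, 8))    # 64
    print("1.6T ports (16 lanes each):", port_count(LANES, 16))  # 32
```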
Nova 1.6 Tbps PAM4 Electro-Optics Platform
Powered by a groundbreaking 200 Gbps/lambda optical DSP, Nova doubles the optical bandwidth compared to current solutions while reducing power and cost per bit by 30%. Today’s highest-performance 800 Gbps optical modules are based on 100 Gbps per lambda optical bandwidth, so 64 modules are required to move data to and from a 51.2 Tbps switch system. With Nova, the number of optical modules can be halved to 32, each running at 1.6 Tbps. In addition, the number of optical components per optical module is reduced by 50%, increasing module reliability while reducing manufacturing complexity and cost.
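The module-count arithmetic behind that claim is straightforward; the minimal sketch below divides the switch’s 51.2 Tbps by the per-module rate. Only the 51.2 Tbps, 800 Gbps and 1.6 Tbps figures come from the release; the helper function is an illustration, not a sizing tool.

```python
# Rough arithmetic behind "Nova halves the optical module count":
# divide the switch's aggregate bandwidth by the per-module rate.
# Figures from the release: 51.2 Tbps switch, 800 Gbps and 1.6 Tbps modules.

SWITCH_TBPS = 51.2

def modules_needed(switch_tbps: float, module_gbps: int) -> int:
    """Optical modules required to carry the switch's full bandwidth."""
    return int(switch_tbps * 1000 // module_gbps)

print("800G modules (100 Gbps/lambda):", modules_needed(SWITCH_TBPS, 800))   # 64
print("1.6T modules (200 Gbps/lambda):", modules_needed(SWITCH_TBPS, 1600))  # 32
```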