
Marvell Expands Custom Compute Platform with UALink Scale-up Solution for AI Accelerated Infrastructure

Essential technology, done right (PRNewsfoto/Marvell Technology Group Ltd.)

Marvell Technology, a leader in data infrastructure semiconductor solutions, today announced its custom Ultra Accelerator Link (UALink) scale-up offering. As part of the Marvell comprehensive IP portfolio for custom AI compute platforms, the new custom UALink solution delivers an open, standards-based scale-up interconnect with high compute utilization and low latency, enabling greater efficiency and scalability between AI accelerators and switches in next-generation accelerated infrastructure.


The Marvell custom UALink scale-up solution features a comprehensive set of interoperable IPs, including:

  • Best-in-class 224G SerDes and UALink Physical Layer IP
  • Configurable UALink Controller IP
  • Scalable low-latency Switch Core and Fabric IP
  • Advanced packaging options including co-packaged copper and co-packaged optics

The custom UALink solution enables customers to deliver scale-up interconnects linking hundreds or thousands of AI accelerators in a single deployment. Paired with Marvell custom silicon capabilities, compute vendors can build solutions including custom accelerators with UALink controllers and custom switches. The combination of Marvell advanced packaging technology and the custom UALink architecture enables optimal performance for rack-scale AI.

Hyperscalers are increasingly challenged by the need to scale AI infrastructure while ensuring high performance. The Marvell custom UALink offering addresses these challenges with an open, standards-based toolkit that enables direct, low-latency communication between accelerators and supports flexible, scalable switch topologies. Marvell empowers hyperscalers to build next-generation AI infrastructure with the performance, interoperability and efficiency required to support AI workloads.


“We are pleased to introduce our new custom UALink offering to enable the next generation of AI scale-up systems,” said Nick Kucharewski, senior vice president and general manager, Cloud Platform Business Unit at Marvell. “This addition to our custom portfolio gives customers the flexibility to optimize their AI infrastructure with standards-based scale-up switch and interconnect technology.”

“We are excited to see UALink custom solutions from Marvell, which are essential to the future of AI,” said Forrest Norrod, executive vice president and general manager, Data Center Solutions Group, AMD. “At our core, we’re committed to building large-scale AI and high-performance computing solutions grounded in open standards, efficiency and strong ecosystem support. We look forward to continued collaboration within the open UALink ecosystem to advance scale-up networks.”

