Supermicro Introduces a Rack Scale Total Solution for AI Storage to Accelerate Data Pipelines

Turn-Key Data Storage Solution for Large-Scale AI Training and Inference – A Multi-Tier Solution Scaling to Hundreds of Petabytes Delivers the Massive Data Capacity and High-Performance Data Bandwidth Required by Scalable AI Workloads

Supermicro, Inc., a Total IT Solution Manufacturer for AI, Cloud, Storage, and 5G/Edge, is launching a full-stack, optimized storage solution for AI and ML data pipelines, from data collection to high-performance data delivery. This new solution maximizes AI time-to-value by keeping GPU data pipelines fully saturated. For AI training, massive amounts of raw data at petascale capacities can be collected, transformed, and loaded into an organization’s AI workflow pipeline. This multi-tiered Supermicro solution has been proven to deliver multi-petabyte data for AIOps and MLOps in production environments. The entire multi-rack scale solution from Supermicro is designed to reduce implementation risks, enable organizations to train models faster, and quickly use the resulting data for AI inference.
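
For readers who want a concrete picture of what keeping a GPU data pipeline saturated looks like at the software level, the sketch below uses PyTorch's DataLoader to stream samples from a shared flash-tier mount with multiple reader processes and prefetching. The mount path, file format, and dataset class are illustrative assumptions, not part of Supermicro's or WEKA's published tooling.

```python
# Minimal sketch: overlap storage reads with GPU compute so the GPUs stay busy.
# Assumptions (not from the announcement): the flash tier is mounted at
# /mnt/weka/train and holds pre-processed .npy sample files.
import glob

import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset


class FlashTierDataset(Dataset):
    """Reads individual samples from the (hypothetical) flash-tier mount."""

    def __init__(self, root="/mnt/weka/train"):
        self.files = sorted(glob.glob(f"{root}/*.npy"))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        # Each worker process issues its own read, which keeps many
        # NVMe queues busy in parallel.
        return torch.from_numpy(np.load(self.files[idx]))


def main():
    loader = DataLoader(
        FlashTierDataset(),
        batch_size=64,
        num_workers=16,       # parallel readers against the flash tier
        prefetch_factor=4,    # batches queued ahead of the GPU
        pin_memory=True,      # faster host-to-GPU copies
        shuffle=True,
    )
    device = "cuda" if torch.cuda.is_available() else "cpu"
    for batch in loader:
        batch = batch.to(device, non_blocking=True)
        # ... forward/backward pass would go here ...


if __name__ == "__main__":
    main()
```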

“With 20 PB per rack of high-performance flash storage driving four application-optimized NVIDIA HGX H100 8-GPU based air-cooled servers or eight NVIDIA HGX H100 8-GPU based liquid-cooled servers, customers can accelerate their AI and ML applications running at rack scale,” said Charles Liang, president and CEO of Supermicro. “This solution can deliver 270 GB/s of read throughput and 3.9 million IOPS per storage cluster as a minimum deployment and can easily scale up to hundreds of petabytes. Using the latest Supermicro systems with PCIe 5.0 and E3.S storage devices and WEKA Data Platform software, users will see significant increases in the performance of AI applications with this field-tested rack scale solution. Our new storage solution for AI training enables customers to maximize the usage of our most advanced rack scale solutions of GPU servers, reducing their TCO and increasing AI performance.”

Petabytes of unstructured data used in large-scale AI training must be available to the GPU servers with low latency and high bandwidth to keep the GPUs productive. Supermicro’s extensive portfolio of Intel- and AMD-based storage servers is a crucial element of the AI pipeline. These include the Supermicro Petascale All-Flash storage servers, which deliver 983.04* TB of NVMe Gen 5 flash capacity per server along with up to 230 GB/s of read bandwidth and 30 million IOPS. This solution also includes the Supermicro SuperServer 90-drive-bay storage servers for the capacity object tier. This complete and tested solution is available worldwide for customers running ML, GenAI, and other computationally complex workloads.
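
To relate the per-server figures above to the roughly 20 PB-per-rack flash tier quoted earlier, the back-of-the-envelope arithmetic below assumes about 20 Petascale storage servers per rack; the server count is an illustrative assumption, not a figure from the announcement.

```python
# Back-of-the-envelope arithmetic relating per-server specs to the quoted
# ~20 PB-per-rack flash tier. The servers-per-rack count is an assumption.
SERVERS_PER_RACK = 20          # assumed for illustration, not from the announcement
TB_PER_SERVER = 983.04         # NVMe Gen 5 flash capacity per Petascale server
READ_GBPS_PER_SERVER = 230     # peak read bandwidth per server

rack_capacity_pb = SERVERS_PER_RACK * TB_PER_SERVER / 1000
print(f"Flash capacity per rack: ~{rack_capacity_pb:.1f} PB")   # ~19.7 PB

# The quoted 270 GB/s and 3.9 million IOPS describe a minimum storage-cluster
# deployment; delivered rack-level throughput depends on cluster sizing,
# network fabric, and software overhead rather than a simple multiple.
raw_read_gbps = SERVERS_PER_RACK * READ_GBPS_PER_SERVER
print(f"Aggregate raw read bandwidth (theoretical upper bound): {raw_read_gbps} GB/s")
```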

The new storage solution consists of the following components (a data-staging sketch follows the list):

  • All-Flash tier – Supermicro Petascale Storage Servers
  • Application tier – Supermicro 8U GPU Servers: AS -8125GS-TNHR and SYS-821GE-TNHR
  • Object tier – Supermicro 90-drive-bay 4U SuperStorage Server running Quantum ActiveScale object storage
  • Software: WEKA Data Platform and Quantum ActiveScale object storage
  • Switches: Supermicro InfiniBand and Ethernet Switches
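
As a rough illustration of how data might move between the tiers listed above, the sketch below stages a dataset from the object tier (ActiveScale exposes an S3-compatible API) onto the flash tier (WEKA presents a POSIX filesystem). The endpoint, bucket, prefix, and mount path are hypothetical, and in a real deployment the staging and tiering policy would typically be handled by the WEKA and ActiveScale software rather than a hand-written script.

```python
# Minimal sketch of staging a training dataset from the capacity object tier
# into the all-flash tier ahead of a training run. Assumptions: ActiveScale is
# reached over its S3-compatible API, the WEKA filesystem is mounted at
# /mnt/weka, and the endpoint/bucket/prefix names below are hypothetical.
import os

import boto3

OBJECT_TIER_ENDPOINT = "https://activescale.example.internal"  # hypothetical
BUCKET = "training-data"                                        # hypothetical
PREFIX = "datasets/run-2024/"                                   # hypothetical
FLASH_TIER_ROOT = "/mnt/weka/staged"                            # hypothetical

s3 = boto3.client("s3", endpoint_url=OBJECT_TIER_ENDPOINT)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        dest = os.path.join(FLASH_TIER_ROOT, key)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        s3.download_file(BUCKET, key, dest)   # object tier -> flash tier
        print(f"staged {key} -> {dest}")
```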

“The high performance and large flash capacity of Supermicro’s All-Flash Petascale Storage Servers perfectly complement WEKA’s AI-native data platform software. Together, they provide the unparalleled speed, scale, and simplicity demanded by today’s enterprise AI customers,” said Jonathan Martin, president at WEKA.
