Supermicro COMPUTEX Keynote Unveils Company’s Accelerate Everything Strategy for Product Innovation, Manufacturing Scale, and Green Technology
Super Micro Computer, a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, continues to offer IT solutions that reduce the environmental impact of modern data centers. Supermicro is advancing technology in critical areas such as product design, green computing, manufacturing, and rack-scale integration, enabling organizations to become productive quickly while reducing their energy consumption.
“Our Green Computing focus enables Supermicro to design and manufacture state-of-the-art servers and storage systems with the latest CPU and GPU technologies from NVIDIA, Intel, and AMD that reduce power consumption,” said Charles Liang, president and CEO of Supermicro. “Our innovative rack-scale liquid cooling option enables organizations to reduce data center power usage expenses by up to 40%. Our popular GPU servers built on the NVIDIA HGX H100 8-GPU platform continue to be in demand for AI workloads. We are expanding our solution offerings with innovative servers that use the NVIDIA Grace CPU Superchip and are working closely with NVIDIA to bring energy-efficient servers to market for AI and other industries. Worldwide, our manufacturing capacity is 4,000 racks today and will exceed 5,000 later this year.”
Supermicro has the most comprehensive portfolio to support AI workloads and other verticals. These innovative systems include single- and dual-socket rack-mount systems based on 4th Gen Intel Xeon Scalable processors and 4th Gen AMD EPYC processors in 1U, 2U, 4U, 5U, and 8U form factors supporting 1 to 10 GPUs, as well as the density-optimized SuperBlade systems supporting 20 NVIDIA H100 GPUs in an 8U enclosure and SuperEdge systems designed for IoT and edge environments. The newly announced E3.S Petascale storage systems offer significant performance, capacity, throughput, and endurance when training on very large AI datasets while maintaining excellent power efficiency.
A new product family built on the NVIDIA Grace CPU Superchip will be available soon. Each of these new servers will contain 144 cores across dual CPUs joined by a 900 GB/s connection, allowing for highly responsive AI applications and those requiring extremely low-latency responses. With the CPU running at a 500W TDP, this system will reduce energy consumption for cloud-native workloads and the next generation of AI applications.
With AI applications proliferating, the demand for high-end servers designed for AI is increasing, which brings new challenges for system providers incorporating the latest CPUs and GPUs. The most advanced Supermicro GPU server incorporates dual CPUs and up to eight NVIDIA HGX H100 GPUs, available with a liquid-cooled option that reduces OPEX.
“NVIDIA is working closely with Supermicro to quickly bring innovations to new server designs to meet the needs of the most demanding customers,” said Ian Buck, vice president of hyperscale and HPC at NVIDIA. “With Supermicro’s servers powered by Grace CPU Superchips shipping shortly and H100 GPUs gaining traction around the world, we’re working together to bring AI to a wide range of markets and applications.”
To reduce TCO for customers, Supermicro is endorsing the new NVIDIA MGX reference architecture, which will result in over a hundred server configurations for a range of AI, HPC, and Omniverse applications. This modular reference architecture includes CPUs, GPUs, and DPUs and is designed to span multiple generations of processors.
Supermicro will also incorporate the latest NVIDIA networking technology, the NVIDIA Spectrum-X networking platform, in a broad range of solutions. The platform is the first designed specifically to improve the performance and efficiency of Ethernet-based AI clouds. Spectrum-X is built on network innovations powered by the tight coupling of the NVIDIA Spectrum-4 Ethernet switch with the NVIDIA BlueField-3 data processing unit (DPU). This breakthrough technology achieves 1.7X better overall AI performance and energy efficiency, along with consistent, predictable performance in multi-tenant environments.
Green computing is critical for today’s data centers, which account for 1–1.5% of worldwide electricity demand. Supermicro’s complete rack-scale liquid cooling solution significantly reduces the need for traditional cooling methods. With redundant and hot-swappable power supplies and pumps, entire racks of high-performing, AI- and HPC-optimized servers can be cooled efficiently even during a power supply or pump failure. The solution also uses custom-designed cold plates for both CPUs and GPUs, which remove heat more efficiently than traditional designs. If data centers lower their PUE closer to 1.0 with Supermicro technology, up to $10B in energy costs can be saved and the construction of 30 fossil fuel power plants avoided.
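To give a rough sense of what lowering PUE means in practice, the following minimal Python sketch applies the standard PUE definition (total facility energy divided by IT equipment energy) to a hypothetical 1 MW facility. All of the figures in the sketch are illustrative assumptions for demonstration only, not Supermicro data.

```python
# Minimal sketch: how lowering PUE reduces total facility energy for a fixed IT load.
# All numbers below are illustrative assumptions, not Supermicro figures.

def facility_energy_kwh(it_load_kw: float, pue: float, hours: float = 8760) -> float:
    """Annual facility energy, using PUE = total facility energy / IT equipment energy."""
    return it_load_kw * pue * hours

it_load_kw = 1_000        # assumed 1 MW of IT equipment load
baseline_pue = 1.6        # assumed air-cooled baseline
improved_pue = 1.1        # assumed liquid-cooled target closer to 1.0
price_per_kwh = 0.10      # assumed electricity price in USD

baseline_kwh = facility_energy_kwh(it_load_kw, baseline_pue)
improved_kwh = facility_energy_kwh(it_load_kw, improved_pue)
savings_kwh = baseline_kwh - improved_kwh

print(f"Annual energy saved: {savings_kwh:,.0f} kWh")
print(f"Annual cost saved:   ${savings_kwh * price_per_kwh:,.0f}")
```

Under these assumed figures, a single 1 MW facility would save roughly 4.4 million kWh per year; scaled across many data centers, reductions of this kind are the basis for industry-wide savings claims like the one above.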