NEC Selects Supermicro GPU Systems for One of Japan’s Largest Supercomputers for Advanced AI Research
Supermicro, a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, announced that NEC Corporation has selected over 116 Supermicro GPU servers, each with dual-socket 3rd Gen Intel Xeon Scalable processors and eight NVIDIA A100 80GB GPUs. The Supermicro GPU server line pairs the latest and most powerful Intel Xeon Scalable processors with the most advanced AI GPUs from NVIDIA.
“Supermicro is thrilled to deliver an additional 580 PFLOPS of AI training power to its worldwide AI installations,” said Charles Liang, president and CEO of Supermicro. “Supermicro GPU servers have been installed at NEC Corporation and are used to conduct state-of-the-art AI research. Our servers are designed for the most demanding AI workloads using the highest-performing CPUs and GPUs. We continue to work with leading customers worldwide to achieve their business objectives faster and more efficiently with our advanced rack-scale server solutions.”
NEC will use the supercomputer for AI research to maintain and strengthen its reputation in AI development. With a performance of more than 580 PFLOPS, the AI supercomputer will be the largest of its kind in Japan. Hundreds of AI researchers at NEC are already using part of the system. Adding the 580 PFLOPS system from Supermicro will give NEC Japan’s leading research and development environment dedicated to AI and help accelerate the development of more advanced AI algorithms. The solution is based on the NVIDIA HGX™ AI supercomputing platform, which includes NVIDIA NVLink and NVIDIA NVSwitch.
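For context, the 580 PFLOPS figure is consistent with the aggregate peak throughput of the installed GPUs. The back-of-the-envelope check below assumes 116 servers of eight A100 80GB GPUs each and NVIDIA’s published peak of 624 TFLOPS per GPU for FP16 Tensor Core operations with structured sparsity; it is only an illustrative estimate, not a benchmark result from NEC or Supermicro.

```python
# Back-of-the-envelope check of the ~580 PFLOPS figure quoted in the announcement.
# Assumptions (not stated in the article): the total refers to peak FP16 Tensor Core
# throughput with structured sparsity (624 TFLOPS per A100 80GB, per NVIDIA's spec)
# across 116 eight-GPU servers.

servers = 116            # Supermicro SYS-420GP-TNAR systems
gpus_per_server = 8      # NVIDIA A100 80GB GPUs per server
tflops_per_gpu = 624     # peak FP16 Tensor Core TFLOPS with sparsity

total_pflops = servers * gpus_per_server * tflops_per_gpu / 1000
print(f"{servers * gpus_per_server} GPUs, ~{total_pflops:.0f} PFLOPS peak")
# -> 928 GPUs, ~579 PFLOPS peak, in line with the roughly 580 PFLOPS quoted
```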
“NEC Corporation is leading the AI revolution by conducting leading-edge research using Supermicro GPU servers,” said Takatoshi Kitano, Senior AI Platform Architect at NEC. “The Supermicro GPU server is highly extensible and customizable, allowing the system to flexibly evolve as future AI research develops. We continue to work with Supermicro closely to advance our AI research.”
Specifically, the new systems from Supermicro include the SYS-420GP-TNAR server, which is optimized for large-scale distributed learning applications. Each server contains two Intel Xeon Platinum 8358 processors (32-core, 2.6 GHz), 1TB of memory, and eight NVIDIA A100 80GB Tensor Core GPUs. Local storage in each server consists of a 1.9TB NVMe SSD and four 7.6TB NVMe SSDs for data. The interconnect between servers consists of five single-port NVIDIA ConnectX-6 interfaces and one dual-port NVIDIA ConnectX-6 interface for storage.
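As an illustration of how an eight-GPU node like this is typically exercised for distributed learning, the minimal sketch below uses PyTorch’s DistributedDataParallel to run one training process per A100. The model, data, and script name are placeholders for illustration only and are not part of NEC’s actual research workloads.

```python
# Minimal single-node data-parallel training sketch for an 8-GPU server.
# The model and data are placeholders; real workloads would be far larger.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each spawned process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; each process owns one GPU.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(10):
        inputs = torch.randn(64, 1024, device=local_rank)
        loss = model(inputs).square().mean()
        optimizer.zero_grad()
        loss.backward()      # gradients are all-reduced across the 8 GPUs via NCCL
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

# Launch on one node, one process per A100 (script name is hypothetical):
#   torchrun --standalone --nproc_per_node=8 train_sketch.py
```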