Supermicro Introduces NVIDIA GPU Server Test Drive Program
Supermicro Introduces NVIDIA GPU Server Test Drive Program with Leading Channel Partners to Deliver Workload Qualification on Remote Supermicro Servers
Super Micro Computer, Inc., a global leader in enterprise computing, storage and networking solutions and green computing technology, revealed details of a new GPU test drive program. Called STEP (Supermicro Test drive Engagement with Partners), the program allows customers to remotely experience Supermicro's 2U NVIDIA HGX A100 4-GPU or 4U NVIDIA HGX A100 8-GPU systems with 3rd Generation NVLink.
“Supermicro’s collaboration with NVIDIA on the GPU test drive program offers, through channel partners, a unique opportunity to test workloads on remote Supermicro servers using NVIDIA’s HGX A100 platforms,” said Don Clegg, SVP of Worldwide Sales, Supermicro. “This program will showcase the ability of these servers to support unique applications and accelerate time to market for customer solutions.”
Customers can access the program through the Supermicro STEP home page with links to participating partners where customers can start the registration process. Customers can then connect directly to remote NVIDIA HGX A100 Supermicro platforms to test and qualify their advanced workloads.
“The NVIDIA HGX AI supercomputing platform is purpose-built for the highest performance in simulation, data analysis, and AI applications,” explained Paresh Kharya, Senior Director of Product Management and Marketing at NVIDIA. “Supermicro’s decision to build its STEP program on the foundation of NVIDIA’s HGX technology will give customers access to the leading platform that can tackle the most complex problems and transform the global research community.”
Supermicro’s high-density 2U and 4U servers include NVIDIA HGX A100 4-GPU and 8-GPU motherboards. Supermicro’s Advanced I/O Module (AIOM) form factor enhances networking communication with high flexibility. The AIOM can be combined with the latest high-speed, low-latency PCI-E 4.0 networking and storage devices supporting NVIDIA GPUDirect® RDMA and GPUDirect Storage with NVMe over Fabrics (NVMe-oF) on NVIDIA Mellanox® InfiniBand, powering scalable multi-GPU systems with a continuous stream of data free of bottlenecks.