DDN Takes the Next Step to Simplify AI Supercomputing Adoption
DDN, the global leader in artificial intelligence (AI) and multi-cloud data management solutions, announced compatibility with the next generation of NVIDIA DGX systems, each including eight NVIDIA H100 Tensor Core GPUs. NVIDIA DGX H100 supercomputers are the fourth generation of NVIDIA’s purpose-built AI systems, designed to handle the most taxing training workloads such as natural language processing and deep learning recommender models. These workloads require large data models and high-speed throughput to deliver breakthrough results, and pairing NVIDIA DGX systems with DDN’s A3I solutions is a proven combination for AI centers of excellence worldwide.
DDN AI400X2 storage appliance compatibility with DGX H100 systems builds on DDN’s field-proven deployments of DGX A100-based DGX BasePOD reference architectures (RAs) and DGX SuperPOD systems, which customers have leveraged for a wide range of use cases. Offered as part of DDN’s A3I infrastructure solution for AI deployments, the AI400X2 lets customers scale to support larger workloads across multiple DGX systems. DDN also supports the latest NVIDIA Quantum-2 and Spectrum-4 400Gb/s networking technologies. Validated with NVIDIA QM9700 Quantum-2 InfiniBand and NVIDIA SN4700 Spectrum-4 400GbE switches, the systems are recommended by NVIDIA in the newest DGX BasePOD RA and DGX SuperPOD. With double the I/O capabilities of the prior generation, DGX H100 systems further necessitate high-performance storage solutions like DDN’s AI400X2.
“The demand for scalable AI infrastructure continues to grow, as enterprises realize the power that AI delivers to transform their business,” said Dr. James Coomer, senior vice president for products at DDN. “We see more and more organizations that are moving from assessing AI to applying AI to deliver business results. These organizations are looking for proven infrastructure that integrates into their data center in a simple and efficient manner, which is exactly what NVIDIA DGX systems with DDN storage delivers.”
In addition to these on-premises deployment options, DDN is also announcing a partnership with Lambda to deliver a scalable data solution based on NVIDIA DGX SuperPOD with over 31 DGX H100 systems. Lambda intends to use the systems to let customers reserve between two and 31 DGX instances backed by DDN’s parallel storage and the full 3,200 Gb/s GPU fabric. This hosted offering provides rapid access to GPU-based computing with a simple, competitive pricing structure and without the commitment of a large data center deployment. Lambda chose DDN as the backend storage for this project because of DDN’s established track record of successful DGX SuperPOD deployments and its expertise in storage at scale. Lambda will also sell DGX BasePOD and DGX SuperPOD with DDN A3I storage for customers looking to establish on-site deployments.
“As organizations continue to modernize around AI, they’re experiencing explosive demand around performance and data needs,” said David Hall, head of high-performance computing at Lambda. “To address that need, Lambda, as a market leader in the deep learning infrastructure space, is bringing NVIDIA DGX systems with DDN A3I storage into our reserved cloud offering. This provides our customers with a full-service experience coupled with industry-leading performance in a matter of weeks rather than months.”