Inspur Information Announces Full Support for the NVIDIA AI Platform for Inference Processing
NVIDIA A100, A30, and A2 Tensor Core GPUs deliver exceptional AI server performance.
Inspur Information, a leading IT infrastructure solutions provider, announced at NVIDIA GTC that its entire portfolio of AI and edge inference servers will support the NVIDIA A100, A30, and newly announced A2 Tensor Core GPUs.
As the demand for AI inference continues to grow and diversify, Inspur Information has launched a comprehensive inference product line of NVIDIA-Certified Systems built for applications from data centers to edge computing, providing high performance for users across various application scenarios. Inspur’s NVIDIA-Certified Systems are ideal for running the NVIDIA AI Enterprise software suite, which deploys and manages AI workloads on VMware vSphere.
For data centers, the NF5468M6 is an intelligent, elastic-architecture AI server in a 4U form factor with 8x NVIDIA A100 or A30 GPUs and 2x 3rd Gen Intel Xeon Scalable processors. Its unique ability to automatically switch among three topologies (balance, common and cascade) gives it the flexibility to meet the needs of a wide range of AI workloads, including deep learning training, language processing, AI inference, massive video streaming and more.
The NF5468A5 is an integrated, efficient AI server in a 4U form factor with 8x NVIDIA A100 or A30 GPUs and 2x AMD Rome/Milan CPUs. Its high-performance architecture features a non-blocking CPU-to-GPU design that delivers superior communication efficiency and much lower P2P communication latency. It is also optimized for conversational AI, intelligent search and high-frequency trading scenarios.
The NF5280M6 is a reliable and flexible AI server in a 2U form factor with 4x NVIDIA A100 or A30 GPUs, or 8x NVIDIA A2 GPUs, and 2x 3rd Gen Intel Xeon Scalable processors. It operates stably across a variety of AI application scenarios, from small- and medium-scale AI training to high-density edge inference.
In edge computing, the NE5260M5 is an edge server built on open computing standards, supporting NVIDIA A100, A30 and now A2 Tensor Core GPUs along with two Intel CPUs. With a 430mm chassis, it can adapt to unusual spaces and harsh working environments, including high temperatures and humidity. In the recent MLPerf Inference v1.1 results, the NE5260M5 ranked first in four tasks in the Edge category of the Closed Division. It has been deployed in a variety of edge AI inference scenarios, such as smart campuses, smart shopping malls, smart communities and smart substations, providing diverse computing power for different edge AI applications.