Cirrascale Cloud Services Broadens Deep Learning Cloud Offerings With World’s Most Powerful GPU For AI Supercomputing
Company adds the NVIDIA A100 80GB and A30 GPUs to its burgeoning deep learning cloud for development, training, and inference workloads.
“Model sizes and datasets in general are growing fast and our customers are searching for the best solutions to increase overall performance and memory bandwidth to tackle their workloads in record time,” said Mike LaPan, vice president, Cirrascale Cloud Services. “The NVIDIA A100 80GB Tensor Core GPU delivers this and more. Along with the new A30 Tensor Core GPU with 24GB HBM2 memory, these GPUs enable today’s elastic data center and deliver maximum value for enterprises.”
The NVIDIA A100 80GB Tensor Core GPU introduces groundbreaking features to optimize inference workloads. It accelerates a full range of precisions, from FP32 down to INT4. Multi-Instance GPU (MIG) technology enables up to seven instances, each with up to 10GB of memory, to operate simultaneously on a single A100 for optimal utilization of compute resources. Structural sparsity support delivers up to 2x more performance on top of the A100's other inference performance gains. The A100 provides up to 20x higher performance than the previous-generation NVIDIA Volta® architecture, and on modern conversational AI models such as BERT Large it accelerates inference throughput by up to 100x over CPUs.
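As a back-of-the-envelope illustration of why those 10GB MIG slices matter for inference, the sketch below estimates the weight-only memory footprint of a model at the precisions mentioned above. This is not from the release: the ~340M parameter count for BERT Large and the weights-only accounting (ignoring activations and runtime overhead) are assumptions for illustration.

```python
# Rough weight-only memory estimate per precision (illustrative assumptions;
# real serving also needs memory for activations, buffers, and the runtime).
BYTES_PER_PARAM = {"FP32": 4, "FP16": 2, "INT8": 1, "INT4": 0.5}

def weight_footprint_gb(num_params: float, precision: str) -> float:
    """Approximate memory (GB) needed to hold a model's weights."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

# BERT Large has roughly 340 million parameters (assumed figure).
bert_large = 340e6
for prec in ("FP32", "FP16", "INT8", "INT4"):
    gb = weight_footprint_gb(bert_large, prec)
    fits = gb <= 10  # one 10GB MIG slice of an 80GB A100
    print(f"{prec}: {gb:.2f} GB -- fits in a 10GB MIG slice: {fits}")
```

Even at FP32, a BERT Large-class model's weights occupy well under 10GB, which is why a single A100 split into seven MIG instances can serve several such models concurrently.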
Also available through Cirrascale Cloud Services is the NVIDIA A30 Tensor Core GPU, which delivers versatile performance supporting a broad range of AI inference and mainstream enterprise compute workloads, such as recommender systems, conversational AI, and computer vision. The A30 also supports MIG technology, delivering superior price/performance with up to four instances, each with 6GB of memory, well suited to entry-level applications. Cirrascale's accelerated cloud server solutions with NVIDIA A30 GPUs provide the needed compute power — along with large HBM2 memory, 933GB/sec of memory bandwidth, and scalability with NVIDIA NVLink® interconnect technology — to tackle massive datasets and turn them into valuable insights.
“Customers deploying the world’s most powerful GPUs within Cirrascale Cloud Services can accelerate their compute-intensive machine learning and AI workflows better than ever,” said Paresh Kharya, senior director of Product Management, Data Center Computing at NVIDIA.