
CoreWeave First to Market with NVIDIA H200 Tensor Core GPUs, Ushering in a New Era of AI Infrastructure Performance

CoreWeave’s innovative Mission Control platform delivers performant AI infrastructure with high system reliability and resilience, enabling customers to use NVIDIA H200 GPUs at scale to accelerate the development of their generative AI applications


CoreWeave, the AI Hyperscaler, announced that it is the first cloud provider to bring NVIDIA H200 Tensor Core GPUs to market. CoreWeave has a proven track record of being first to market with large-scale AI infrastructure, and was among the first to deliver a large-scale NVIDIA H100 Tensor Core GPU cluster interconnected with NVIDIA Quantum-2 InfiniBand networking, which broke MLPerf training records in June 2023. Today, CoreWeave’s infrastructure services are used to train some of the largest and most ambitious models from customers including Cohere, Mistral, and NovelAI.


The NVIDIA H200 Tensor Core GPU is designed to push the boundaries of generative AI, providing 4.8 TB/s of memory bandwidth and 141 GB of GPU memory to deliver up to 1.9X higher inference performance than H100 GPUs. CoreWeave’s H200 instances combine NVIDIA H200 GPUs with Intel’s fifth-generation Xeon CPUs (Emerald Rapids) and 3,200 Gbps of NVIDIA Quantum-2 InfiniBand networking, and are deployed in clusters of up to 42,000 GPUs with accelerated storage solutions, delivering powerful performance and enabling customers to dramatically lower the time and cost of training their GenAI models.
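As a rough back-of-envelope illustration of what those headline figures imply (not a benchmark from CoreWeave or NVIDIA): decode-phase LLM inference is typically memory-bandwidth bound, and a 70-billion-parameter model in FP16 occupies roughly 140 GB, so it would just fit in a single H200’s 141 GB of memory. The sketch below, which assumes the 70B model size and FP16 precision purely for illustration, estimates the bandwidth-bound ceiling on single-stream token generation; real throughput also depends on batching, KV-cache traffic, quantization, and kernel efficiency.

```python
# Back-of-envelope estimate: during decode, each generated token must
# stream the full set of model weights from GPU memory, so memory
# bandwidth sets an upper bound on single-stream token rate.
# H200 figures (141 GB, 4.8 TB/s) are from the article; the 70B FP16
# model is an illustrative assumption, not something the article states.

PARAMS = 70e9               # assumed model size (parameters)
BYTES_PER_PARAM = 2         # FP16 weights
H200_MEMORY_GB = 141        # H200 GPU memory capacity (article figure)
H200_BANDWIDTH_TBS = 4.8    # H200 memory bandwidth (article figure)

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9        # ~140 GB of weights
fits_on_one_gpu = weights_gb <= H200_MEMORY_GB     # True: 140 GB <= 141 GB

# Ceiling on decode speed if every token reads all weights once
seconds_per_token = weights_gb / (H200_BANDWIDTH_TBS * 1000)   # ~0.029 s
tokens_per_second = 1 / seconds_per_token                      # ~34 tokens/s

print(f"Weights: {weights_gb:.0f} GB, fits on one H200: {fits_on_one_gpu}")
print(f"Bandwidth-bound ceiling: ~{tokens_per_second:.0f} tokens/s per GPU")
```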

“CoreWeave is dedicated to pushing the boundaries of AI development and, through our long-standing collaboration with NVIDIA, is now first to market with high-performance, scalable, and resilient infrastructure with NVIDIA H200 GPUs,” said Michael Intrator, CEO and co-founder of CoreWeave. “The combination of H200 GPUs with our technology empowers customers to tackle the most complex AI models with unprecedented efficiency, and to achieve new levels of performance.”


CoreWeave’s Mission Control platform offers customers unmatched reliability and resiliency by managing the complexities of AI infrastructure deployment and uptime with software automation. The platform helps customers train models faster and more efficiently by using advanced system validation processes, proactive fleet health-checking, and extensive monitoring capabilities. CoreWeave’s rich suite of observability tools and services provides transparency across all the critical components of the system, empowering teams to maintain uninterrupted AI development pipelines. This translates to reduced system downtime, faster time to solution, and lower total cost of ownership.
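To make the fleet health-checking idea concrete, here is a minimal, hypothetical sketch (not CoreWeave’s Mission Control code) of a per-node GPU probe: it polls nvidia-smi for basic telemetry and flags GPUs that exceed an assumed temperature threshold, the kind of signal a fleet-level system could aggregate and act on automatically.

```python
# Hypothetical per-node GPU health probe. Thresholds and field choices are
# illustrative assumptions; a production fleet manager would track far more
# signals (ECC errors, NVLink/InfiniBand health, thermals over time, etc.).
import subprocess

TEMP_LIMIT_C = 85  # assumed alert threshold, not a vendor-specified limit
QUERY_FIELDS = "index,name,temperature.gpu,utilization.gpu,memory.used,memory.total"

def check_local_gpus():
    # Query per-GPU telemetry from the local node via nvidia-smi
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY_FIELDS}",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    unhealthy = []
    for line in out.strip().splitlines():
        idx, name, temp, util, mem_used, mem_total = [f.strip() for f in line.split(",")]
        if int(temp) > TEMP_LIMIT_C:
            unhealthy.append(f"GPU {idx} ({name}): {temp} C exceeds {TEMP_LIMIT_C} C")
    return unhealthy

if __name__ == "__main__":
    problems = check_local_gpus()
    print("\n".join(problems) if problems else "All local GPUs within thresholds")
```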


“CoreWeave has a proven track record of deploying NVIDIA technology rapidly and efficiently, ensuring that customers have the latest cutting-edge technology to train and run large language models for generative AI,” said Ian Buck, vice president of Hyperscale and HPC at NVIDIA. “With NVLink and NVSwitch, as well as its increased memory capabilities, the H200 is designed to accelerate the most demanding AI tasks. When paired with the CoreWeave platform powered by Mission Control, the H200 provides customers with advanced AI infrastructure that will be the backbone of innovation across the industry.”

In addition to bringing the latest NVIDIA GPUs to market and advancing its portfolio of cloud services, CoreWeave is rapidly scaling its data center operations to keep up with demand for its industry-leading infrastructure services. CoreWeave has completed nine new data center builds since the beginning of 2024, with 11 more in progress. The company expects to end the year with 28 data centers globally, with an additional 10 new data centers planned in 2025.

