CoreWeave Announces NovelAI Among the First to Have NVIDIA HGX H100 GPUs Online
CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, announced general availability of NVIDIA HGX H100 GPU instances. This is CoreWeave’s second NVIDIA H100 offering, following the company’s launch of H100 PCIe GPU instances in January.
Anlatan, developers of NovelAI, will be among the first to deploy the latest NVIDIA H100 Tensor Core GPUs on CoreWeave, which began offering the new instances to select customers in February.
The news comes amid the global embrace of generative AI, a technology built on large language models (LLMs) that enables creative work: writing a scholarly paper, a stand-up comedy routine or a sonnet; designing artwork from a block of text; and, in the case of NovelAI, composing literature.
“With generative AI becoming such a cultural phenomenon, it means a lot to be the first provider to make NVIDIA HGX H100 platforms generally available. This is a testament to our agility and efficiency in deploying infrastructure,” said Michael Intrator, CoreWeave co-founder and CEO. “We’ve been working with Anlatan, the creators of NovelAI, for more than a year, and we’re honored that they will be one of the first to deploy these cutting-edge GPUs.”
Launched in June 2021, NovelAI is a monthly subscription service for AI-assisted authorship, storytelling, text-adventure games, and virtual companionship. It also serves as a GPT-powered sandbox for creators and developers. The company’s AI algorithms, trained on actual literature, generate text based on users’ respective writing styles. They adapt to inputs in order to maintain the author’s perspective and style, making quality literature possible from anyone, regardless of ability. NovelAI blends the power of AI storytelling with the privacy of full encryption to offer limitless freedom of expression.
NVIDIA HGX H100 AI supercomputing platforms will be a key component in Anlatan’s product development and deployment process. CoreWeave’s cluster will enable the developers to be more flexible with model design, more quickly iterate on training, and serve their models through NovelAI to millions of users every month.
“We are entirely focused on AI innovation and AI-first products. NVIDIA H100 GPUs are the most top-notch, state-of-the-art machine learning accelerators,” said Anlatan CEO Eren Doğan. “This gives us a significant competitive advantage within the machine learning industry – for a wide variety of applications ranging from model training to model inference. We have worked with CoreWeave previously and were extremely happy with the support we received. CoreWeave’s Kubernetes-first cloud native ecosystem frees us from infrastructure worries and saves us time.”
CoreWeave has taken a unique approach to building its NVIDIA HGX H100 clusters in order to optimize performance for model training. Built on CoreWeave’s Kubernetes-native infrastructure, the clusters use a rail-optimized design on the NVIDIA Quantum-2 InfiniBand networking platform, providing 3.2 Tbps of bandwidth per node. Additionally, CoreWeave’s NVIDIA HGX H100 infrastructure can scale up to 16,384 H100 SXM5 GPUs on a single non-blocking InfiniBand fat-tree fabric, providing access to a massively scalable cluster of the world’s most performant and deeply supported model training accelerators. What’s more, CoreWeave Cloud integrates NVIDIA BlueField data processing units (DPUs) to enable secure and elastic provisioning of H100 instances.
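As a rough sketch of how those figures fit together (an illustration, not arithmetic published by CoreWeave): the standard HGX H100 baseboard carries eight H100 SXM5 GPUs, and a rail-optimized Quantum-2 design pairs each GPU with a 400 Gb/s NDR InfiniBand rail, which is where the 3.2 Tbps per-node figure comes from.

```python
# Back-of-the-envelope arithmetic behind the cluster figures quoted above.
# Assumptions (not stated in the announcement): eight H100 SXM5 GPUs per
# HGX H100 node, and one 400 Gb/s NDR InfiniBand rail per GPU.

GPUS_PER_NODE = 8          # HGX H100 baseboard: 8 x H100 SXM5
NDR_GBPS_PER_RAIL = 400    # NVIDIA Quantum-2 (NDR) InfiniBand port speed
MAX_CLUSTER_GPUS = 16_384  # maximum scale cited in the announcement

node_bandwidth_tbps = GPUS_PER_NODE * NDR_GBPS_PER_RAIL / 1_000
max_nodes = MAX_CLUSTER_GPUS // GPUS_PER_NODE

print(f"InfiniBand bandwidth per node: {node_bandwidth_tbps:.1f} Tbps")  # 3.2 Tbps
print(f"Nodes at maximum cluster scale: {max_nodes}")                    # 2048
```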
“Focusing on AI innovation and AI-first products to transform the future is key to creating a significant competitive advantage within the machine learning industry,” said Ian Buck, vice president of hyperscale and high performance computing at NVIDIA. “NVIDIA’s collaboration with innovators such as CoreWeave enables developers with cutting-edge NVIDIA HGX H100 GPUs to supercharge large language models (LLMs) and other AI workloads.”
Today’s announcement comes only months after CoreWeave’s November news that it would be among the first to offer cloud instances with NVIDIA HGX H100 supercomputing. CoreWeave’s collaboration with NVIDIA is critical to supporting the continued growth of large language models and other world-changing technologies.