
NVIDIA Hopper in Full Production

World’s Leading Computer Makers Dell Technologies, HPE, Lenovo, Supermicro, Plus Cloud Service Providers AWS, Google Cloud, Microsoft Azure, Oracle Cloud Infrastructure Building H100-Based Offerings; Availability Begins Next Month

NVIDIA announced that the NVIDIA H100 Tensor Core GPU is in full production, with global tech partners planning to roll out the first wave of products and services based on the groundbreaking NVIDIA Hopper architecture in October.


Unveiled in April, H100 is built with 80 billion transistors and benefits from a range of technology breakthroughs. Among them are the powerful new Transformer Engine and an NVIDIA NVLink interconnect to accelerate the largest AI models, like advanced recommender systems and large language models, and to drive innovations in such fields as conversational AI and drug discovery.

“Hopper is the new engine of AI factories, processing and refining mountains of data to train models with trillions of parameters that are used to drive advances in language-based AI, robotics, healthcare and life sciences,” said Jensen Huang, founder and CEO of NVIDIA. “Hopper’s Transformer Engine boosts performance up to an order of magnitude, putting large-scale AI and HPC within reach of companies and researchers.”

In addition to Hopper’s architecture and Transformer Engine, several other key innovations power the H100 GPU to deliver the next massive leap in NVIDIA’s accelerated compute data center platform, including second-generation Multi-Instance GPU, confidential computing, fourth-generation NVIDIA NVLink and DPX Instructions.
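
For readers wondering what the Transformer Engine's FP8 path looks like from application code, the following is a minimal sketch, assuming NVIDIA's open-source Transformer Engine package for PyTorch (transformer_engine) and a Hopper-class GPU; the module and recipe names follow that library's quickstart and are not drawn from this announcement.

```python
# Minimal FP8 sketch (assumes the open-source transformer_engine package,
# PyTorch, and an H100 / Hopper-class GPU; not part of this announcement).
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Drop-in Linear layer whose matmuls can run through the FP8 Transformer Engine.
layer = te.Linear(768, 768, bias=True).cuda()
inp = torch.randn(512, 768, device="cuda")

# DelayedScaling tracks per-tensor scaling factors so FP8's narrow dynamic
# range can be used without overflow.
fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(inp)

out.sum().backward()  # gradients flow as with an ordinary PyTorch module
```

In this sketch, the HYBRID format keeps the higher-precision E4M3 encoding for the forward pass and the wider-range E5M2 encoding for gradients, which is the split the library recommends by default.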

A five-year license for the NVIDIA AI Enterprise software suite is now included with H100 for mainstream servers. This optimizes the development and deployment of AI workflows and ensures organizations have access to the AI frameworks and tools needed to build AI chatbots, recommendation engines, vision AI and more.


Global Rollout of Hopper
H100 enables companies to slash costs for deploying AI, delivering the same AI performance with 3.5x more energy efficiency and 3x lower total cost of ownership, while using 5x fewer server nodes than the previous generation.


For customers who want to immediately try the new technology, NVIDIA announced that H100 on Dell PowerEdge servers is now available on NVIDIA LaunchPad, which provides free hands-on labs, giving companies access to the latest hardware and NVIDIA AI software.

Customers can also begin ordering NVIDIA DGX™ H100 systems, which include eight H100 GPUs and deliver 32 petaflops of performance at FP8 precision. NVIDIA Base Command™ and NVIDIA AI Enterprise software power every DGX system, enabling deployments from a single node to an NVIDIA DGX SuperPOD™ supporting advanced AI development of large language models and other massive workloads.
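
As a quick sanity check on the system-level figure above, the short calculation below divides the quoted 32 petaflops across the eight GPUs in a DGX H100; the per-GPU number it implies is an inference from this announcement, not a separately stated specification.

```python
# Back-of-envelope check of the DGX H100 figure quoted above (32 petaflops
# at FP8 across eight H100 GPUs). The per-GPU value is implied, not a
# separately published number in this announcement.
dgx_fp8_petaflops = 32
gpus_per_system = 8

per_gpu_fp8_petaflops = dgx_fp8_petaflops / gpus_per_system
print(f"~{per_gpu_fp8_petaflops:.0f} petaflops of FP8 per H100 GPU")  # ~4 PFLOPS
```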

H100-powered systems from the world’s leading computer makers are expected to ship in the coming weeks, with over 50 server models in the market by the end of the year and dozens more in the first half of 2023. Partners building systems include Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo and Supermicro.

Additionally, some of the world’s leading higher education and research institutions will be using H100 to power their next-generation supercomputers. Among them are the Barcelona Supercomputing Center, Los Alamos National Lab, Swiss National Supercomputing Centre (CSCS), Texas Advanced Computing Center and the University of Tsukuba.

H100 Coming to the Cloud
Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first to deploy H100-based instances in the cloud starting next year.

“We look forward to enabling the next generation of AI models on the latest H100 GPUs in Microsoft Azure,” said Nidhi Chappell, general manager of Azure AI Infrastructure. “With the advancements in Hopper architecture coupled with our investments in Azure AI supercomputing, we’ll be able to help accelerate the development of AI worldwide.”

“By offering our customers the latest H100 GPUs from NVIDIA, we’re helping them accelerate their most complex machine learning and HPC workloads,” said Karan Batta, vice president of product management at Oracle Cloud Infrastructure. “Additionally, using NVIDIA’s next generation of H100 GPUs allows us to support our demanding internal workloads and helps our mutual customers with breakthroughs across healthcare, autonomous vehicles, robotics and IoT.”


