MLPerf Results Highlight More Capable ML Training
MLCommons suite demonstrates up to 1.8X better performance and adds a new customer-driven object detection benchmark
MLCommons, an open engineering consortium, released new results from MLPerf Training v2.0, which measures the performance of training machine learning models. Training models faster empowers researchers to unlock new capabilities such as diagnosing tumors, recognizing speech automatically, or improving movie recommendations. The latest MLPerf Training results demonstrate broad industry participation and up to 1.8X greater performance, ultimately paving the way for more capable intelligent systems to benefit society at large.
The MLPerf Training benchmark suite comprises full-system tests that stress machine learning models, software, and hardware for a broad range of applications. The open-source, peer-reviewed benchmark suite provides a level playing field for competition that drives innovation, performance, and energy efficiency for the entire industry.
In this round, MLPerf Training added a new object detection benchmark that trains a RetinaNet reference model on the larger, more diverse Open Images dataset. This new test more accurately reflects state-of-the-art ML training for applications like collision avoidance for vehicles and robotics, retail analytics, and many others.
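For readers curious what training such a model involves, the sketch below instantiates a RetinaNet detector with torchvision and runs a single training step on dummy data. This is not the official MLPerf reference implementation (that lives in the MLCommons training repository), and the torchvision model, image sizes, and labels here are illustrative assumptions only.

```python
# Minimal sketch: one training step for a RetinaNet object detector.
# NOT the MLPerf reference code; torchvision's model is used as a stand-in.
import torch
import torchvision

# Stand-in for the benchmark's RetinaNet reference model (assumption).
model = torchvision.models.detection.retinanet_resnet50_fpn(weights=None)
model.train()

# Dummy batch: two RGB images, each with a single bounding box and label.
images = [torch.rand(3, 800, 800) for _ in range(2)]
targets = [
    {"boxes": torch.tensor([[100.0, 100.0, 300.0, 300.0]]),
     "labels": torch.tensor([1])}
    for _ in range(2)
]

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# In training mode the model returns classification and box-regression losses.
loss_dict = model(images, targets)
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
print({k: float(v) for k, v in loss_dict.items()})
```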
“I’m excited to release our new object detection benchmark, which was built based on extensive feedback from a customer advisory board and is an excellent tool for purchasing decisions, designing new accelerators and improving software,” said David Kanter, executive director of MLCommons.
The MLPerf Training v2.0 results include over 250 performance results from 21 different submitters, including Azure, Baidu, Dell, Fujitsu, GIGABYTE, Google, Graphcore, HPE, Inspur, Intel-HabanaLabs, Lenovo, Nettrix, NVIDIA, Samsung, and Supermicro. In particular, MLCommons would like to congratulate first-time MLPerf Training submitters ASUSTeK, CASIA, H3C, HazyResearch, Krai, and MosaicML.
“We are thrilled with the greater participation and the breadth, diversity, and performance of the MLPerf Training results,” said Eric Han, Co-Chair of the MLPerf Training Working Group. “We are especially excited about many of the novel software techniques highlighted in the latest round.”