MLPerf Results Show Advances in Machine Learning Inference
MLCommons establishes a new record with nearly 5,300 performance results and 2,400 power measurement results, 1.37X and 1.09X more than the previous round.
The open engineering consortium MLCommons announced fresh results from MLPerf Inference v2.1, which measures the performance of inference – the application of a trained machine learning model to new data. Inference enables the intelligent enhancement of a vast array of applications and systems. This round set new records with nearly 5,300 performance results and 2,400 power measurements, 1.37X and 1.09X more than the previous round, respectively, reflecting the community’s vigor.
MLPerf benchmarks are comprehensive system tests that stress machine learning models, software, and hardware, and optionally monitor energy consumption. The open-source, peer-reviewed benchmark suites level the playing field for competitors, which fosters innovation, performance, and energy efficiency across the whole sector.
“We are very excited with the growth in the ML community and welcome new submitters across the globe such as Biren, Moffett AI, Neural Magic, and SAPEON,” said MLCommons Executive Director David Kanter. “The exciting new architectures all demonstrate the creativity and innovation in the industry designed to create greater AI functionality that will bring new and exciting capability to business and consumers alike.”
The MLPerf Inference benchmarks are focused on datacenter and edge systems, and Alibaba, ASUSTeK, Azure, Biren, Dell, Fujitsu, GIGABYTE, H3C, HPE, Inspur, Intel, Krai, Lenovo, Moffett, Nettrix, Neural Magic, NVIDIA, OctoML, Qualcomm Technologies, Inc., SAPEON, and Supermicro are among the contributors to the submission round.