How MLPerf Will Propel A New Benchmarking Era For Machine Learning

Machine learning has found widespread applications in almost every tech domain today and is shaping how future systems are built. But with ML tools and techniques multiplying by the day, it has become difficult to assess how well they actually solve problems and whether they are practical to deploy; results do not always live up to business expectations. Thanks to ongoing research, evaluating the performance of machine learning systems in context is now within reach.

The performance of ML applications is rarely examined in depth because of the technical complexity involved at every stage of the process. To address this, a new benchmarking software suite called MLPerf has been developed with the aim of “measuring the speed of ML software and hardware”. It was created by academics from Harvard University and Stanford University in collaboration with major tech companies such as Google, Intel and AMD, among others.

Measuring ML Performance

Because ML and AI applications are developing so fast, they demand better hardware and software, and with that comes the need for a standard ML benchmarking platform. This is where MLPerf comes in. The inspiration for the suite came from two established benchmarking bodies, the Standard Performance Evaluation Corporation (SPEC) and the Transaction Processing Performance Council (TPC), which evaluate computing systems and database systems respectively.

Until now, tech companies have measured their machine learning applications however they preferred, often with the help of third parties. MLPerf avoids this by having every participating company agree on a common method for conducting ML performance analysis, covering everything from mobile-device applications to cloud services.

Furthermore, MLPerf borrows conventions from SPEC and TPC as well as from other benchmarking efforts such as Baidu's DeepBench and Stanford University's DAWNBench.

The MLPerf platform has a set of goals:

  • Accelerate progress in ML via fair and useful measurement
  • Serve both the commercial and research communities
  • Enable fair comparison of competing systems, yet encourage innovation to improve the state of the art in ML
  • Enforce replicability to ensure reliable results
  • Keep benchmarking effort affordable so all can participate

To get more people working with MLPerf, the researchers have open-sourced it on GitHub so that the developer community can modify and improve it. The suite is still at an early stage; the current release is dubbed an 'alpha' release, and the researchers aim to provide a stable software environment by the end of this year.

Areas Covered For Benchmarking

The software suite covers seven areas of ML in its benchmarking tests:

  • Image classification
  • Speech recognition
  • Object detection
  • Translation
  • Recommendation
  • Sentiment analysis
  • Reinforcement learning

For each of these areas, reference implementations are again available on GitHub. A reference implementation means the benchmark code has been implemented with at least one framework and one ML model. On top of this, a Dockerfile is provided so the benchmark can be run inside a container. The reference implementations also include scripts (usually written in Python) for downloading the datasets and for training models on them, along with documentation describing how to run the scripts and the models.
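
To make this workflow concrete, here is a minimal Python sketch of the general pattern a reference implementation follows: fetch the dataset, train until a quality target is reached, and report the elapsed wall-clock time. The names and numbers used here (download_dataset, train_one_epoch, the 0.75 quality target) are placeholders for illustration only, not the actual scripts shipped in the MLPerf repository.

    import random
    import time

    # Illustrative stand-ins: the real reference implementations ship their own
    # dataset-download and training scripts (usually written in Python).
    def download_dataset(path):
        """Stand-in for the dataset download step."""
        return list(range(1000))

    def train_one_epoch(quality, data):
        """Stand-in for one training epoch; nudges model quality upward."""
        return min(1.0, quality + random.uniform(0.05, 0.15))

    def run_benchmark(target_quality=0.75):
        """Train until the quality target is met and report wall-clock time."""
        data = download_dataset("data/")
        start = time.time()
        quality, epoch = 0.0, 0
        while quality < target_quality:
            quality = train_one_epoch(quality, data)
            epoch += 1
            print(f"epoch {epoch}: quality={quality:.3f}")
        print(f"reached {target_quality:.2f} after {epoch} epochs "
              f"in {time.time() - start:.2f} s")

    if __name__ == "__main__":
        run_benchmark()

In the suite itself, the equivalent steps are wrapped in the Dockerfile and scripts mentioned above, so the whole run happens inside a container and can be reproduced on other machines.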

The setup used for initial testing consisted of 16 CPUs, an Nvidia P100 GPU, Ubuntu, 600 GB of disk space and CPython (version 2 or later) for the reference implementations. Benchmarking on this reference hardware was found to be slightly sluggish, but the developers suggest performance will improve over time with faster, optimised hardware.

Why Is Benchmarking Important For Machine Learning?

Tech companies invest heavily in hardware as well as in research projects, so it is essential that those investments pay off. Benchmarking is how the viability of hardware is established. With companies now competing intensely on ML, it is time they adopted a standard benchmarking method for assessing ML performance as well. This way, ML systems can be improved over time and prove beneficial in the long run.

Conclusion

MLPerf is just the beginning of a new benchmarking era in ML and AI. It aims to spark more ML research and insights along the way, which should in turn lead to a larger, stronger ML community exploring and experimenting across diverse areas of ML. At the same time, MLPerf should make sure the software suite remains affordable and user-friendly.
