
Hitting the Accelerator — A Data Science Leader’s Perspective on Getting More Value from AI Workloads


Research in Deep Learning started as early as the 1960s, though the term itself was coined in 1986. With accurate predictions becoming the need of the hour, and with the compute power and massive volumes of data now available, Deep Learning has become the preferred approach over roughly the last five years. As problems grew more complex, Deep Learning became the answer for those involving heavy datasets.

A few million rows of supervised data can be crunched effectively by ensemble tree-based algorithms themselves. However, for problems like computer vision or speech-to-text, deep learning is the answer. Meanwhile, less complex data-driven problems with a few hundred features and a few million rows can be solved very effectively by other non-linear algorithms.
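
To make the first point concrete, below is a minimal sketch of an ensemble tree model crunching a tabular, supervised problem on ordinary CPU hardware. The synthetic dataset, its size and the model settings are illustrative assumptions (scikit-learn 1.0+ is assumed), not details from this article.

```python
# A minimal sketch: a gradient-boosted tree ensemble on a tabular, supervised
# problem of the scale described above. Dataset size, feature count and model
# settings are illustrative assumptions, not figures from the article.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

# A synthetic "few hundred features, many rows" classification problem.
X, y = make_classification(n_samples=200_000, n_features=200, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Histogram-based gradient boosting trains comfortably on a CPU at this scale.
model = HistGradientBoostingClassifier(max_iter=100)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```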

Growing compute power has emerged as the primary enabler of Deep Learning

Compute power is one of the primary forces behind the adoption of non-linear models. As industry adopts more state-of-the-art models, academia gets a boost to research algorithms that can challenge current performance. Another big reason for the advancement of the field is the huge volume of open-source data available. Organisations have contributed heavily to academia by opening up datasets, which in turn drives more research and advancement in the field. The compute power available to university labs then accelerates research and development in this field. There is also a significant shift in the way many organisations are moving beyond research and pilot projects when it comes to operationalising AI.

Before you hit the accelerator: understand the classes of problems GPUs and CPUs are suited for

The speed and efficiency of a data scientist's work are heavily affected by the compute power available for building models, as well as by the complexity of the problem. For problems with a few million rows, CPUs do a fairly good job; neural networks with shallow layers can also be trained on a CPU. However, if the problem spans training on large-scale image or speech data, the need for a GPU is felt. In the case of NLP, for searches and search engines crunching many millions of documents, a GPU becomes a must.

For the rest of the cases in text or data mining, a lot can be achieved with CPUs alone. However, where data scientists have to work with extremely large datasets or extremely fast data streams, such as clickstreams or business transactions, GPUs provide more benefit. Currently, a lot of data and text mining problems can be solved well in the realm of CPUs, while GPUs are typically used for image, speech and search-style problems in text.
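
As a rough sketch of this CPU-versus-GPU routing, the snippet below checks whether TensorFlow can see a GPU and places a small Keras model on the available device; the tiny network and the TensorFlow 2.x API usage are assumptions made for illustration, not a recipe from the article.

```python
# A minimal sketch: shallow networks on tabular-sized inputs run fine on CPU,
# and the same pattern moves work to a GPU when one is available.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
device = "/GPU:0" if gpus else "/CPU:0"
print(f"Placing the model on {device}")

with tf.device(device):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(200,)),          # illustrative input width
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
```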

For all the hype surrounding AI, we have to duly understand that many of the algorithms and frameworks grew out of the statistician's field. Even now, there are times when we tend to use simpler algorithms like regression and decision trees. The field exists to solve consumer and business problems.

The data scientist's toolchain needs a new set of tools to support ML workflows

In terms of software, teams heavily use R, Python or PySpark for feature processing and algorithms. On the data collection end, it is mostly Hive or MySQL. Model deployment is done on Spark clusters or by porting the algorithms to languages like Java and JavaScript; most organisations keep deployment outside of a data scientist's role. In terms of hardware, the choice usually depends on the complexity of the problem. If you are just starting a team, begin with well-configured CPU machines in the cloud until the needs and the complexity of the problems can be fully ascertained. Then you can build machines in the data center that fit the needs of the organisation.
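
As one hedged illustration of the feature-processing stage described above, the PySpark sketch below reads rows from a Hive table, derives two aggregate features and writes them back for downstream modelling. The table and column names (analytics.transactions, customer_id, amount) are hypothetical, not taken from the article.

```python
# A minimal PySpark sketch of feature processing against Hive, as described
# above. Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("feature-processing")
         .enableHiveSupport()
         .getOrCreate())

raw = spark.table("analytics.transactions")  # hypothetical Hive table
features = (raw.groupBy("customer_id")
               .agg(F.count("*").alias("txn_count"),
                    F.avg("amount").alias("avg_amount")))

# Persist the derived features for downstream model training.
features.write.mode("overwrite").saveAsTable("analytics.customer_features")
```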

Infrastructure needs to match the organisation and the type of data science problems to be solved. For example, GPUs, as highlighted earlier, are super critical for certain classes of problems, while the rest can easily be handled with CPUs.

Today, infrastructure forms a core component of setting up and running a data science team. Without the right infrastructure, one is underutilising the power of predictions and the human capital available.

Keeping in mind changing AI workload requirements, silicon providers are building end-to-end hardware and software solutions to enable data scientists to gain more value from data. As data science and algorithms continue to shift, AI tech major Intel has invested in hardware that keeps pace with more complex models and also delivers more inference at the edge.

With this in mind, Intel enhanced 2nd Gen Intel® Xeon® Scalable processors to give the best flexibility for both AI and the vast range of data-centric workloads. These processors promise an AI acceleration push, coupled with Intel® DL Boost, tailored for deep learning inferencing.

Intel also engaged with popular deep learning frameworks like TensorFlow* and MXNet* to deliver more optimizations and performance as the software continues to evolve along with the AI landscape. Optimizations across hardware and software have dramatically extended the capabilities of Intel® Xeon® Scalable platforms for deep learning, already resulting in a 241X training improvement and a 277X1 inference improvement across many popular frameworks.
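
As a hedged example of picking up these software-side optimizations on a Xeon CPU, newer TensorFlow builds expose Intel's oneDNN kernels through the TF_ENABLE_ONEDNN_OPTS environment variable (enabled by default in recent releases); the placeholder model below simply exercises CPU inference and is not taken from Intel's benchmarks.

```python
# A sketch of opting into Intel oneDNN-accelerated kernels for CPU inference
# in TensorFlow. The flag must be set before TensorFlow is imported; default
# behaviour varies by TensorFlow version. The model is only a placeholder.
import os
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)  # untrained placeholder
dummy_batch = tf.random.uniform((8, 224, 224, 3))
predictions = model(dummy_batch, training=False)
print(predictions.shape)  # (8, 1000)
```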

The Bottom Line

Without a doubt, Deep Learning is bringing big changes to the industry, especially on the hardware end, where we see immense progress as a broad family of processors enters the market that can unlock more value from our workloads. These breakthroughs are opening up a new class of use cases in deep learning. Some of the major advances enabled by hardware innovation lie in automating data science and in better explaining the results of machine learning. That said, over the long term, these hardware innovations will also enable newer applications across sectors and help mainstream ML technology.

The article has been created in collaboration with Intel.


Product & Performance Information

1 Performance results are based on testing as of 06/15/2015 (v3 baseline), 05/29/2018 (241x) and 6/07/2018 (277x) and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/benchmarks.

Intel does not control or audit the design or implementation of third party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported and confirm whether the referenced benchmark data are accurate and reflect performance of systems available for purchase.

© 2019 Intel Corporation. Intel, the Intel logo, Intel Xeon and Intel Optane are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.


Mathangi Sri

Mathangi Sri currently works as the Chief Data Officer at Yubi. She has an 18+ year track record of building world-class data science solutions and products, and holds 20 patent grants in the area of intuitive customer experience and user profiles. She recently published the book "Practical Natural Language Processing with Python", as well as "Capitalizing Data Science: A Guide to Unlocking the Power of Data for Your Business and Products" with BPB Publications.