
Why Tech Giants Are Pinning Their AI Strategy On Deep Learning Frameworks

There's one factor that has shaped the growth of deep learning research more than most: the proliferation of deep learning frameworks. Popular frameworks such as TensorFlow (Google), PyTorch (one of the newest and fastest-growing), Caffe, MXNet and Keras have helped DL researchers achieve human-level performance on tasks such as facial recognition, image classification, object detection and sentiment detection. While the abundance of frameworks is great news for the developer community, it is also part of a marketing pitch: a way to lock the developer base into adjacent offerings, chiefly the sale of compute capability.

  • Each of these frameworks was designed to solve a specific internal problem
  • After reaching a certain maturity, the frameworks were open-sourced

What started as an attempt to meet internal project requirements has become a full-fledged strategy to improve and capitalise on the overall AI technology stack, which comprises algorithms, infrastructure and hardware. Given that AI is set to become a foundational technology, leading technology majors are laying down better AI infrastructure by providing DL frameworks that offer reliability and ease of deployment.

But while this approach is helping remove the stumbling blocks developers face when it comes to large-scale deployments, it has also become a go-to strategy for companies to monetise the resources (compute capability) required for deep learning. 

The AI industry takes a customised approach: hardware optimised for a specific framework

Frameworks are one part of the puzzle in owning the entire AI technology stack. For example, Google's TensorFlow, the most popular framework, is optimised for the Tensor Processing Unit (TPU), Google's machine learning accelerator, and the TPU is in turn designed for the cloud. This, in a way, helps Google own the burgeoning cloud infrastructure ecosystem through GCP.

Meanwhile, Facebook's PyTorch, pegged as one of the most unified AI frameworks, works with a broad array of hardware solutions from NVIDIA, Intel, ARM and others. This compatibility with a range of chips and accelerators has driven PyTorch's soaring popularity, even though it is one of the newest entrants in the DL framework race. From Google to Microsoft, tech majors have added support for PyTorch across hardware and cloud, making it one of the best and most accessible platforms for building AI applications.

On the other end of the spectrum, Microsoft's CNTK (the Microsoft Cognitive Toolkit) hasn't won much ground compared to Facebook's PyTorch or Google's offering.

Apache MXNet is Amazon's preferred deep learning framework and, in certain cases, is also known to perform faster than TensorFlow. The framework delivers substantial speedups, especially when computations are performed on a GPU. Meanwhile, Amazon has also reportedly taken the DIY route and built its own chip for use in its data centres.

Need for consolidation across DL frameworks

While there are multiple frameworks, each with its own APIs, representations and execution engine, what they lack is interoperability. Another key roadblock is that not every framework supports the same range of computational devices across multiple machines, so a distributed implementation built for one framework rarely carries over to another. As the AI ecosystem grows, companies should focus on building an interface that integrates well across frameworks and can be extended to different hardware as well.

What's required is a common interface and consolidation across different deep learning frameworks. In a similar vein, Intel is now trying to blunt the advance of AI incumbents such as Google, AWS and Microsoft with its own set of software tools and specialised hardware. What the chip giant has proposed is a 'platform that makes deep learning work everywhere'. Known as PlaidML, it is an advanced and portable tensor compiler for deep learning on end devices such as laptops.

As per its GitHub documentation, PlaidML sits below common machine learning frontends such as Keras, ONNX and nGraph and allows developers to use any hardware it supports. PlaidML runs on most GPUs without the need for CUDA and, on Nvidia hardware, delivers performance comparable to CUDA-based stacks. When combined with the nGraph compiler, it also substantially expands deep learning capabilities across Intel's diverse hardware portfolio.
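Slotting PlaidML in beneath Keras is largely an install-and-configure exercise. A hedged sketch of the setup, with package and command names as documented by the PlaidML project:

```shell
# Install PlaidML's Keras backend plus its benchmarking tool.
pip install plaidml-keras plaidbench

# Interactively pick which OpenCL/Metal device PlaidML should target
# (no CUDA toolkit required).
plaidml-setup

# Point Keras at PlaidML instead of the default TensorFlow backend.
export KERAS_BACKEND=plaidml.keras.backend

# Sanity-check: benchmark a stock Keras model on the selected device.
plaidbench keras mobilenet
```

After the environment variable is set, existing Keras model code runs unmodified on whatever device `plaidml-setup` selected.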

Where's the industry heading?

Interestingly, what fuelled the rise of deep learning was AlexNet, a convolutional neural network released by Alex Krizhevsky in 2012 as the winning entry in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). AlexNet was essentially a deeper version of LeNet, one of the first CNNs, built in 1994 by AI pioneer Yann LeCun.

Since then, the adoption of deep learning techniques across image, text, video and NLP workloads has led to massive growth in heterogeneous hardware, built from the ground up for specific applications. Some of this specialised hardware, FPGAs for example, is expected to outperform GPUs on specific tasks and is often optimised for a single framework.

Given how hardware has become the hotspot for innovation, with semiconductor companies and tech giants chasing custom silicon, companies should work towards interoperability between frameworks and a shared infrastructure that allows developers to tune performance across different hardware and enables resource sharing as well.

 


Richa Bhatia

Richa Bhatia is a seasoned journalist with six years' experience in reportage and news coverage, and has had stints at the Times of India and The Indian Express. She is an avid reader, mum to a feisty two-year-old, and loves writing about the next-gen technology that is shaping our world.