Small-Scale Accelerators In Machine Learning: A Brief Overview

As machine learning and artificial intelligence pervade the computing environment, the demand for better hardware resources is growing significantly. Even heavily optimised general-purpose hardware is rarely a perfect fit for ML applications.

With a plethora of devices on the market — multi-core processors, large cloud-based databases — it is often tough to pick the right one for a specific ML purpose. One hardware component that has gained popularity in recent times is the accelerator. Accelerators are a class of microprocessors designed specifically for AI and ML related tasks.

In this article, we will discuss a particular type of accelerator — developed by researchers at the Institute of Computing Technology (ICT), China — which is embedded in a powerful processor and has proven to be energy-efficient.

Accelerator And Its Design

ML algorithms such as convolutional neural networks and deep neural networks are gradually being deployed across most self-learning applications. These algorithms require powerful computing resources to perform efficiently. Currently, accelerators such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs) are used to run ML algorithms built around complex neural networks.

However, these hardware components focus on the implementation of the algorithms rather than on the effect the algorithms have on memory and processing speed. Neural networks tend to grow in size and complexity as they are modified over time, which presents a computational challenge. This creates demand for a flexible accelerator design that can accommodate such changes, both in terms of scalability and efficiency, especially for algorithms involving large neural networks.

Researchers at ICT kept these factors in mind when designing a novel accelerator. Most importantly, the design delivers high performance in a small area (a microprocessor chip) while consuming little power and leaving a small energy footprint. Hence, the focus of the design is on memory rather than computation.

Using Processors For Design

Large neural networks (NNs) and similar ML algorithms typically generate heavy memory traffic during operation, so accelerators for these networks need to be designed layer by layer to extract the best performance. In the design study by Tianshi Chen and others from ICT, China, the researchers consider processor-based implementations and apply locality analysis to every layer in the network. They benchmark four neural network layers — CLASS1, CONV3, CONV5 and POOL5 — and assess the bandwidth impact each has on memory. In the researchers’ words:

“We use a cache simulator plugged to a virtual computational structure on which we make no assumption except that it is capable of processing Tn neurons with Ti synapses each every cycle. The cache hierarchy is inspired by Intel Core i7: L1 is 32KB, 64-byte line, 8-way; the optional L2 is 2MB, 64-byte, 8-way. Unlike the Core i7, we assume the caches have enough banks/ports to serve Tn × 4 bytes for input neurons, and Tn×Ti ×4 bytes for synapses. For large Tn, Ti, the cost of such caches can be prohibitive, but it is only used for our limit study of locality and bandwidth.”
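To get a sense of the cache bandwidth implied by that setup, the short Python sketch below works out how many bytes such a cache would have to serve each cycle for a given tile size. The values of Tn and Ti used here are illustrative assumptions, not figures from the study.

```python
# Bytes per cycle the cache must serve under the quoted assumptions:
# Tn x 4 bytes for input neurons and Tn x Ti x 4 bytes for synapses.
# The tile sizes below are illustrative assumptions only.

WORD_BYTES = 4

def bytes_per_cycle(tn: int, ti: int) -> tuple[int, int]:
    """Return (neuron_bytes, synapse_bytes) fetched each cycle."""
    neuron_bytes = tn * WORD_BYTES            # Tn input neurons
    synapse_bytes = tn * ti * WORD_BYTES      # Tn x Ti synapses
    return neuron_bytes, synapse_bytes

for tn, ti in [(16, 16), (64, 64)]:
    neurons, synapses = bytes_per_cycle(tn, ti)
    print(f"Tn={tn}, Ti={ti}: {neurons} B/cycle (neurons), {synapses} B/cycle (synapses)")
```

Even modest tile sizes push the synapse traffic into kilobytes per cycle, which is why the authors note that such multi-ported caches are practical only for a limit study of locality and bandwidth.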

The locality study is repeated across three categories of NN layers — classifier, convolutional and pooling layers. Convolutional layers strike the best balance between synapses and neurons relative to performance. Their synapses are unique and are not reused by other neurons, so convolutional layers place a higher memory-bandwidth demand than the other layer types.
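A rough back-of-the-envelope sketch (with hypothetical layer shapes, not the benchmarked CLASS1/CONV3/CONV5/POOL5 layers) shows why the synaptic-weight footprint, and hence the bandwidth demand, differs so much between layer categories:

```python
# Hedged sketch: rough memory footprint of the synaptic weights each layer
# type has to stream from memory. Layer shapes are hypothetical and only
# meant to illustrate the contrast discussed above.

WORD_BYTES = 4  # each weight is a 4-byte value, as in the limit study above

def classifier_weight_bytes(n_in, n_out):
    # Fully connected classifier layer: one unique weight per (input, output)
    # pair, so nothing is reused and everything must come from memory.
    return n_in * n_out * WORD_BYTES

def conv_weight_bytes(kx, ky, c_in, c_out, out_x, out_y, shared=False):
    # Convolutional layer: kx*ky*c_in weights per kernel. If kernels are not
    # shared across output positions, every output position needs its own.
    kernels = c_out if shared else c_out * out_x * out_y
    return kx * ky * c_in * kernels * WORD_BYTES

def pool_weight_bytes():
    # Pooling layers carry no synaptic weights at all.
    return 0

print("classifier   : %.1f MB" % (classifier_weight_bytes(4096, 1000) / 1e6))
print("conv, unique : %.1f MB" % (conv_weight_bytes(3, 3, 128, 128, 32, 32) / 1e6))
print("conv, shared : %.1f MB" % (conv_weight_bytes(3, 3, 128, 128, 32, 32, shared=True) / 1e6))
print("pooling      : %d bytes" % pool_weight_bytes())
```

Whenever the weights are unique to each output, the entire footprint has to be streamed from memory, whereas pooling layers have no weights to fetch at all.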

Accelerator In NNs

The NNs are implemented in hardware to match the conceptual representation of these networks mentioned earlier: the neurons form the logic circuits and the synapses form the RAM, or memory. These components are integrated into embedded-system applications for faster performance with lower power consumption. For larger and more complex NNs, buffers are placed between the neurons to handle data control and temporary storage. These are connected to a computational sub-system that computes neurons and synapses (in the study it is referred to as the Neural Functional Unit, along with the control logic).

The accelerator therefore consists of neurons, synapses, an input buffer (NBin) for input neurons, an output buffer (NBout) for output neurons, a buffer for synaptic weights (SB) and the computational sub-system. The typical accelerator architecture is given below:

Figure: Accelerator architecture with direct memory access (DMA). Image by Tianshi Chen et al.
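To make the roles of NBin, SB, NBout and the computational sub-system concrete, here is a minimal software sketch of the tiled computation such a design implies for a fully connected layer. The tile sizes Tn and Ti, the layer dimensions and the use of tanh as the activation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the tiled computation an NFU-style datapath performs for a
# fully connected layer: Tn output neurons consume Ti input neurons (and the
# Tn x Ti corresponding synapses) per step. Buffer roles mirror the article:
# NBin holds input neurons, SB holds synaptic weights, NBout collects outputs.
# Tile sizes and layer dimensions are illustrative assumptions.

Tn, Ti = 16, 16
N_IN, N_OUT = 256, 64

rng = np.random.default_rng(0)
inputs = rng.standard_normal(N_IN).astype(np.float32)              # all input neurons
weights = rng.standard_normal((N_OUT, N_IN)).astype(np.float32)    # all synapses

nb_out = np.zeros(N_OUT, dtype=np.float32)                         # NBout: output neurons

for n0 in range(0, N_OUT, Tn):                                     # tile of output neurons
    acc = np.zeros(Tn, dtype=np.float32)                           # partial sums stay on chip
    for i0 in range(0, N_IN, Ti):
        nb_in = inputs[i0:i0 + Ti]                                  # NBin: Ti input neurons
        sb = weights[n0:n0 + Tn, i0:i0 + Ti]                        # SB: Tn x Ti synapses
        acc += sb @ nb_in                                           # NFU: multiply-accumulate
    nb_out[n0:n0 + Tn] = np.tanh(acc)                               # NFU: activation, write NBout

# Check against a direct computation of the same layer.
assert np.allclose(nb_out, np.tanh(weights @ inputs), atol=1e-5)
```

Each step loads only a small tile of neurons and synapses into the buffers, the inner loop performs the multiply-accumulate and activation, and only finished output neurons are written back — which is the memory behaviour the buffers are there to exploit.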

The design is then evaluated using three tools, namely an accelerator simulator, CAD tools and a single instruction, multiple data (SIMD) baseline. The first two are used to explore and simulate the accelerator architecture; the latter is used to assess the accelerator's speed and energy. The accelerator was observed to be about 100 times faster than a 128-bit 2GHz SIMD core, with energy consumption reduced by a factor of 21 relative to a standard multi-core processor.

Conclusion

The accelerator discussed here can be applied to a broader set of ML algorithms; all it needs is due diligence with respect to NN layers, storage structures and ML parameters. One point worth noting is that this accelerator achieved high throughput within a very small processor area. This suggests that, as ML implementations grow larger, hardware complexity can be kept in check through innovations of this kind.
