Race for Artificial Intelligence chips

Once just a figment of the imagination of science fiction writers, artificial intelligence (AI) is taking root in our everyday lives. We’re still a few years away from having robots at our beck and call, but AI has already had a profound impact in more subtle ways. Weather forecasts, email spam filtering, Google’s search predictions, and voice recognition, such as Apple’s Siri, are all examples. What these technologies have in common are machine-learning algorithms that enable them to react and respond in real time.

AI workloads are different from the calculations most of our current computers are built to perform. AI implies prediction, inference, and intuition. But the most creative machine learning algorithms are hamstrung by machines that can’t harness their power. Hence, if we’re to make great strides in AI, our hardware must change, too.

End of Moore’s Law

Moore’s Law, named after Intel co-founder Gordon Moore, states that the number of transistors that can be placed on an integrated circuit doubles roughly every two years. For decades, chipmakers have succeeded in shrinking chip geometries, allowing Moore’s Law to remain on track and consumers to get their hands on ever more powerful laptops, tablets, and smartphones.
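As a back-of-the-envelope illustration of what that doubling rate implies (the starting count and time spans below are arbitrary, chosen only to show the trend), a transistor budget that doubles every two years grows roughly 32-fold in a decade:

    # Back-of-the-envelope illustration of Moore's Law scaling.
    def transistors_after(years, start_count=1_000_000, doubling_period_years=2):
        """Projected transistor count if the count doubles every doubling_period_years."""
        return start_count * 2 ** (years / doubling_period_years)

    for years in (2, 10, 20):
        print(f"After {years:2d} years: ~{transistors_after(years):,.0f} transistors")
    # After 2 years: ~2,000,000; after 10 years: ~32,000,000; after 20 years: ~1,024,000,000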

But in the next few years “transistors could get to a point where they could shrink no further.” While it will still be technically possible to make smaller chips, they will reach “the economic minimum” at which the costs of further shrinking will be too high to justify.

If you’ve been following the AI journey so far, you’ll have seen that we sprinted ahead on the back of CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks), but that progress beyond these applications is only now emerging. The next wave of progress will come from Generative Adversarial Networks (GANs) and Reinforcement Learning, with some help thrown in from Question Answering Machines (QAMs) like Watson.

What we see shaping up is a three-way race for the future of AI based on completely different technologies.  Those are:

  1. High Performance Computing (HPC)
  2. Neuromorphic Computing (NC)
  3. Quantum Computing (QC)

Neuromorphic and quantum computing have always seemed years away. The fact, however, is that commercial neuromorphic chips and quantum computers are already being used today in operational machine learning roles.

High Performance Computing

The path that everyone has been paying the most attention to is high performance computing. The idea is to stick with the deep neural net architectures we know, and just make them faster and easier to access.

While Intel, Nvidia, and other traditional chip makers rush to capitalize on the new demand for GPUs, others like Google and Microsoft are busy developing proprietary chips of their own that make their deep learning platforms a little faster or a little more desirable than the competition.

Google offers TensorFlow as its powerful, general-purpose solution, combined with its newly announced proprietary chip, the TPU (Tensor Processing Unit).
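As a rough sketch of what that pairing looks like in code (this uses TensorFlow’s present-day TPU distribution API, which post-dates the announcement described here, and the tiny Keras model is just a placeholder, not anything Google ships):

    import tensorflow as tf

    # Connect to a TPU attached to the runtime; tpu="" lets the resolver
    # auto-detect the device (e.g. on a Cloud TPU VM or in Colab).
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    # Any model built inside the strategy scope has its variables and its
    # training step distributed across the TPU cores automatically.
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")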

Microsoft has been touting its use of non-proprietary FPGAs and has just released an upgrade of its Cognitive Toolkit (CNTK).

Neuromorphic Computing

A new approach called neuromorphic computing has been picking up momentum over the last decade. It seeks to leverage the brain’s strengths by using an architecture in which chips act like neurons. When the pulses, or ‘spikes’, arriving at a neuron reach a certain activation level, it sends a signal over its synapses to other neurons. Much of the action, however, happens in the synapses, which are ‘plastic’, meaning that they can change in response to this activity and store what they learn. Unlike a conventional system with separate compute and memory, neuromorphic chips have lots of memory located very close to the compute engines.
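Here is a minimal sketch of the spiking behaviour described above, using a leaky integrate-and-fire neuron (a standard textbook abstraction, not the design of any particular neuromorphic chip):

    import numpy as np

    # Leaky integrate-and-fire neuron: incoming spikes add to the membrane
    # potential, the potential leaks back toward zero over time, and the
    # neuron fires (and resets) when the potential crosses a threshold.
    def simulate_lif(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
        potential = 0.0
        output_spikes = []
        for spike_in in input_spikes:
            potential = leak * potential + weight * spike_in  # integrate + leak
            if potential >= threshold:                        # fire
                output_spikes.append(1)
                potential = 0.0                               # reset
            else:
                output_spikes.append(0)
        return output_spikes

    rng = np.random.default_rng(0)
    inputs = rng.integers(0, 2, size=20)   # a random train of incoming spikes
    print("in :", list(inputs))
    print("out:", simulate_lif(inputs))

On a real neuromorphic chip, updates like this happen in parallel across many physical neurons and synapses rather than in a sequential software loop.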

The new brain-inspired architecture will enable machines to do things that silicon chips can’t.

Traditional chips are good at making precise calculations on any problem that can be expressed in numbers. A neuromorphic system, by contrast, can identify patterns in visual or auditory data and adjust its predictions based on what it learns.

A research paper by Intel scientist Charles Augustine predicts that neuromorphic chips will be able to handle artificial intelligence tasks such as cognitive computing, adaptive artificial intelligence, sensing data, and associative memory, while using 15-300 times less energy than the best CMOS chips.

That’s significant because today’s AI services, such as Siri and Alexa, depend on cloud-based computing in order to perform such feats as responding to a spoken question or command. Smartphones run on chips that simply don’t have the computing power to use the algorithms needed for AI, and even if they did they would instantly drain the phone’s battery.

Companies such as Intel, IBM, and Qualcomm are now involved in a high-stakes race to develop the first neuromorphic computer.

Neuromorphic chips already available include IBM’s TrueNorth and Intel’s Loihi.

Quantum Computing

What really sets a quantum computer apart from a regular digital computer is the fundamental nature of how data is encoded via quantum properties like superposition or entanglement. A digital bit is either 0 or 1, but a quantum bit (or qubit) can be 0, 1 or a superposition of both states.
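As a toy illustration of that statement (a classical simulation of a single qubit, of course, not real quantum hardware):

    import numpy as np

    # A qubit's state is a 2-element complex vector of amplitudes over |0> and |1>.
    ket0 = np.array([1, 0], dtype=complex)            # the |0> state

    # The Hadamard gate puts |0> into an equal superposition of |0> and |1>.
    H = np.array([[1, 1],
                  [1, -1]], dtype=complex) / np.sqrt(2)

    state = H @ ket0
    probabilities = np.abs(state) ** 2                # Born rule: |amplitude|^2

    print("amplitudes   :", state)                    # [0.707+0j, 0.707+0j]
    print("P(measure 0) :", probabilities[0])         # 0.5
    print("P(measure 1) :", probabilities[1])         # 0.5

A register of n qubits is described by 2^n such amplitudes at once, which is where the hoped-for speed-ups on certain problems come from.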

Quantum computers are an area of huge interest because, if they can be built at a large enough scale, they could rapidly solve problems that cannot be handled by traditional computers.

Quantum supremacy is a key milestone on the journey towards quantum computing. The idea is that if a quantum processor can be operated with low enough error rates, it could outperform a classical supercomputer on a well-defined computer science problem.

That’s why the biggest names in tech are racing ahead with quantum computing projects.

In 2018, Intel announced its own 49-qubit quantum chip, code-named Tangle Lake. Soon after, Google’s Quantum AI Lab showed off a new 72-qubit quantum processor called ‘Bristlecone’.

Both neuromorphic and quantum computing are laying out competitive roadmaps for getting to deep learning, and even newer forms of artificial intelligence, faster and perhaps more easily. Only time will tell which approach comes out ahead.

PS: The story was written using a keyboard.

Nitin Srivastava

Nitin is a part of the AIM Writers Programme. He is an experienced data and analytics consultant. For the last two decades, he has been extensively working with large organisations in implementing and managing data warehouses and creating analytic solutions for various domains, predominantly in the BFSI sector.