Can probabilistic computing advance artificial intelligence and machine learning? According to research from Nanyang Technological University, Singapore, probabilistic computing is not new. But it is seeing renewed interest because scientists are working on ways to build chips that can tolerate errors and, to a certain extent, even imprecision.
According to Intel CTO Mike Mayberry, the original wave of AI was based on logic and on writing down rules — an approach known as ‘classical reasoning’. In probabilistic computing, by contrast, the energy spent by the processing units is lowered, which increases the probability that some operations go wrong.
Defining Probabilistic Computing
According to USFCA, probabilistic computers turn a simulation problem (forward) into an inference problem (reverse). Berkeley-headquartered Navia Systems, which develops probabilistic computers, describes the technology as being as well suited to making judgements in the presence of uncertainty as traditional computing technology is to large-scale record keeping. The startup, founded in 2007, emphasises that unlike current computers, which are built for logical deduction and precise arithmetic, probabilistic machines and programs are built to handle ambiguity and learn from experience.
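USFCA's forward/reverse framing can be made concrete with a toy model (the names and numbers here are illustrative, not drawn from the research): forward simulation generates coin flips from a known bias, and inference runs the same model in reverse, recovering a distribution over the bias from observed flips via rejection sampling.

```python
import random

def forward(bias, n_flips, rng):
    """Forward simulation: given a parameter (coin bias), generate data."""
    return sum(rng.random() < bias for _ in range(n_flips))

def infer(observed_heads, n_flips, n_samples=100_000, seed=0):
    """Reverse direction: given data, recover a distribution over the
    parameter by rejection sampling -- propose a bias from the prior and
    keep it only when its forward simulation reproduces the observation."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_samples):
        bias = rng.random()              # prior: bias uniform on [0, 1]
        if forward(bias, n_flips, rng) == observed_heads:
            accepted.append(bias)
    return accepted

posterior = infer(observed_heads=8, n_flips=10)
print(sum(posterior) / len(posterior))   # posterior mean, roughly 0.75
```

Rejection sampling is the bluntest possible inference method, but it shows the inversion cleanly: the only model the programmer writes is the forward simulator.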
Intel Bets Big On Probabilistic Computing
Intel is betting big on probabilistic computing as a major component of AI, one that would allow future systems to comprehend and compute with the uncertainties inherent in natural data, and allow researchers to build computers capable of understanding, prediction and decision-making. Mayberry also noted in a post that a key barrier to AI today is that natural data fed to a computer is largely unstructured and ‘noisy’. Making computers efficient at dealing with probabilities at scale, he emphasised, is the key to transforming current systems and applications from advanced computational aids into intelligent partners for understanding and decision-making.
As part of the research, the leading chipmaker established the Intel Strategic Research Alliance for Probabilistic Computing to foster research and partnership with academia and startup communities and to bring innovations from the lab to the real world. The core areas the company wants to address are benchmark applications, adversarial attack mitigation, probabilistic frameworks, and software and hardware optimisation.
Other Companies Are Rethinking Computer Architecture
According to Los Alamos National Laboratory, probabilistic computing is paving the way for new architectures that can optimise speed over traditional methods on basic algorithms. LANL notes that probabilistic computing is both a challenge and an opportunity — a challenge because massive reductions in feature size can lead to non-determinism, and a huge opportunity because it opens up a richer space of algorithms and lets researchers reduce power consumption through certain probabilistic hardware methods.
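That some algorithms tolerate non-deterministic hardware can be sketched with a toy Monte Carlo example (purely illustrative, not an LANL benchmark): even if a small fraction of the hardware's comparisons flip at random, a sampling-based estimate degrades gracefully with a small, predictable bias instead of failing outright.

```python
import random

def estimate_pi(n_samples, error_rate, seed=0):
    """Monte Carlo estimate of pi on unreliable hardware: each
    inside-the-circle test flips with probability error_rate,
    mimicking a non-deterministic gate."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        inside = x * x + y * y <= 1.0
        if rng.random() < error_rate:   # hardware fault flips the result
            inside = not inside
        hits += inside
    return 4 * hits / n_samples

print(estimate_pi(200_000, error_rate=0.0))    # close to 3.14
print(estimate_pi(200_000, error_rate=0.01))   # still usable, small bias
```

A deterministic algorithm hit by the same fault rate could return garbage; here the errors only shift the estimate slightly, and since the fault rate is known the bias could even be corrected analytically.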
Application of Probabilistic Computing In Deep Learning
One of the biggest attractions for researchers is computing at lower power than current paradigms offer. General-purpose machines have reached a high degree of maturity, and there is a raft of use cases and applications, such as audio and speech processing, that require high computing power. As a result, research into new hardware systems is gaining traction in both academia and industry. The probabilistic nature of computation allows orders-of-magnitude speed-ups and substantially reduced power consumption. According to Krishna V Palem, Professor of Computer Science, Electrical & Computer Engineering at Rice University, power is the main constraint on high computing speed, and the central question is how to make the battery last longer in embedded computing devices. Palem emphasised that if computing is modelled to deviate instead of taking a straight-line approach, its meandering behaviour can be controlled to the desired level while saving energy at negligible cost.
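Palem's trade-off can be caricatured in software (a hypothetical truncation model for illustration, not his actual circuit design): route each multiply through an "inexact" unit that drops low-order significand bits — standing in for cheaper, lower-energy arithmetic — and the result of a long dot product is off by only a tiny relative error.

```python
import math

def truncate(x, drop_bits):
    """Model a low-power arithmetic unit by rounding away the low-order
    bits of the significand (an assumed cost model, not real hardware)."""
    if x == 0:
        return 0.0
    m, e = math.frexp(x)                 # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2 ** (53 - drop_bits)        # keep (53 - drop_bits) bits
    return math.ldexp(round(m * scale) / scale, e)

def dot(a, b, drop_bits=0):
    """Dot product with every multiply routed through the inexact unit."""
    return sum(truncate(x * y, drop_bits) for x, y in zip(a, b))

a = [0.1 * i for i in range(1, 101)]
b = [0.01 * i for i in range(1, 101)]
exact = dot(a, b)
inexact = dot(a, b, drop_bits=40)        # keep only ~13 significand bits
print(abs(exact - inexact) / exact)      # tiny relative error despite big cut
```

Discarding 40 of 53 significand bits would let real hardware use far smaller multipliers, yet the relative error here stays well under 0.1% — the kind of "controlled meandering" Palem describes.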
AI-focused hardware has spawned an ecosystem of startups working to make AI operations smoother. A case in point is California-based SambaNova Systems, which aims to power a new generation of computing by creating a new platform. According to news sources, the startup believes there is still room for disruption even though NVIDIA’s GPUs have become the de facto standard for deep learning applications in the industry. The company, which raised $56 million in Series A funding, wants to build a new generation of hardware that can work in any AI-focused device, from a chip powering self-driving technology to a server, news reports indicate.
Currently, the AI hardware market has seen large companies like Apple and Google push their specialised hardware to speed up tasks like computer vision and image recognition, but AI and data analytics are no longer restricted to big tech firms. Other startups operating in a similar area, such as Graphcore and China’s Horizon Robotics, are also plowing investment into hardware and giving stiff competition to GPUs — the backbone of intensive computational applications for AI-related technologies. Practically every large company from Facebook to Baidu has invested in GPUs to fast-track work on deep learning applications and train complex models. While GPUs are pegged to be 10 times more efficient than CPUs, NVIDIA claims that in terms of power consumption, too, GPUs are driving energy efficiency in the computing industry.