While the likes of NVIDIA have already defined the space for chip-making and powerful computers, back in India, startups like AlphaICs (Alpha Integrated Circuits) are vowing to introduce revolutionary changes in the world of high-performance computing and data centres using artificial intelligence.
Founded by Nagendra Nagaraja along with Prashant Trivedi, AlphaICs brings together perception and decision-making to enable real AI at the edge. Having worked in the chip-making industry for over 19 years, Nagaraja always wondered if AI could be used to design a chip and processor, and that is what prompted him to lay the foundation stone along with Trivedi.
AlphaICs also boasts Vinod Dham, the “father of the Pentium chip”, as one of its co-founders, and he has been a key driver in chalking out the company’s product plan.
Analytics India Magazine caught up with Nagendra Nagaraja and Prashant Trivedi to get a gist of what AlphaICs is, its focus on AI 2.0, its AI processing platform, its growth plans in India and much more.
What is AlphaICs?
Founded in 2016, AlphaICs is all about designing AI chips. Nagaraja shares that this product line is directed towards something called AI 2.0, where the company would enable the next generation of AI with this series of products. “We essentially have two lines of products — one is meant for edge computing and the other for the data centre. We use the RAP processor architecture, now on FPGA, which has a new programming model called agent-based compute that will essentially power AI at the edge as well as in the data centre. We are supporting prospective customers with runtimes, compilers, libraries, debug and performance tool chains, and simulators. We are also co-developing complex libraries with a few select research institutes for the deployment of large hybrid AI systems,” he says.
He further shares that they are moving away from pure data science towards a programming model built on agents, where agents interact with a live environment, learn, and provide intelligent compute. “This is the premise for AlphaICs, and it is not only good for interactive learning but can also process labelled data. With this, we have an inbuilt capacity to bring revolutions for the next few decades,” he said.
Perception And Decision Making
While most GPUs and the current crop of accelerators today are primarily focused on the perception domain, which is used to identify what an object is, AlphaICs’ chip promises to deliver exceptional results in decision-making as well. Nagaraja explains, “For example, look at autonomous vehicles. GPUs are doing a pretty good job there with tasks like identifying cars or stop signs, among others. But our chip is capable of infusing a level of AI where it can not only perceive the situation but decide what to do next.”
On being asked if it is already under deployment, Nagaraja is quick to add that they have already built the FPGA demo boards and simulators, and are currently testing them in the lab for automotive use. “We are also partnering with one of the big server makers of the world to see if it can work on those kinds of servers as well,” he said. While the chip and software stack are already done, they are now trying to perform alpha testing with some of the customers.
“We also have a simulator for the processor, and in the next few months we will be deploying this chip across the globe,” shared Nagaraja.
Understanding AI 2.0
According to the founders, AI 2.0 will be the most outward-looking technology enabler, moving applications beyond social networks and aggregators to intelligent machines that explore unknowns and generate huge amounts of actionable data for the benefit of human society. While the current AI systems are net consumers of data, returning little to the end user beyond business insights, AI 2.0 would be able to deliver much more.
“AI 2.0 is where the data is converted into some kind of policies or learning that generates data to be used by servers. For instance, a camera generates images and converts them into data, which runs into megabits or terabytes. It converts that data into labels and policies at the edge, which will actually be used in the servers. It is not necessary to transmit all the images and videos from the sensor; the problem with that approach is that it is not practical,” says Nagaraja.
Adding to the challenges that exist today, he shares that even the deepest of networks has a limit to how deep it can go to analyse this data, and in what capacity. This is where AI 2.0 comes into the picture: it converts data into intelligence at the source, and what goes into the data centre is this intelligence, which is then processed by the system.
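The bandwidth argument behind "intelligence at the source" can be made concrete with a toy sketch. AlphaICs has not published its edge stack, so everything here is a generic illustration: a stub `classify` function stands in for an on-device model, and the edge node transmits only compact labels instead of raw sensor frames.

```python
# Toy illustration of the AI 2.0 premise: convert raw sensor data into
# "intelligence" (labels/policies) at the edge, and ship only that to the
# data centre. The model is a stand-in, not AlphaICs' actual software.

def classify(frame: bytes) -> str:
    """Stand-in for an on-device model: maps a raw frame to a label."""
    return "car" if frame[0] % 2 == 0 else "pedestrian"

def edge_node(frames):
    """Convert raw frames into labels at the source (the 'edge')."""
    return [classify(f) for f in frames]

if __name__ == "__main__":
    # 1000 simulated 1 KB camera frames (~1 MB of raw sensor data)
    frames = [bytes([i % 256]) * 1024 for i in range(1000)]
    labels = edge_node(frames)

    raw_bytes = sum(len(f) for f in frames)
    sent_bytes = sum(len(l.encode()) for l in labels)
    print(f"raw: {raw_bytes} B, transmitted: {sent_bytes} B")
```

The point of the sketch is the ratio: the data centre receives a few kilobytes of labels rather than a megabyte of frames, which is the "equitable exchange" between edge and data centre described below.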
“We have a few big data companies such as Google, Facebook and Microsoft that have most of the structured data, and it is advantageous for them to use deep learning, but overall it is not benefiting society in a big way. What we are saying is that we can actually convert this data into information at the edge so that there is an equitable exchange of data between the edge and the data centre. We are converting this big data concept into more distributed data and distributed intelligence, which is more efficient than centralized intelligence,” he said.
Real AI Processing Platform
Nagaraja shares that RAP, the company’s Real AI Processing platform, is based on a different programming model. “There are 12 chip makers who are venturing into machine learning. However, if you look at the different programming models in the history of humankind, we are the ones who have created this unique programming model, which is based on agents,” he said.
He further said that while GPUs are based on kernels, their chips are based on agents. Nagaraja believes that in a few years, all AI applications will be running on their unique platform. The platform could very well be supported with different frameworks such as TensorFlow, Torch or Caffe; however, it currently supports TensorFlow, and in another 3-4 months it would be able to support all the other frameworks.
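The contrast the founders draw can be sketched in a few lines. AlphaICs' actual agent API is proprietary and unpublished, so the class below is purely hypothetical: in the kernel model a host dispatches each compute step to the accelerator, whereas in an agent model a perceive-decide-learn loop runs autonomously and only reports results back.

```python
# Generic, hypothetical sketch of agent-based compute (not AlphaICs' API).
# The whole loop would run on the accelerator, avoiding the per-step
# host <-> device round-trips inherent in kernel-based programming.

class Agent:
    """Toy agent: keeps a running estimate and adapts it from observations."""
    def __init__(self):
        self.estimate = 0.0

    def perceive(self, observation: float) -> float:
        return observation - self.estimate      # prediction error

    def decide(self, error: float) -> str:
        return "raise" if error > 0 else "lower"

    def learn(self, error: float, rate: float = 0.5) -> None:
        self.estimate += rate * error           # adapt from interaction

def run_agent(observations):
    """Perceive -> decide -> learn, with no host intervention per step."""
    agent = Agent()
    actions = []
    for obs in observations:
        err = agent.perceive(obs)
        actions.append(agent.decide(err))
        agent.learn(err)
    return agent.estimate, actions
```

Note how the agent learns from interaction with a live stream rather than from a fixed labelled dataset, which is the "interactive learning" premise described earlier.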
“Not only is this unique in that all the frameworks are supported on an agent-based computing platform, but we also have libraries to accelerate this. We develop most of the libraries ourselves and are currently adding more. Once this is complete, we will be more involved in the optimization and deployment of these developments,” he said.
Use Cases Of The Chip
Dwelling on some of the early use cases they are targeting, Nagaraja confesses that the biggest problem in AI today is not accelerating matrix multiplication or big data workloads, but validating the data itself.
“For instance, in self-driving cars, if the data itself is not stationary, there is no point in having labelled data and accelerating after that. Where models don’t work in self-driving cars, or in server workloads such as financial time series or streaming videos where labelled data doesn’t work, these are the use cases we are targeting. The architecture itself is suited to learning quickly on such data without too much structured data,” he said.
“Our first use case would be automotive, where we are building a stack for automotive over the processor,” he added.
Some of the other avenues they are targeting are the financial sector, healthcare and other scenarios where structured data is still lacking.
Nagaraja also shares that on the server side, they are targeting areas such as HPC (high-performance computing), which requires very low latency and where the GPU is not scalable due to its innately data-parallel architecture. “Our scheme for the data centre is to go with a locally model-parallel and globally data-parallel architecture,” he said.
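A "locally model-parallel, globally data-parallel" scheme can be illustrated with a toy sketch. Nothing here reflects AlphaICs' actual implementation: the two "stages" stand in for model partitions placed on separate devices within a node, and each node processes its own shard of the data.

```python
# Hypothetical illustration of "locally model-parallel, globally
# data-parallel". Within a node, the model's stages sit on separate
# devices (model parallelism); across nodes, the same model is
# replicated and each replica handles a shard of data (data parallelism).

def stage_a(x):           # first half of the model, notionally on device 0
    return x * 2

def stage_b(x):           # second half of the model, notionally on device 1
    return x + 1

def node_forward(shard):
    """Model-parallel within a node: stage_a feeds stage_b, pipeline-style."""
    return [stage_b(stage_a(x)) for x in shard]

def cluster_forward(data, n_nodes=2):
    """Data-parallel across nodes: each node processes its own shard."""
    shards = [data[i::n_nodes] for i in range(n_nodes)]
    results = [node_forward(s) for s in shards]   # would run on separate nodes
    merged = [None] * len(data)
    for i, res in enumerate(results):
        merged[i::n_nodes] = res
    return merged
```

The hybrid scheme keeps the chatty stage-to-stage traffic local to a node while the only global communication is shard distribution and result collection, which is one way low latency could be preserved at scale.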
Competitive Edge Over Giants Like NVIDIA
Nagaraja is quick to add that their competitors, such as AMD, NVIDIA, Qualcomm, ARM and others, have an inherent limitation in their kernel-based architectures. “In kernel-based programming, the major limitation is that there is too much communication between the host and the accelerator, and they don’t take away the workload. We compete with these players by offering superior performance, lower latency, more safety and better usability. These are the parameters on which we are competing,” he said.
Suggesting why AlphaICs might have a longer run in the competition, Nagaraja said that it is much easier to program agent-based compute than conventional architectures, for which there is no established programming model for AI.
Growth Story So Far
They have just completed their two processors in proto mode and are testing them thoroughly to optimize the software stack. Once that is done, the team aims to focus on deployment, evaluation, customer acquisition and other marketing plans.
“What we are planning to do in the next growth phase is to test that our design and concept are working. We have also created a demo of these agents, which are being used to play games on the same processor. The next stage is commercial deployment and commercial applications. Going forward, we see more traction in the edge and server markets compared to the current scenario,” said Trivedi.
Currently a team of 23, they aim to add more AI scientists and researchers who are passionate about math. They are also looking to recruit software engineers who can do parallel programming, write and optimize compilers, and integrate applications on the processor.
They are also seeking Series B funding. “We have raised about 3.5 million dollars so far and are looking to raise 15-25 million dollars in the next round,” he said. They aim to use it to further develop the chips and build a marketing presence in South-East Asia.
“While currently we are targeting cars, robots, drones and industrial automation, going forward we would like to look at bigger applications such as healthcare and surveillance,” he said.
While they have an ambitious plan for the startup, the founders believe that certain challenges still have to be overcome for a smooth run. “Recruiting, reaching out to global customers, consolidation and scaling the business are some of the challenges that we are eyeing to overcome right now,” said Nagaraja while signing off.