It is one of the greatest technological breakthroughs of all time and has now seeped into the public consciousness. The year 2017 proved phenomenal for the tech world’s hottest buzzword, Artificial Intelligence, which saw a groundswell of research led by tech giants such as Google, Baidu, Facebook, Amazon and Microsoft, and which has also emerged as a vibrant academic field. The buzz around AI was so high that it grabbed the attention of policymakers across the globe, with nations such as China, the USA, the UK, Canada, Russia, Singapore and the UAE hailing it as the next growth engine, each hoping its success could make them an AI superpower.
Even India, determined not to lag behind in the AI race, set up its own AI Task Force with a mission to “create a policy and legal framework to accelerate deployment of AI technologies”. With the lofty goal of making India an AI-rich economy, the government is seeking a framework to embed AI in the country’s economic, political and legal systems.
AI’s Conundrum – One-trick pony ANI proves its worth in business
Despite the genuine advances and the current hype, misinformation around technology terms abounds. Most AI-led innovations today fall under the bracket of Artificial Narrow Intelligence (ANI). From Apple’s Face ID face recognition to the smooth voices of Siri and Cortana, chatbots, Google Assistant, the Google Photos app and DeepMind’s Go-playing program, all are examples of ANI – the result of brute-force statistics, made possible by the quantity of data fed into the models, each trained on a huge, task-specific dataset to accomplish a single job.
These AI systems have been dubbed “one-trick ponies” by a financial outlet, since each is designed to perform only one task. Despite human-level accuracy on those tasks, a body of researchers believes AI has yet to make the big leap from quantitative improvement to a qualitative one. For example, DeepMind’s program plays the ancient Chinese game Go exceptionally well, but that doesn’t imply it can also find a cure for cancer, says Moscow-based AI researcher Roman Trusov, who works at XIX.ai. So, for all its promise and excitement, ANI has its own limitations.
In this article, we lay out the difference between ANI and AGI:
Artificial Narrow Intelligence: According to Frank Chen, a partner at the venture capital firm Andreessen Horowitz, the everyday use of AI falls within the category of ANI (Artificial Narrow Intelligence), which works only in a pre-defined range or on a single task. What we see in the world right now is broadly known as Narrow AI: it helps make better predictions, powers chat interfaces that understand customers better, and supports data-driven decisions across the entire product lifecycle.
So far, ANI has worked well for businesses and has proved valuable; VCs are hard at work identifying startups that use narrow AI techniques to solve business problems. But for all its promise, ANI has its own setbacks and is not, by itself, a game-changer.
- Data dependency: AI’s data dependency has been well chronicled – the technology is known for its ever-growing appetite for data-crunching, since large amounts of labelled data lead to accurate results. The prevailing consensus is that the more the data, the smarter the AI, because the amount of training data has a direct impact on performance. According to an observation by Bloomberg, this dependency is ANI’s biggest setback to future development, and several new research efforts are already underway to counter the current approach.
- Performance degrades with small changes: Even though ANI has shown great promise in business applications, the performance of Narrow AI systems can degrade significantly if the task is changed even slightly. For example, when a self-driving car encounters a situation it cannot handle autonomously, it will usually either drop into safe mode or screech to a halt.
- Machines exhibit human-level accuracy only on a set of tasks: Machines have matched human-level accuracy on tasks such as recognizing speech and text, parsing the structure of sentences, mining legal documents and translating stories from German into English. According to MIRI, AI is good at certain tasks and mediocre at others.
- ANI is narrow because we can define the scope of output: In contrast to AGI, ANI presents few risks, since researchers can define the scope of an ANI system’s output, and such systems are used primarily from a utilitarian perspective. ANI systems will only do the tasks they are programmed to do.
- ANIs give scope for control: By controlling the end goal, researchers gain a measure of control over ANI systems. An ANI system will simply be programmed to achieve its goal and will not pursue unintended actions, whereas an AGI, while achieving its goal, could also deviate into activities that pose risks to humankind.
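The fragility described in the list above can be seen in a toy sketch (pure NumPy, with entirely made-up data): a nearest-centroid classifier trained on one data distribution works almost perfectly in its narrow setting, but degrades sharply when the task shifts only slightly.

```python
import numpy as np

rng = np.random.default_rng(0)

# A deliberately "narrow" model: classify 2-D points by nearest class centroid.
def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Training task: two well-separated clusters.
X_train = np.vstack([rng.normal([0, 0], 0.5, (200, 2)),
                     rng.normal([3, 3], 0.5, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)
model = fit_centroids(X_train, y_train)

# In-distribution test set: same clusters as training.
X_same = np.vstack([rng.normal([0, 0], 0.5, (100, 2)),
                    rng.normal([3, 3], 0.5, (100, 2))])
y_same = np.array([0] * 100 + [1] * 100)

# "Slightly changed" task: the same two classes, but the clusters have drifted.
X_shift = np.vstack([rng.normal([1.5, 1.5], 0.5, (100, 2)),
                     rng.normal([2.0, 2.0], 0.5, (100, 2))])
y_shift = np.array([0] * 100 + [1] * 100)

acc_same = (predict(model, X_same) == y_same).mean()    # near-perfect
acc_shift = (predict(model, X_shift) == y_shift).mean()  # markedly worse
```

The model never "understands" the classes; it only memorizes where the training data happened to sit, which is why a small drift in the task is enough to break it.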
AGI – Ushering in the arrival of super-intelligence
AGI, as we know, ushers in the arrival of superintelligence; it is also referred to as Human-Level AI. According to the Machine Intelligence Research Institute,
- AGI by and large means “the ability to achieve complex goals in difficult environments with limited computational resources.”
- Another idea often echoed along with AGI is the ability to transfer learning from one domain to another.
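The transfer idea in the second bullet can be sketched in miniature (NumPy only, with synthetic data and hypothetical task names): a representation learned on a data-rich "source" task is frozen and reused on a different "target" task, rather than learning the target from scratch.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hidden structure shared by both tasks: one informative latent direction.
d = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)

def make_task(n):
    """Synthetic task: labels follow the sign of a shared latent factor."""
    t = rng.normal(0, 1, n)
    X = np.outer(t, d) + rng.normal(0, 0.1, (n, 4))
    return X, (t > 0).astype(int)

# Source task: plentiful data; learn a representation (the first principal
# component of the centered data).
X_src, _ = make_task(500)
center = X_src.mean(axis=0)
_, _, Vt = np.linalg.svd(X_src - center, full_matrices=False)
w = Vt[0] * np.sign(Vt[0, 0])   # resolve the sign ambiguity of SVD

def extract(X):
    """Frozen feature extractor transferred from the source task."""
    return (X - center) @ w

# Target task: reuse the transferred feature and classify by a threshold,
# with no further representation learning.
X_tgt, y_tgt = make_task(200)
pred = (extract(X_tgt) > 0).astype(int)
acc = (pred == y_tgt).mean()    # high, despite never training on the target
```

Today's narrow systems rarely manage this outside such contrived settings; the point of the sketch is only to show what "carrying knowledge across domains" means mechanically.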
Lack of a research agenda for AGI: Looking back at history, a few leading scientists believed that human-level performance at chess would signal the arrival of AGI, or at least bring us within shouting distance of it. But machines beat humans at chess more than a decade ago, and we have yet to achieve Human-Level AI. One reason for the long time-lag could be that we lack a research agenda for AGI. Also, given that ANI has proven its business value, researchers at private tech organizations are channeling their efforts into improving ANI. According to research analyst Luke Muehlhauser, Google’s self-driving project could have signalled the arrival of AGI, but even that was achieved through a series of cheats (mapping all the roads, parking lots and ramps in advance).
Is Strong AI a possibility, and are we on a path to achieve it?
Recent developments in the AI community (Singularity.net and Mindfire) indicate that the best approach to achieving AGI may be to first map out how the brain does it. According to tech entrepreneur Todd McKissick, machine learning researchers cannot create AGI without understanding exactly how the brain works. According to one tech expert, AGI will be able to replicate limited sub-functions of the human brain, but not higher cognition such as logic and narratives.
According to AI researcher Trusov, there are two approaches to AGI:
In the abstract reasoning model, algorithms have to go beyond learning context-dependent tasks and understand the relationships between abstractions. In the Vinograd model, one constructs a controlled environment that presents different tasks to an acting agent; the idea is to use various tasks to train an algorithm that can eventually solve anything within that environment. The OpenAI Gym, an example of the Vinograd model, attempts to create just such a learning environment.
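The environment-centric pattern that OpenAI Gym popularized can be sketched in a few lines (the tiny corridor task and tabular Q-learner below are illustrative inventions, not Gym code): the agent interacts with the world only through a `reset`/`step` interface, so the same training loop can be pointed at any task the environment presents.

```python
import random

class CorridorEnv:
    """Toy environment: walk right from cell 0 to reach cell `goal`."""
    def __init__(self, goal=4):
        self.goal = goal
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):                      # 0 = left, 1 = right
        self.pos = max(0, min(self.goal, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.goal
        return self.pos, (1.0 if done else 0.0), done

def greedy(q, s):
    # Break ties toward "right" so an untrained agent still moves forward.
    return max((1, 0), key=lambda a: q.get((s, a), 0.0))

def q_learn(env, episodes=300, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: the loop knows nothing about the task itself."""
    rng, q = random.Random(seed), {}
    for _ in range(episodes):
        s, done, steps = env.reset(), False, 0
        while not done and steps < 100:
            # Epsilon-greedy exploration.
            a = rng.randrange(2) if rng.random() < eps else greedy(q, s)
            s2, r, done = env.step(a)
            future = 0.0 if done else gamma * max(q.get((s2, b), 0.0) for b in (0, 1))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + future - q.get((s, a), 0.0))
            s, steps = s2, steps + 1
    return q
```

Because the learner sees only states, rewards and a `step` function, swapping in a harder environment requires no change to the training loop, which is exactly the leverage the Vinograd-style setup is after.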
The Road from ANI to AGI is hard
Exponential growth in computational power and ongoing research into brain-computer interfaces could fuel the development of AGI unexpectedly. The research community believes current models for getting to AGI involve the AI reaching it through self-improvement; such gradual leaps could help it soar in intelligence and even reach a superintelligent level soon. According to futurist Ray Kurzweil, artificial superintelligence is just around the corner and we are on a steady march towards the Singularity.