This is the age of AI – experts and technologists can’t get enough of the phrase. AI dominates the news cycle, with endless talk about its potential impact and about researchers hard at work to usher in super-intelligent machines.
But amidst all this hubbub, the most important questions about the biggest technological transformation headed our way go missing: what will happen once machines outsmart us at every task? What are the safety issues surrounding AI, and its near-term and long-term risks? Could AI leapfrog humans into a more advanced state? Do we even know how to build human-level AI? And given that computers are still far from outsmarting us, why so much talk and marketing overload about it?
In the words of Oxford philosopher Nick Bostrom, AI is the “world’s greatest opportunity, and its greatest threat”. The author of the controversial book Superintelligence and an authority on AI has declared that machine intelligence holds great promise, but that in our quest to hit the fast-forward button, it is important to maximise the chances of a good outcome. Bostrom, also the director of Oxford’s Future of Humanity Institute and often portrayed as a doomsayer, stresses that it is important to “build up the technology and also understand the science of how to predict and control advanced artificial agents”.
Physicist Max Tegmark Discusses The Paradox of AI
Prof Max Tegmark – MIT physicist, president of the Future of Life Institute, a Boston-based research organisation that studies global risk with a focus on AI, and author of Life 3.0: Being Human in the Age of Artificial Intelligence – has an interesting take on the subject. Instead of shying away from arguments about an AI apocalypse, Tegmark is on a mission to galvanise researchers into addressing the important questions around making AI more robust and transparent.
Given that global technology companies are on a path to build powerful AI systems, a thought should also be spared for where these advancements would lead us. According to Tegmark, getting closer to human-level AI would open a Pandora’s box, and striving for ever more intelligent AI deserves hard thinking.
Based on the views of futurists and AI researchers, Analytics India Magazine discusses the societal implications and ultimate impact of AGI:
1) Align the goals of AI with ours before it becomes super-intelligent: It is a point Tegmark stresses consistently; he fears what will happen if strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. Should that come to pass, artificial intelligence could unintentionally cause great harm. Research into using AI safely could pave the way to enjoying the benefits of AI while avoiding the pitfalls.
Can neuro-evolution be a solution? So how can one rule out the misuse of AI? Michigan State University AI researcher Arend Hintze has written about a novel approach he is developing, dubbed neuro-evolution. The approach entails creating a virtual environment in which digital creatures and their brains evolve to perform complex tasks. Based on performance, researchers can weed out errors and improve the cognitive abilities of the machines with each generation.
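The idea can be sketched in a few lines of Python. This is a minimal illustration of the neuro-evolution loop, not Hintze’s actual system: each digital “creature” is a tiny neural network whose weights form its genome, fitness is measured on the classic XOR task, and each generation the best performers survive while mutated copies of them replace the rest – poor solutions are bred out over time. All names and parameters here (population size, mutation strength, network shape) are illustrative assumptions.

```python
import math
import random

INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]
TARGETS = [0, 1, 1, 0]  # XOR
GENOME_SIZE = 9  # 2x2 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

def forward(genome, x1, x2):
    """Run the tiny 2-2-1 network encoded by the genome."""
    h1 = sigmoid(genome[0] * x1 + genome[1] * x2 + genome[2])
    h2 = sigmoid(genome[3] * x1 + genome[4] * x2 + genome[5])
    return sigmoid(genome[6] * h1 + genome[7] * h2 + genome[8])

def fitness(genome):
    """Negative squared error on the task: higher is better."""
    return -sum((forward(genome, *inp) - t) ** 2 for inp, t in zip(INPUTS, TARGETS))

def evolve(pop_size=100, generations=600, keep=20, sigma=0.5, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(GENOME_SIZE)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:keep]  # selection: keep the fittest creatures
        # Refill the population with mutated copies of the survivors.
        pop = parents + [
            [w + rng.gauss(0, sigma) for w in rng.choice(parents)]
            for _ in range(pop_size - keep)
        ]
    return max(pop, key=fitness)

best = evolve()
predictions = [round(forward(best, *inp)) for inp in INPUTS]
```

No gradients are computed anywhere: selection and mutation alone shape the networks, which is what lets researchers observe and cull undesirable behaviour generation by generation.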
2) Avoiding the fear of the unforeseen in AI research: Hintze has also written that it is fear of uncertainty that plagues the scientific community, and he worries AI researchers could be falling into the same trap: the latest findings from the cognitive sciences are turned into an algorithm and bolted onto an already existing system. In other words, when researchers try to build an AI without first understanding intelligence or cognition, the consequences could be far-reaching.
While systems like AlphaGo and IBM Watson achieve impressive feats, they do not have world-changing consequences, notes Hintze. But as AI systems grow more sophisticated, we will assign them more responsibility, and with it the risk of serious consequences rises.
3) Preventing AI from becoming a potential risk: What risks does AI pose to humanity, beyond automating jobs? What would happen when strong AI permeates every aspect of life and, in the process, destroys something vital to human existence? Hintze poses a pertinent question in his article: how can humans justify their existence in the face of superhuman intelligence?
Part of the problem could be that AI researchers haven’t come up with a “clear idea of what we want AI to do or become”, notes Hintze. This, the Michigan researcher asserts, could be because “we don’t know what AI is capable of”. As researchers, we need to decide what the desired outcome of advanced AI should be.
4) Humanizing AI with human ethics: With robots already working in manufacturing facilities and busing tables, and sophisticated software dishing out investment advice, automation will soon push people out of more jobs. The day is not far off when AI will perform many cognitive tasks that we thought only humans could do. Given this scenario, how can one develop a transparent and trustworthy AI? Hintze argues this could be achieved by building human ethics, such as altruism and kindness, into the AI system. By setting up a virtual environment, we can select for machines that demonstrate kindness, honesty and empathy, notes the researcher.
5) Hanson robot Sophia is a step towards building human-level AI: With the debut of Sophia, the Hanson robot granted citizenship by Saudi Arabia, we hardly need more demonstration that AI could someday rival human intelligence. Sophia is a cue that claims of superhuman AI could become real sooner than expected, and the robot has sparked a conversation in political circles about robot rights vis-à-vis human rights. For all its human-likeness, the robot has also raised questions about how human ideologies could come into conflict in an AI future. “Progress in AI has been faster over the past two years than most people expected,” says Bostrom.
6) AI no longer a marketing overload: Bostrom recalls that when his book Superintelligence was published, he predicted it would take another decade for AI to beat a top human player at Go. DeepMind’s AlphaGo accomplished the feat in just one year. This goes to show that progress in AI has been faster than most researchers expected.
Today, AI has crossed from hype to reality: investment in AI technology is at an all-time high, with large-scale adoption across all sectors. Next-gen techniques such as deep learning are showing great promise in areas such as image and face recognition, and a robust ecosystem is in place that has lowered the adoption barrier. IT giants like Google, Amazon, Facebook and Microsoft have open-sourced their AI tools, provided cloud-based platforms and forged tech partnerships to fuel AI adoption. With consumer awareness of AI at an all-time high, AI is no longer just marketing buzz; it delivers real value in a wide range of areas.
According to the World Economic Forum, we are in the “third stage of the evolutionary process”. This phase could prove disruptive, bringing the closest interaction yet between man and machine. The big push would come in another 20 years, when mind–machine interaction could usher in a more productive economy. Researchers and futurists are not hostile to the development of Human-Level Machine Intelligence (HLMI), but they are wary of the ethical issues surrounding AI and are hard at work devising strategies to address the AI control problem. The question is how best to prepare for this wave of innovation, and what steps should be taken to regulate superhuman intelligence.