Stephen Hawking, Elon Musk, Steve Wozniak and 150 others recently signed an open letter calling for a ban on the application of artificial intelligence (AI) to advanced weapons systems.
Hawking says the potential threat from artificial intelligence isn’t just a far-off “Terminator”-style nightmare. He’s already pointing to signs that AI is going down the wrong track.
“Governments seem to be engaged in an AI arms race, designing planes and weapons with intelligent technologies. The funding for projects directly beneficial to the human race, such as improved medical screening, seems a somewhat lower priority,” Hawking said.
Is there really an AI arms race?
Artificially intelligent systems continue to develop rapidly. Self-driving cars are poised to dominate our roads; smartphones are beginning to answer our queries and manage our schedules in real time; robots are getting better at picking themselves up when they fall over. It seems obvious that these technologies will only benefit humans. But then, every dystopian sci-fi story begins that way.
Having said that, there are two sides to the story. Assuming that Siri or Cortana will turn into the murderous HAL from 2001: A Space Odyssey is one extreme; supposing that any AI threat to mankind is decades away and needs no intervention now is the other.
A recent survey of leading AI researchers by TechEmergence catalogued concerns about the security dangers of AI in far more realistic terms. Respondents suggested that within a 20-year timeframe, financial systems could melt down as algorithms begin to interact unexpectedly. They also outlined the potential for AI to help malicious actors optimise biotechnological weapons.
However, unlike previous autonomous weapons, such as landmines, which were indiscriminate in their targeting, smart AI weapons might limit the potential for deaths of soldiers and civilians alike.
Yet when groundbreaking weapons technology is no longer confined to a few large militaries, non-proliferation efforts become much more difficult.
The scariest aspect of the Cold War was the nuclear arms race. At its peak, the US and the Soviet Union held over 70,000 nuclear weapons between them, only a fraction of which, if used, could have killed every person on Earth.
As the race to create increasingly powerful artificial intelligence accelerates, and as governments continue to test AI capabilities in weapons, many experts have started to worry that an equally terrifying AI arms race may already be under way.
In fact, at the end of 2015, the Pentagon requested $12-$15 billion for AI and autonomous weaponry for the 2017 budget, and the Deputy Defence Secretary at the time, Robert Work, admitted that he wanted "our competitors to wonder what's behind the black curtain." Work also said that the new technologies were "aimed at ensuring a continued military edge over China and Russia," as quoted by the Elon Musk-backed Future of Life Institute.
The defence industry is gradually shifting towards integrating AI into the robots it builds for military applications. For instance, many militaries around the world have deployed unmanned autonomous vehicles for reconnaissance (such as detecting anti-ship mines in littoral waters), for monitoring coastal waters for adversaries (like pirate ships), and for precision air strikes on evasive targets.
According to reports, the maker of the famous AK-47 rifle is building “a range of products based on neural networks,” including a “fully automated combat module” that can identify and shoot at its targets. It’s the latest illustration of how the U.S. and Russia differ as they develop artificial intelligence and robotics for warfare.
China, too, is eyeing a high level of artificial intelligence and automation for its next generation of cruise missiles, reports suggest.
It is not just the U.S., Russia and China that are developing AI for defence; India is not lagging behind either.
The Centre for Artificial Intelligence and Robotics (CAIR) has been working on a project to develop a Multi Agent Robotics Framework (MARF), which will equip India's armed forces with an array of robots. The AI-powered multi-layered architecture will support a multitude of military applications and enable collaboration among a team of robots already built for the Indian Army, including the Wheeled Robot with Passive Suspension, the Snake Robot, the Legged Robot, the Wall-Climbing Robot, and the Robot Sentry.
However, the robotics race is currently causing a massive brain drain from militaries into the commercial world, with the most talented minds being drawn to the private sector. Google's AI budget alone would be the envy of any military research lab.
Sooner or later, it will become trivially easy for organized criminal gangs or terrorist groups to construct devices such as assassination drones. Indeed, it is likely that given time, any AI capability can be weaponized.
What are the concerns?
Non-proliferation challenges: Prominent scholars including Stuart Russell have issued a call for action to avoid “potential pitfalls” in the development of AI that has been backed by leading technologists including Elon Musk, Steve Wozniak and Bill Gates.
One high-profile pitfall could be “lethal autonomous weapons systems” (LAWS) or “killer robots”.
The U.N. Human Rights Council has called for a moratorium on the further development of LAWS, while other activist groups and campaigns have advocated a full ban, comparing LAWS with chemical and biological weapons, whose use is widely deemed unacceptable.
Control: Is it man vs. machine, or man with machine? Can AI, once fully developed, be controlled? It is too early for AI's creators to offer reassurance; but thinking it is too early to even contemplate the question would be ignorance.
Hacking: Once developed, will AI systems not be vulnerable to hacking? While we cannot overlook the fact that the benefits of AI far exceed the potential risks involved, developers should work on systems that reduce those risks.
Targeting: Should it be compulsory for humans to always make the final decision with AI in the picture? Are we really ready for a fully autonomous system? Standards could be established that specify the required certainty and the specific scenarios when an AI would be allowed to proceed without human intervention. It may also be that an AI equipped with only non-lethal weapons can achieve nearly all the benefits with sufficiently reduced risk.
Mistakes: In all probability, AI weapons will make mistakes. But so, most certainly, will humans. A well-designed and well-tested machine is almost always more reliable than a human, and AI weapons systems can be held to strict standards of design and testing. Indeed, this should be a priority in the development of AI systems.
Liability: Assuming there will be mistakes, the AI itself will not be liable. So who is? If the autonomous vehicles industry is any indication, companies designing AI may be willing to accept liability but their motivations may not align perfectly with those of society as a whole.
The way forward
Many AI applications have huge potential to make human life better, and holding back their development is undesirable and possibly unworkable. Moreover, a look at the research being carried out on AI shows that most projects are still in their infancy; restricting their development is hardly warranted yet.
But it does speak to the need for a more connected and coordinated multi-stakeholder effort to create norms, protocols, and mechanisms for the oversight and governance of AI.
There is minimal support among the world's governments for a full ban on the creation of killer robots, for the simple reason that LAWS are still a long way from becoming reality. For example, it would be impractical to prevent a terrorist group like ISIS from developing killer robots unless states can first be assured of understanding the technology themselves.
The core idea behind regulation is to maximise benefits while simultaneously minimising risks involved.
Above all, there is a need to recognise that humanity stands at a point, with innovations in AI outpacing evolution in norms, protocols and governance mechanisms. Regulation just has to make sure the outlandish, dystopian futures remain firmly in the realm of fiction.