As the field of application for artificial intelligence grows, so does its threat landscape. Recent advances in hardware and algorithms have enabled AI to surpass human accuracy on several tasks. However, this technological progress also poses unprecedented ethical challenges. Researchers and AI theorists believe that besides a clutch of economic benefits and global opportunities, AI also poses global risks that could exceed even those of nuclear technology.
As research progresses in this field, scientific risk analyses suggest that high-impact damages resulting from AI should be taken very seriously, even if the probability of their occurrence is low. As progress in AI research makes it possible to replace a swathe of human jobs with machines, it has also sparked fears about automation. Many economists predict that increasing automation could lead to a massive decrease in employment within the next couple of decades. Research indicates that automation will raise the global average living standard, but there is no guarantee that all people, or even the majority of people, will benefit from this.
In the wake of recent events such as crashes involving driverless cars and biased algorithms producing skewed results, researchers are mulling the possibility of developing AI standards to prevent these risks. Several questions arise here. For example, are current social systems prepared for a future in which the human workforce increasingly gives way to machines?
Let’s take a look at the risks posed by AI:
1) Risks posed by autonomous weapons: Autonomous weapons, which may or may not require a human in the loop to operate, have spawned an AI arms race of sorts. If they fall into the wrong hands, these weapons could easily cause mass casualties. Experts also believe an intense arms race could spark an AI war with fatal results. To avoid being thwarted by the enemy, such weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control in such situations.
2) Digital manipulation and risk to society: In a study published by 25 technical and public-policy researchers from Oxford, Yale and Cambridge, the authors sounded an alarm about the potential misuse of the technology, particularly fake videos created with deep learning. The prescient warning came to pass with DeepFake videos, in which celebrity faces were realistically superimposed on different bodies. While this has so far been largely confined to celebrity videos, the same technique could be applied to political propaganda, as Jack Clark, head of policy at OpenAI, has pointed out.
3) Lack of standards, as in the case of driverless cars: As autonomous driving technology evolves, policymakers are rushing to develop safety standards to regulate the industry. With regard to driverless cars, researchers published a paper called Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with AI and Autonomous Systems to promote transparency standards in the industry, with the aim of making it clear how an autonomous system arrived at a decision. In a similar vein, IEEE launched the Global Initiative on Ethical Considerations in AI and Autonomous Systems in 2016 to ensure that stakeholders involved in the design and development of autonomous and intelligent systems are empowered to prioritise ethical considerations, so that these technologies are advanced for the benefit of humanity.
4) Overselling AI systems: There was a time when IBM’s Watson was pitched as an AI system that could outdo the diagnostic skills of doctors. Though initially positioned as a welcome development, the project was shelved by Texas’s MD Anderson Cancer Centre after the vendor was unable to deliver on the complex pattern-recognition tasks required for cancer diagnosis.
5) Social inequality: MIT economics professor Erik Brynjolfsson has warned that social inequality could rise sharply in the face of rapid technological progress. Automating a large swathe of jobs could rob a large section of employees of their livelihoods. To counteract this development, Brynjolfsson suggests limiting certain jobs to humans only. Automation may also lead to a stagnation of incomes, which could sink below subsistence level.
According to well-known AI theorist Nick Bostrom, there should be three principles governing the use and development of AI:
1) The functioning of an AI should be comprehensible
2) The outcome should be predictable, and it should materialise within a time frame that allows people to react
3) AI systems should be impervious to manipulation