There is no denying that AI has gone mainstream in recent years and is affecting lives like never before. Better availability of data and software, and better ways of stitching them together, have pushed artificial intelligence into everyday life. Problems that AI researchers struggled with for decades are now being solved, thanks to evolving technologies.
AI in our daily lives and its increasing affordability
While a truly artificially intelligent system that can learn entirely on its own is still a distant reality, the pseudo-AI technologies of today cannot be underestimated. From drones to voice-based queries on smartphones, AI is more pervasive than we realise. The machine learning phase of the digital age has introduced us to facial recognition, speech recognition, video games, chatbots, recommendation engines, smart homes, security surveillance, self-driving cars and much more. These systems can recognise what’s in a picture, pick out what matters in a document, carry out image analysis and more. For instance, Google Home is getting smart enough to distinguish between different users in a household, and the Alexas of the world are making life easier.
The best part (or the worst?) is that AI and machine learning are becoming increasingly affordable. As their popularity grows, these technologies become easier to replicate and cheaper to obtain: more available in every sense.
Because AI systems are efficient and scalable, they can be easily trained and deployed across many systems, and can complete certain tasks more quickly and cheaply than humans can. In many ways AI systems can exceed human capabilities, as games like chess and Go have shown, where AI has defeated human champions by a significant margin.
While attackers may find it costly to obtain or reproduce the hardware associated with AI systems, such as powerful computers or drones, it is much easier to gain access to software and relevant scientific findings. Many new AI algorithms can be reproduced in a matter of days or weeks, and the ready availability of online papers, often accompanied by source code, makes replication easier still. Even where replication is not a complete success, it can lead to partial diffusion. AI can therefore make progress in every domain at a much lower price.
Perils of affordable AI
While AI has been growing ever more popular, experts have raised concerns about how it can be used maliciously; some even suggest that terrorists and criminals could use AI and ML to further their agendas. A group of AI researchers across the US and Britain released a report arguing that as AI becomes more affordable, it could be put to malicious ends, and cautioned against spreading the technology without a good understanding of all its potential risks.
The study, titled ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation’ and carried out by researchers across institutions such as the Future of Humanity Institute, University of Oxford; the Centre for the Study of Existential Risk, University of Cambridge; OpenAI and others, outlines the potential risks that AI can pose. The term ‘malicious’ refers to practices intended to compromise the security of individuals, groups or society through breaches of data and security.
The study suggests that malicious use of AI could compromise digital security in several ways: by exploiting human vulnerabilities, for instance through speech synthesis for impersonation; by exploiting existing software vulnerabilities through automated hacking; or by exploiting the vulnerabilities of AI systems themselves through data poisoning.
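To see why data poisoning is a concern, consider a toy sketch (hypothetical data and model, not taken from the report): a trivial nearest-centroid classifier trained on two clean clusters, then retrained after an attacker injects a batch of mislabelled points that drag one class centroid away.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: two well-separated 2-D Gaussian clusters.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

def fit_centroids(X, y):
    # Learn one centroid per class; predict by nearest centroid.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

clean_acc = (predict(fit_centroids(X, y), X) == y).mean()

# Poisoning: the attacker injects 100 crafted points, far from both
# clusters but labelled class 0, dragging that class's centroid away.
X_poison = np.vstack([X, np.full((100, 2), 8.0)])
y_poison = np.concatenate([y, np.zeros(100, dtype=int)])

poisoned_acc = (predict(fit_centroids(X_poison, y_poison), X) == y).mean()
print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

On this toy setup the poisoned model misclassifies a sizeable share of one class while the clean model is near-perfect, which is the essence of the attack: the victim never touches the model itself, only the data it learns from.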
It also cautions technophiles about threats to physical security, such as attacks on automated physical systems like driverless cars. The data and algorithms behind a self-driving system could be altered, interfering with the way an autonomous car is supposed to work and causing serious harm, in the worst case a crash.
Similar threats apply to remotely operated drones, where a miscreant could deploy a swarm of thousands of micro-drones to gather unwanted information. For instance, a drone manufactured by a company called Skydio uses components available to anyone and is quite inexpensive. Controlled through an app, it can be used to follow someone or track a particular event, compromising security even in the defence sector.
AI advances could also take inappropriate advantage of human behaviour, moods and beliefs based on the available data. It is difficult to predict what AI will learn from the large amounts of data it is exposed to, and such systems can be vulnerable to manipulation: a computer vision algorithm, for instance, can be fooled into seeing things that are not there. If a computer vision system can be fooled, miscreants can likewise tamper with security cameras or compromise driverless cars. Deepfakes have become quite rampant, letting anyone’s head be grafted onto a pornographic video or words be put into the mouths of famous personalities. Machine-learning-generated fake audio and video are often difficult to distinguish from the real thing.
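The "fooled into seeing things that are not there" point can be illustrated with a minimal sketch, assuming a linear classifier as a stand-in for a real vision model (hypothetical weights and "image", not a production attack): an adversary nudges every pixel by a tiny, gradient-guided amount, and the predicted label flips even though the image barely changes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear classifier standing in for a vision model:
# label = sign(w . x), where x is a flattened 1000-"pixel" image.
w = rng.normal(0, 1, 1000)
x = rng.normal(0, 1, 1000)
if w @ x < 0:
    x = -x                      # make sure the clean image is labelled +1

score = w @ x

# FGSM-style perturbation for a linear model: push every pixel slightly
# against the sign of the gradient (which here is just w). eps is chosen
# just large enough to flip the score, and stays small next to the
# pixel scale (std = 1).
eps = 1.1 * abs(score) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print(f"clean score: {score:+.2f} -> label +1")
print(f"adversarial score: {w @ x_adv:+.2f} -> label -1")
print(f"per-pixel change: {eps:.4f}")
```

The per-pixel change is a small fraction of the pixel scale, yet the classification flips; against deep networks the same idea (tiny, structured perturbations) is what makes attacks on cameras and driverless-car perception plausible.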
Miles Brundage, one of the report’s primary authors, says, “This becomes a problem as these systems are widely deployed.” He added that it is something we need to get ahead of.
How can it be prevented?
Artificial intelligence is so deeply woven into today’s technology that it is almost impossible to stop using it altogether. But while the threats researchers have identified are real, there are ways to negate them, or at least to reduce them.
Foremost is for policymakers to collaborate closely with technical researchers to investigate, prevent and mitigate potential malicious uses of AI. Misuse-related considerations should be weighed before any research is carried out: it is essential to foresee harmful implications and to set guidelines and measures to counter those threats.
The researchers also suggest identifying and introducing best practices in AI-related research areas, bringing in mature methods from fields such as computer security: data protection, and restricted, regulated use. It is important to identify the crux of the problem and take measures to overcome it; learning from the cybersecurity community may be the best solution.
Lastly, it is important to promote a culture of responsibility. AI researchers are in a critical position to shape the security landscape of an AI-enabled world, and an ethical understanding is key. With guided AI research and development, the world will be a better place to live, even with the much-feared technology that is artificial intelligence.