It is not the first time Google has waded into the territory of artificial intelligence ethics. In October last year, DeepMind, the London-based Google subsidiary, launched a research unit called DeepMind Ethics & Society to complement its work in AI and its applications, tasking the unit with exploring and understanding the real-world impacts of the rapidly advancing technology.
But now, following the recent backlash over the use of its drone technology by the US Department of Defence, Google has framed a set of ethical guidelines for the development of AI. According to news reports, the Pentagon had been using Google’s vision technology to help drones interpret objects on the ground for better targeting. Reportedly, more than 3,000 Google employees signed a letter to CEO Sundar Pichai demanding that the company shelve the US Defence Department project, their chief objection being the misuse of AI technology to analyse drone footage and target people. In the face of the internal backlash and sustained criticism, the company let go of the million-dollar contract.
Now, in an attempt to embrace transparency fully, Google is firming up those guidelines with the publication of a new framework designed to shape its advances in the field.
Ethical Use of AI Gains Traction
But it wasn’t just the use of drone technology that came under the scanner. American consumer rights advocate John Simpson reportedly also criticised Alphabet’s driverless car unit Waymo for failing to spell out the ethical implications of any harm caused by its cars. Google’s algorithms, too, have often come under fire for producing biased results owing to inaccurate data.
Google has considered such guardrails before. During the acquisition of DeepMind, the company mulled setting up an ethical review board to guide technological development along those lines. In fact, former Google chief Eric Schmidt has long talked about AI as a tool for augmenting humanity, dismissing naysayers such as Elon Musk for fear-mongering and misleading public opinion on AI.
How These Guidelines Affect The Industry
Defence contracts bring in a huge amount of money for industry giants, and Google’s withdrawal leaves the space open for other players like Amazon and Microsoft. Weaponised AI has been a burning topic, and think tanks across the globe have discussed the implications of tech companies investing in defence capabilities. The Verge quoted Google Cloud chief scientist Fei-Fei Li as advising colleagues to avoid any mention of weaponised AI at all costs. Google, for its part, reiterated that the technology was being used for non-offensive purposes only. One of the guidelines reported by media houses is a ban on the use of AI in weaponry.
Will Tech Leaders Part Ways With Defence Bodies?
Google CEO Sundar Pichai clarified in a blog post, “We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organisations and keep service members and civilians safe.”
A ban on killer machines is common rhetoric these days, but if a leading industry player takes up the clarion call, will others follow suit? In our opinion, these industry guidelines on the ethical use of AI could lead to a parting of ways between defence departments and industry players. Does this mean international corporations will no longer work with defence organisations? Not necessarily: a sizeable number of defence use cases fall under the bracket of civil, non-combative applications. While work on weaponised AI may be curtailed, thanks to the Google-Pentagon breakup, AI will still find use in national security applications of a non-offensive nature.
As military AI gains traction, the risk posed by super-intelligent machines grows, and government bodies are forming councils to formulate policy on the ethics of using AI in military applications.
For example, the UK government recently published a report, drawing on feedback from 200 experts, that called for “a shared ethical AI framework” and provided clarity on how the technology can best be used for society and individuals. In another case, the Montreal Declaration on Responsible AI is trying to stimulate discussion on ethical guidelines, noting that AI should ultimately promote the well-being of all sentient creatures. Canada’s Treasury Board Secretariat, meanwhile, is looking at the responsible use of AI in government.
Pichai puts it aptly: “While this is how we’re choosing to approach AI, we understand there is room for many voices in this conversation. As AI technologies progress, we’ll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will continue to share what we’ve learned to improve AI technologies and practices.”