What is conscience? The Oxford English Dictionary defines the philosophers' favourite locution as "the part of your mind that tells you whether your actions are right or wrong." While this article isn't intended to be an English lesson, the term sits at the very heart of an ongoing debate in the world of Artificial Intelligence.
Whenever the subject of granting rights to Artificial Intelligence-powered robots comes up, the community is almost always divided. Online discussion websites and threads tend to split into two camps:
- Why should AI-powered beings, which are made to ease humanity's burden, be denied rights?
- How can man-made creations, which are not part of the evolutionary process, have a conscience and thus need rights?
Here we shall dissect these questions and weigh the pros and cons of granting robots rights, as well as an opportunity to justify their actions.
The Sophia Quandary:
The now famous humanoid robot Sophia, created by the Hong Kong-based company Hanson Robotics, was granted citizenship in Saudi Arabia in October 2017. But many people were offended, even outraged, because the robot now had more rights than women in the country believed to be the birthplace of Islam.
But technically, if a person (or creation) is a “citizen” of a particular country, won’t he/she have rights there? The right to free will, the right to life, and maybe even the right to vote?
Benjamin Kuipers, professor of computer science and engineering at the University of Michigan, had a unique take on the predicament:
“A human being is a unique and irreplaceable individual with a finite lifespan. Robots (and other AIs) are computational systems, and can be backed up, stored, retrieved, or duplicated, even into new hardware. A robot is neither unique nor irreplaceable. Even if robots reach a level of cognitive capability (including self-awareness and consciousness) equal to humans, it is not at all clear what this means for the ‘rights’ of such ‘persons’.
We already face, but mostly avoid, questions like these about the rights and responsibilities of corporations. A well-known problem with corporate ‘personhood’ is that it is used to deflect responsibility for misdeeds from individual humans to the corporation.”
But Professor Kuipers' statement doesn't account for the very human tendency to anthropomorphize robots, shaped by popular culture and an inherent urge to relate and to protect.
Clouding The Judgement Without Context:
The word 'right' is flung about in this debate without much thought, because one of the key things lacking in the discourse on robot rights is context.
Robots, or any AI-powered beings for that matter, are made with the intention of helping humans in one way or another. For example, if, in a country like India, robots are built to clean sewers, a job that violates a host of human rights in one go, would it be immoral to use them? On the other hand, what if a baby-shaped robot built to help expecting mothers is made to clean the same sewer? Would that be OK?
MIT Media Lab researcher and robot ethics expert Kate Darling argued in her paper 'Extending Legal Rights to Social Robots' that one of the key reasons to talk about rights for AI-powered beings is to "protect societal values".
She uses the example of parents who tell their child not to kick a robotic pet. Of course, one can assume it's just a toy and that the parents don't want to pay for another one, but kicking it also reinforces bad behaviour in the child. A kid who kicks a robot dog might be more likely to kick a real dog or another kid. Nobody wants to perpetuate destruction or violence, regardless of who or what is on the receiving end.
Professor Hussein A Abbass of the University of New South Wales, Canberra, also has an interesting take on the subject:
“Should robots be given rights? Yes. Humanity has obligations toward our ecosystem and social system. Robots will be part of both systems. We are morally obliged to protect them, design them to protect themselves against misuse, and to be morally harmonised with humanity. There is a whole stack of rights they should be given, here are two: The right to be protected by our legal and ethical system, and the right to be designed to be trustworthy; that is, technologically fit-for-purpose and cognitively and socially compatible (safe, ethically and legally aware, etc.)”
Is The Finger To Blame For Pulling The Trigger?
Recently, a short film called Slaughterbots created waves on social media with its dystopian depiction of AI-based weapons. It featured tiny killer drones, not unlike the automated bees in a Black Mirror episode. The film revolves around the concept of thwarting "bad people" at the source, that is, on social media, and how things pan out when AI makes decisions instead of humans.
But this raises the moral question of giving robots (or the drones in the example above) a right to justify their actions. Are the robots to blame for the killings, for using their 'intelligence' to make decisions? Or the humans who programmed them that way?
Another relatable example is that of a self-driving car presented with the dilemma of swerving into a crowd of 10 people to save its owner, or crashing into a wall to save the crowd. The quandary is not only ethical but also commercial: would you buy a car programmed to kill you under certain circumstances?
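The discomfort becomes vivid once the trade-off is written down as code. The sketch below is purely hypothetical (the function and its names are illustrative, not taken from any real autonomous-driving system) and shows how a crude utilitarian rule would resolve the choice, which is exactly why the commercial question bites:

```python
# Hypothetical illustration only: a crude "utilitarian" policy for the
# self-driving car dilemma described above. No real vehicle works this way.

def choose_action(occupants: int, pedestrians: int) -> str:
    """Pick whichever action costs fewer lives."""
    # Swerving into the crowd sacrifices the pedestrians;
    # crashing into the wall sacrifices the occupants.
    if pedestrians > occupants:
        return "crash_into_wall"  # i.e. the car is programmed to kill its owner
    return "swerve_into_crowd"

# The scenario from the text: one owner versus a crowd of 10 people.
print(choose_action(occupants=1, pedestrians=10))  # prints "crash_into_wall"
```

Spelled out this way, the "right to justify its actions" stops being abstract: the justification is a one-line comparison, written in advance by a human programmer.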