Researchers across the globe are working towards artificially intelligent systems that can behave in an ethically and morally acceptable manner. Morality, the capacity to distinguish good from bad, is a defining human trait that researchers are now eager to infuse into machines.
But why are humans obsessing over it? Is that even a true human trait? Hasn’t history shown us that humans are capable of things worse than any AI could possibly do?
The Moral Machine
Concerns over morality often arise in discussions of AI applications such as self-driving cars. Who dies in a crash? Should the car protect its passengers or passers-by? The Moral Machine, an initiative by the Massachusetts Institute of Technology (MIT), gathers human perspectives on moral decisions made by machine intelligence. As part of this initiative, participants give their opinions on what AI in cars should do when confronted with a moral dilemma.
Some of the common questions asked in this initiative to ‘crowdsource’ morality were:
- Should the self-driving car run down a pair of joggers instead of a pair of children?
- Should it hit a concrete wall to save a pregnant woman or a child?
- Should it put a passenger’s life at risk in order to save another human?
The researchers then created an AI based on this data, teaching it the most ‘predictably’ moral thing a human could do. The initiative was led by Carnegie Mellon assistant professor Ariel Procaccia in collaboration with Iyad Rahwan, one of MIT’s Moral Machine researchers, who designed it to evaluate the various moral situations an AI can encounter.
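At its simplest, ‘crowdsourcing’ morality means aggregating many individual judgements on the same dilemma into a single verdict. The toy sketch below illustrates that idea with a plain majority vote; the option labels and vote data are invented for illustration and are not the Moral Machine’s actual format.

```python
from collections import Counter

def crowd_verdict(votes):
    """Return the majority choice among crowdsourced answers to one dilemma.

    `votes` is a list of option labels picked by participants
    (hypothetical labels, for illustration only).
    """
    counts = Counter(votes)
    choice, _ = counts.most_common(1)[0]
    return choice

# Hypothetical responses to one dilemma: protect passengers or pedestrians?
votes = ["protect_pedestrians", "protect_passengers",
         "protect_pedestrians", "protect_pedestrians"]
print(crowd_verdict(votes))  # -> protect_pedestrians
```

Real aggregation schemes, such as the voting-theoretic approach studied by Procaccia and colleagues, are considerably more sophisticated than a raw majority count, but the principle of turning many opinions into one decision is the same.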
Though it sounds like an interesting concept, how can the reliability of a machine built on crowdsourced morality be ensured? It could hardly be trusted with complex decisions such as those involving human lives. As experts note, deciding among hundreds of millions of scenario variations based on the views of a few million people can hardly be the best approach. Professor James Grimmelmann of Cornell Law School said, “Crowdsourced morality doesn’t make the AI ethical. It makes AI ethical or unethical in the same way that large numbers of people are ethical or unethical.”
In a similar effort, Germany released the world’s first ethical guidelines for the artificial intelligence of autonomous vehicles. Developed by the Ethics Commission at the German Ministry of Transport and Digital Infrastructure, the guidelines stated that self-driving cars must prioritise human lives over animals, whilst also restricting them from making decisions based on age, gender and disability.
Why Humans Are Obsessed With ‘Moral’ AI
In an earlier survey carried out by MIT, many participants agreed that a self-driving car should sacrifice its own passenger when faced with a calamity, yet admitted they would not want to ride in such a car themselves. This ambiguity raises questions about how ethical an AI system could actually be when humans’ own opinions are so disparate.
Morality is abstract by nature, while machines learn best from measurable metrics, so teaching morality to AI is next to impossible. In fact, considering instances such as the one mentioned above, it is questionable whether humans themselves share a sound understanding of morality that all of us can agree upon; ‘instinct’ or ‘gut feeling’ takes precedence in many cases. An AI player can excel at games with clear rules and boundaries by learning to optimise the score, and even demanding strategy games such as chess and Go have yielded, as Alphabet’s DeepMind showed by beating the best human players of Go. Real-life situations, however, offer no such well-defined objective to optimise, which makes them far more complex.
For example, teaching a machine to algorithmically overcome racial and gender biases, or designing an AI system with a precise conception of fairness, can be a daunting task. Remember Microsoft’s AI chatbot that learnt to be misogynistic and racist in less than a day? Teaching AI the nuances of being ethically and morally correct is definitely no cakewalk.
Can AI Be Moral?
If we assume that a perfect moral system exists, we could try to derive it by collecting massive amounts of data on human opinion and analysing it. If we could record what each person thinks is morally correct, and track how those opinions change and evolve over time and across generations, we would probably have enough input to train AI on these massive datasets to be, in that sense, perfectly moral.
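Concretely, such a pipeline would amount to supervised learning over opinion data: encode each scenario as features, label it with the crowd’s judgement, and predict the ‘moral’ answer for unseen scenarios. The nearest-neighbour sketch below is a minimal illustration; the feature encoding, labels, and data are all invented, not drawn from any real dataset.

```python
def hamming(a, b):
    """Count differing positions between two binary feature vectors."""
    return sum(x != y for x, y in zip(a, b))

def predict(scenario, labelled):
    """Return the label of the most similar labelled scenario."""
    return min(labelled, key=lambda item: hamming(scenario, item[0]))[1]

# Each hypothetical scenario is encoded as three binary features:
# (pedestrian_is_child, more_than_one_passenger, crossing_legally)
labelled = [
    ((1, 0, 1), "protect_pedestrian"),
    ((0, 1, 0), "protect_passengers"),
    ((1, 1, 1), "protect_pedestrian"),
]

print(predict((1, 0, 0), labelled))  # -> protect_pedestrian
```

The sketch also makes the article’s worry visible: the model can only ever echo the labels it was given, so biased or malicious opinion data propagates straight into its ‘morality’.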
Though this offers hope of building moral AI, a system that relies on human input is susceptible to human imperfections. Unsupervised data collection and analysis could in fact produce undesirable consequences and yield a system that represents the worst of humanity.
On A Concluding Note
Despite fears voiced by the likes of legendary scientist Stephen Hawking, who argued that once humans develop full AI it will take off on its own and redesign itself at an ever-increasing rate, humans continue to engage in conversations about the importance of programming morality into AI. Elon Musk has also warned, time and again, that AI may constitute a “fundamental risk to the existence of human civilisation”.
Though these fears seem reasonable, it cannot be denied that AI systems need a more ethical implementation, with the hope that engineers can imbue autonomous systems with a sense of ethics. It would only be fitting to have a moral AI that builds upon itself and improves its moral capabilities as it learns from previous experiences, just as humans do.