The race to become an AI superpower is intensifying, with every country seeking dominance in the sector through technological breakthroughs. Countries including the US, China, Japan, Singapore, Canada and France are making huge investments in AI research and development. But there is still no established set of guidelines and standards for the ethical research, design and use of AI.
The hot debate around the world right now is this: can AI make ethically sound decisions? Is there a need for regulation? If so, what kind?
Advances in artificial intelligence and machine learning have been heralded as a net benefit to all humankind. But assuming that AI will inherently do good for society overlooks the research and development needed for ethical, safe applications. Right now, there is no transparency about how data flows through these systems, and no certification of AI safety.
There are several challenges related to the data used by machine learning systems. The large datasets needed to train ML systems are expensive to collect or purchase, which excludes many companies, public bodies and civil society organisations from the machine learning market. Data may also be biased or error-ridden for classes of individuals living in rural areas of low-income countries, or for those who have opted out of sharing their data.
Even if machine learning algorithms are trained on good datasets, their design or deployment can encode discrimination: choosing the wrong model, building a model with inadvertently discriminatory features, lacking human oversight and involvement, deploying unpredictable and inscrutable systems, or allowing unchecked, intentional discrimination.
Additionally, engineers do not always fully understand how their own algorithms work. An irresponsibly designed algorithm, trained on data that does not represent the whole population, produces skewed results. In black-box algorithms, the inputs and outputs are visible, but even the engineers cannot trace how the system arrives at its decisions.
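The effect of skewed data or a skewed model can be made concrete with a standard fairness check: comparing selection rates between groups. Below is a minimal sketch using the widely cited "four-fifths rule" as a threshold; the hiring decisions, group labels and numbers are purely illustrative, not drawn from any real system.

```python
# Hypothetical decisions produced by an automated screening model.
# Each record: (group, selected). Data is illustrative only.
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False), ("B", False),
]

def selection_rate(records, group):
    """Fraction of applicants in `group` that the model selected."""
    outcomes = [selected for g, selected in records if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(records, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Under the common 'four-fifths rule', a ratio below 0.8
    is treated as evidence of adverse impact."""
    return selection_rate(records, protected) / selection_rate(records, reference)

ratio = disparate_impact_ratio(decisions, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.2 / 0.8 = 0.25, far below 0.8
```

A check like this only surfaces a symptom; it says nothing about *why* the model treats the groups differently, which is exactly the problem with inscrutable black-box systems.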
Where AI Has Gone Wrong In The Past
Recently, a group of DeepMind researchers found that artificial intelligence agents can act in an 'aggressive manner' when they sense they are about to lose out. They ran a series of tests to see how AI would react when faced with certain social dilemmas.
In 2015, Google Photos labelled two African American people as 'gorillas'. Brooklyn resident Jacky Alcine noticed that photos of him and a friend, both of whom are of African origin, were tagged under that label.
Similarly, Microsoft’s AI chatbot Tay began making racist, inflammatory and political statements within an hour of its launch.
There are also cases where bias is intentionally built into AI and machine learning algorithms, which can fundamentally affect people’s lives. If precautions are not taken now, such systems will have long-lasting consequences. For example, employers who want to avoid hiring women likely to become pregnant might employ machine learning systems to identify and filter out this subset of women.
Two multinational insurance companies operating in Mexico are already using machine learning to maximise their efficiency and profitability, with potential implications for the human right to fair access to adequate healthcare. Now imagine a scenario in which insurers use machine learning to mine data such as shopping history, recognise patterns associated with high-risk customers, and charge them more. In that case, the poorest and sickest people would be unable to afford access to health services.
A WEF report, which surveyed 745 leaders from government, business, academia, and non-governmental and international organisations, named robotics and AI as the emerging technologies with the greatest potential for negative consequences over the coming decade.
What Tech Companies Are Doing To Make AI More ‘Ethical’
In 2016, tech giants Google, Facebook, Amazon, IBM and Microsoft set up an industry-led non-profit consortium, the ‘Partnership on AI to Benefit People and Society’, to develop ethical standards for AI researchers in cooperation with academics and specialists in policy and ethics, and also to allay public fears about human-replacing technology. In 2017, other companies such as Accenture and McKinsey joined the alliance.
Tech companies are also taking individual action to build safeguards around their technology. Last year, Google-owned, London-based DeepMind set up its Ethics & Society unit, which conducts research across six “key themes”, including ‘privacy, transparency and fairness’ and ‘economic impact: inclusion and equality’.
Microsoft’s internal AI ethics board, dubbed AETHER, reviews matters such as new decision algorithms developed for the company’s in-cloud services. The board currently consists only of Microsoft employees, but the company hopes to bring in outside members in the future.
Countries Trying To Make AI Ethical
This year, the Bureau of Indian Standards formed a new committee for standardisation in artificial intelligence to focus on standardising projects which revolve around cybersecurity, legal and ethical issues. The projects can be from any sector — IT, technological mapping or leveraging AI for national missions, among others.
The UK’s House of Lords released a report aimed at keeping the robots in check. The report, entitled ‘AI in the UK: Ready, Willing and Able?’, called for the creation of an AI Council. It suggests that the UK government should sponsor more basic research into AI and convene a global summit in London next year to create a ‘common framework for the ethical development and deployment of AI systems’.
“AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse,” Chairman of the Committee, Lord Clement-Jones, said.
In Canada, the Montreal Declaration on Responsible AI is trying to stimulate discussion on ethical guidelines, noting that AI should ultimately promote the well-being of all sentient creatures. The Treasury Board Secretariat of Canada is looking at the responsible use of AI in government. Global Affairs Canada is also leading a multi-university collaboration on AI and human rights.
The European Economic and Social Committee called for a code of ethics to cover the development, deployment and use of artificial intelligence, to ensure AI remains compatible with the principles of human dignity, integrity, freedom, cultural and gender diversity, and human rights. “We need a human-in-command approach to AI, where machines remain machines and people retain control over these machines at all times,” said EESC rapporteur Catelijne Muller.
New York City launched a task force on automated decision-making, aiming to become a global leader in governing such systems.
Ethical AI is achievable, provided there is greater intervention from individuals, governments and companies. It is also important that humans are kept in the loop to catch factors that would otherwise be overlooked. Fairness and the dignity of the people affected should be prioritised in the architecture of a machine learning system and in its evaluation metrics. If machine learning systems are involved in decisions that affect individual rights, the company or developers must disclose this. The systems must be able to explain their decision-making in terms understandable to end users and reviewable by a competent human authority. Developers must provide visible avenues of redress for those affected by disparate impacts, and establish processes for the timely correction of any discriminatory outputs.
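What "an explanation understandable to end users" might look like in practice can be sketched for the simplest case: a linear scoring model, where each feature's contribution to the decision can be reported directly. The weights, threshold and applicant data below are hypothetical, invented purely for illustration.

```python
# Illustrative weights and approval threshold for a toy linear
# scoring model (all values are hypothetical).
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def score_with_explanation(applicant):
    """Return the decision together with each feature's contribution,
    so an end user or a human reviewer can see what drove the outcome."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    return {
        "approved": total >= THRESHOLD,
        "score": total,
        "contributions": contributions,  # the per-feature breakdown
    }

result = score_with_explanation({"income": 3.0, "debt": 1.0, "years_employed": 2.0})
print(result["approved"], round(result["score"], 2))
```

For complex models the breakdown is harder to obtain, but the principle is the same: the output must expose enough of the reasoning that a competent human authority can review it and an affected person can contest it.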