Artificial intelligence applications have spread widely across business domains. From the search for cancer cures to detecting climate change and simplifying industrial inspection, AI has had a tremendous impact. The bad news is that much of this progress is viewed only from a positive perspective, without considering the ethical aspects. This is a concern, since we still don't know what the AI phenomenon holds for the future of mankind.
With innovation largely driving technology, ethics is sometimes overlooked or ignored. Even major tech companies have realised this and are slowly bringing a code of ethics into the corporate picture. Now there is a combined effort from large corporations to bring AI ethics to fruition.
Transgression In AI
One of the biggest fears around AI is the possibility of it exceeding human capabilities. The prospect that AI could go beyond human abilities raises many ethical questions. At this point, AI systems may not be able to draw a line between right and wrong, and could thereafter become enigmatic contraptions. Capturing a comprehensive moral picture of such 'superior' AI systems is quite difficult.
For now, AI progress has not reached the level of superhuman ability. But as experts believe, our generation might witness such developments in the near future, and they may even pose a 'threat' to the human race.
Having A Moral Status
Another issue that puts ethics at the front is how morality gets embedded into the system in the first place. That task depends on humans rather than on AI programs figuring it out for themselves. Simply handing the power of moral judgement over to AI is a vague proposition.
Philosopher Nick Bostrom explains, “It is widely agreed that current AI systems have no moral status. We may change, copy, terminate, delete, or use computer programs as we please; at least as far as the programs themselves are concerned. The moral constraints to which we are subject in our dealings with contemporary AI systems are all grounded in our responsibilities to other beings, such as our fellow humans, not in any duties to the systems themselves.”
Organisations that develop or research AI systems can be assessed along these lines. After all, today's AI is progressing towards human-like capability. Whether these systems can exhibit a parallel of ethical behaviour should also be explored before they become mainstream in the tech ecosystem.
Companies Taking An Ethical Stand
As more companies adopt and implement AI technology, they have realised its repercussions in terms of human rights, privacy and ethics. Tech giants such as Google, Microsoft and IBM have established standards and codes with respect to ethics. Recently, even business software maker SAP brought in an 'ethics code' for AI research. In fact, as far back as 2016, the Partnership on Artificial Intelligence (PAI) was created by these tech giants to promote safe, ethical and transparent AI.
As you can see, there is a trend towards building in ethics here. However, blending ethics into AI at an organisational level requires compromise in certain areas. For example, how far a company is willing to be transparent about its AI technology remains to be seen, and there is no established measure that can evaluate this across a large set of companies. That is just one part of the problem: teaching morals to AI systems could also introduce biases, which is dangerous.
Therefore, the onus falls on AI research not only to deliver breakthroughs in machine learning, but also to shed more light on the ethical implications.
The Potential of Large Data
Humongous amounts of data are generated every day. Suppose AI systems become omnipresent: will they catch hold of every data point? This could be a liability. With most tech corporations placing data at their focal point, vulnerabilities and flaws could lead to disaster. Consider, for example, Facebook's data breach a few months ago. Had that data fallen into the hands of adversaries, the damage could have been even more devastating. Hypothetically, if AI systems had been in place in a scenario like this, they might have identified the breach without being able to contain it.
Even where companies strictly enforce an ethical code of conduct for AI, that code should cover every aspect of the technology. What I have described here is just the tip of the iceberg. Plenty of issues remain before AI can be seen as the equal of an 'impartial' human. If AI systems do become omnipresent technology in the near future (which they will), they face the mammoth task of being ethically responsible.