As private corporations with massive access to data grow more powerful, concerns about data privacy and ethics are gaining traction across the world. Governments are now being asked to what extent humans, and society at large, should trust decisions made by AI systems. In recent days, the debate has shifted from concerns about the labour market to the right form of AI governance and, more importantly, to responsibly generating data from people's behaviour while remaining accountable for the slew of autonomous software agents now in operation.
AI infringing on data privacy is nothing new – what rankles about the Facebook controversy is the AI-backed statistical profiling used to predict the behaviour of certain groups of people in order to achieve a particular electoral outcome. The scenario implicates private organizations and governments across the globe working in collusion to use data to tweak and remodel user behaviour. The incident shines a spotlight on a burning question: with whom does the responsibility for data privacy and data breaches lie, and how can governments make the role AI plays in decision-making more transparent?
Why does India need a Code of Ethics in AI & Data Science?
If AI causes harm, is there a possibility of redressal? While the Indian government is hard at work establishing alliances between academia, startups, industry players and government ministries to create jobs and raise public understanding of and trust in AI, it is also time the Government laid down the foundation for a Code of Ethics in AI and addressed the questions of harm, negligence and liability, which are largely left unanswered today. The government should also set up a platform for compensation and redressal in cases of algorithmic malfunction.
What are some of the core issues the Code of Ethics in AI should address?
1) Lay down guiding principles for the ethical treatment of data: Besides being a centre for redressal and establishing a system for liability and negligence, the committee should lay down guiding principles for ethical controls and a framework for the treatment of data, especially in the public sector. Given how governments tie up with major organizations on well-funded, data-led projects, the committee should set clear guidelines on the emergent risks of data use and on how to treat data ethically.
2) Understand the ethical implications of using public data: Data-intensive projects require access to public data, which poses a risk in the event of a breach. The committee should set guidelines on what data should be collected and how much access should be given to private organizations. It should also investigate the ethical implications of utilizing public data that could be manipulated or sold off by private companies.
3) Address the growing influence of data accumulation: One of the key questions the committee on the Code of Ethics in AI & Data Science should address is the growing influence of private tech companies that accumulate data and data-related expertise. Should data-intensive enterprises be restricted in their use and sharing of data with third-party players? How should enterprises – in the Indian context, for example, e-commerce companies and telecom players sitting on petabytes of user data – handle their own data? The committee should develop a framework for data-intensive organizations on this question.
4) Lay down guidelines on accountability and social responsibility: One of the key takeaways from the Facebook-Cambridge Analytica debacle is the question of who is responsible for a data breach. While Facebook is drawing much of the flak and public ire for selling data to third-party players, the social network giant is not solely responsible for the scandal. Data companies with ulterior motives are often fronted by research organizations that harvest data from major players. In this equation, should such companies also bear a share of the social responsibility?
5) Regulate areas where data should be used at scale: Is data a cure-all? Having data and using it at scale does not necessarily guarantee better-informed decisions. While policymaking in this space is still nascent, AI- or data-science-led approaches may not always yield the best results. Given the many risks associated with data-intensive practices, the committee should identify and regulate the areas where the use of data would bring tangible benefits.
6) Address bias in data: A recent paper on Opportunities and Challenges for AI in India highlighted how India's software industry is heavily gender-skewed at all levels (Lannon 2013). Hence, there is a real risk that the AI consumed by the entire population will be produced with a strong male bias. This gender imbalance can create undesirable long-term consequences and reinforce existing disparities in AI systems. The committee on the Code of Ethics in AI & Data Science could face a deep challenge in ruling out bias and discrimination in datasets.
Does India’s Aadhaar pose a similar challenge?
India is not without its challenges – the recent furore over Aadhaar highlights gaps in the biometric database that pose serious privacy concerns for every citizen. Even though UIDAI has repeatedly clamped down on reports about Aadhaar's privacy problems, internet campaigner Nikhil Pahwa has voiced concerns that the faulty design and implementation of India's unique biometric system can pose a serious threat to our security and data protection. Perhaps the government should start by settling the Aadhaar debate.