Artificial Intelligence (AI) has not only riled up tech visionaries such as Elon Musk and Stephen Hawking, but has also raised concerns among policymakers at the UN and in government about the rise of AI and how to regulate it. Regulating AI is one of the most hotly contested debates on the internet, with lawyers and technologists weighing a man-machine future and mulling over AI law. A few questions doing the rounds are how to make AI accountable for its own actions, how to regulate non-human behaviour and how to tackle “data-based monopolies of large enterprises”.
Recently, the UK Government launched an inquiry into artificial intelligence and its economic, social and ethical implications. Key questions included how to improve public understanding of AI and in which situations the opacity (“black boxing”) of AI systems should be acceptable, among others.
In the Indian context, the country already has the three key building blocks of AI development — data, algorithms and computing power. Advancements in the Indian IT industry have given the country a solid “base for an AI take-off”. Saurabh Kumar, Adjunct Professor at the National Institute of Advanced Studies, Bengaluru, wrote that India is making a good attempt to stay “ahead of the curve” in AI, robotics and nanotechnology, and to usher them in proactively.
AIM weighs in on why there should be a regulatory framework for future AI-led innovations:
Which jobs should be robotized
Major Indian tech companies are driving significant investment in AI development and R&D. The effects of automation can already be seen in manufacturing and the automotive sector, while the IT industry is reeling from retrenchment. In other words, AI systems have already been developed for commercial use. However, the reigning sentiment among industry experts is that the future will be Man + Machine rather than Man vs Machine, as AI systems will boost productivity.
Vision: Here’s where the robotization of jobs can begin. When we last spoke to Atul Jalan, Founder & CEO of Manthan Software Services, he pointed out that AI should be used to weed out low-level jobs in the Indian setting on grounds of humanity and parity. Hazardous jobs such as manual scavenging, clearing sewage pipes and handling hazardous material in hospitals can be effectively robotized. The Government, in partnership with private robotics players, can run pilot projects to gauge their efficacy and success.
Assess the near and long-term risks of AI systems
Businesses are relying more and more on AI systems for medical diagnosis, legal aid, assessing creditworthiness and dishing out insurance advice, and now for AI-controlled cars. In light of recent developments, the risks of AI decision-making should be weighed carefully. Let’s not forget the recent research on racist and biased AI systems. It was long believed that algorithm-led hiring could sidestep prejudices and lead to equal opportunities for men and women. However, research showed that it can have the opposite effect, with eligible candidates ruled out at the first screening.
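To see how such a screening system can go wrong, here is a minimal, hypothetical sketch: a naive screening rule “learned” from biased historical hiring data ends up ruling out equally qualified candidates. All names, traits and data are invented for illustration, not drawn from any real study or system.

```python
def learn_screen_rule(history):
    """Derive a naive screening rule: keep only traits that were
    frequently associated with past hires."""
    trait_hits = {}
    trait_counts = {}
    for candidate, hired in history:
        for trait in candidate:
            trait_counts[trait] = trait_counts.get(trait, 0) + 1
            if hired:
                trait_hits[trait] = trait_hits.get(trait, 0) + 1
    # A trait "passes" if more than half of past candidates with it were hired.
    return {t for t in trait_counts
            if trait_hits.get(t, 0) / trait_counts[t] > 0.5}

def screen(candidate, passing_traits):
    # A candidate passes only if all of their traits look like past hires.
    return candidate <= passing_traits

# Past hiring was biased: an equally qualified candidate from group B was
# not hired, so "group_b" never becomes a passing trait.
history = [
    ({"qualified", "group_a"}, True),
    ({"qualified", "group_a"}, True),
    ({"qualified", "group_a"}, True),
    ({"qualified", "group_b"}, False),
]

passing_traits = learn_screen_rule(history)
print(screen({"qualified", "group_a"}, passing_traits))  # True
print(screen({"qualified", "group_b"}, passing_traits))  # False
```

The model never sees the word “prejudice”; it simply replays the pattern baked into its training data, which is exactly why equally qualified candidates can be ruled out at the first screening.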
Vision: Government bodies feel the need for a study on the long-term risks of AI systems to ensure AI has a beneficial social and economic impact. They also need to ensure that an AI system’s goals are aligned with that framework.
AI education — nurturing a modern sensibility in society
Over the last year, AI has moved from the fringes to become a ‘new normal’. It is part of everyday life and everyday conversation, through interactions with Alexa and Google Home. For most consumers, Siri, the voice-based assistant, was their first brush with AI. Robots such as SoftBank’s Pepper and the Indian banking bot Lakshmi, launched by City Union Bank, serve as a first interface for customers. Robots for eldercare and snackbots have taken off in markets like the US and Canada.
Vision: As AI becomes more powerful and continues to evolve, one of the key points to consider is how people will react to robots, and what the practical and ethical considerations of mainstreaming robots are. Isaac Asimov laid down the “Three Laws of Robotics”; one of the laws is that no robot should harm a human being. One proposal is to hard-wire rules into bots so that they act ethically. Theorists note, however, that implementing an ethical framework in an AI system is very challenging when it comes to translating it into a computational definition.
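A toy sketch shows both the appeal and the limits of hard-wiring such a rule, in the spirit of Asimov’s first law. The `Action` type and the `predicted_harm_to_human` flag are invented for illustration; the hard part, as noted above, is that no real system can reduce “harm” to a simple boolean.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    predicted_harm_to_human: bool  # assumes harm can even be predicted

def first_law_filter(proposed_actions):
    """Veto any action the bot predicts would harm a human."""
    return [a for a in proposed_actions if not a.predicted_harm_to_human]

proposed = [
    Action("lift crate", False),
    Action("swing arm through walkway", True),
    Action("power down", False),
]

allowed = first_law_filter(proposed)
print([a.name for a in allowed])  # ['lift crate', 'power down']
```

The filter itself is trivial; the unsolved problem is computing `predicted_harm_to_human` reliably, which is exactly where translating ethics into a computational definition breaks down.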
Laying down the framework for cognitive robotics
Cognitive robotics, a popular, up-and-coming field, is about gaining a better understanding of human cognition. Working in conjunction with cognitive scientists and psychologists, AI researchers are building autonomous systems with human-like intelligence and reasoning skills. Through algorithms and autonomy models, scientists are developing AI systems that can perform tasks beyond the reach of humans. Case in point: the Mars Exploration Rover, which gathers data and sends it back to NASA, is an autonomous system.
While AI may seem intimidating, with dire warnings about threats to humanity, its mass-market arrival is inevitable. From detecting cancer to replacing human drivers, AI will soon become pervasive in everyday life. The warnings of technologists Elon Musk and Stephen Hawking should only sharpen the focus on a regulatory framework that can govern AI innovations and imbue ethics in AI systems. The first step has already been taken in this direction, with tech behemoths including Facebook, IBM and Google forming the Partnership on AI in 2016, which Apple later joined. AI should be viewed in the same vein as past technological revolutions that ultimately created more high-level jobs.