Of late, Microsoft has been flexing its AI muscle with big announcements, such as key acquisitions in the AI space (the two most recent being AI startups Agolo and Bonsai), and by snapping up talent. The AI strategy is driven by Microsoft CEO Satya Nadella’s vision, which calls for greater coordination and collaboration in AI. A rethinking is also on the cards, or so he proposes in his recent article for Slate. With AI becoming more pervasive, Nadella’s rethinking aligns with a general policy view that is gaining steam in light of recent AI blunders. I mentioned some of these instances of error in one of my previous articles, about the “Black Box” technique.
As part of this rethinking, Microsoft is playing up collaboration between machines and humans, and “amplified” intelligence, rather than simply debating the benefits and harms of AI. “The beauty of machines and humans working in tandem gets lost in the discussion about whether AI is a good thing or a bad thing,” writes Satya Nadella, CEO, Microsoft.
With tech giants like IBM, Google, and Facebook already sweeping the space, Microsoft too joined the AI bandwagon. It did so in 2015 by releasing Azure Machine Learning, a cloud-based analytics tool that is part of its Cortana Intelligence Suite.
Microsoft CEO’s 10 AI rules
Through his blog, Nadella laid out 10 essential rules for approaching AI. These rules mostly reflect the goals and principles an industry, or a society, should adopt when approaching AI. Nadella’s AI ruleset also contains a few “musts” for humans, which will help in planning, prioritizing, and cultivating the skills needed for the future.
Nadella’s rules for AI are distinguished from one another on fairly fine parameters. The first six rules on the list discuss what an ideal AI should contain or do. The remaining four stress a few important attributes we humans must cultivate.
Let’s have a look at the 10 rules proposed by Nadella to define AI:
Rule 1 – AI must be built to aid humanity and preserve our autonomy: As more autonomous machines are built, concerns about human autonomy will only grow. To protect human workers, collaborative robots should take on dangerous activities, such as mining.
Rule 2 – AI must reflect transparency: AI empowers machines to know about us; however, it’s equally important that humans understand these intelligent machines as well. In other words, humans must be aware of how the technology works and the rules that govern it. Moreover, it’s essential we have an understanding of how the technology analyzes results and what its impact is.
Rule 3 – AI must enhance efficiencies without destroying the dignity of people: The technology should preserve cultural commitments to drive diversity. This can only be possible with a broader, deeper, and more diverse engagement of populations in the design of these systems. Moreover, the tech industry shouldn’t dictate the values and virtues of this future.
Rule 4 – AI must be designed to address the need for intelligent privacy: Various sophisticated protections already exist in the market. They are designed to secure personal and group information in ways that earn trust.
Rule 5 – AI must reflect algorithmic accountability: Algorithmic accountability lets humans undo unintended harm. These technologies must be designed so that they can account for both expected and unexpected scenarios.
Rule 6 – AI must prevent bias: It’s equally important to ensure proper and representative research for AI. This helps prevent flawed heuristics from being used to discriminate.
Rule 7 – Need for empathy: This attribute could be considered critical to approaching AI, and is difficult to replicate in machines. Empathy will occupy a valuable spot in the human–A.I. world, helping us collaborate and build relationships, besides perceiving others’ thoughts and feelings.
Rule 8 – Need for education: Investment in AI education must increase, as it will be instrumental in creating and managing innovations. It will also help us achieve higher-level thinking and more equitable education outcomes. Developing the knowledge and skills needed to implement new technologies is usually a difficult social problem.
Rule 9 – Need for creativity: Creativity is one of the most coveted skills humans possess, and this isn’t expected to change much in the years to come. However, machines will continue to enrich and augment our creativity.
Rule 10 – Focus on judgment and accountability: Humanity has reached a point where it may be willing to accept a computer-generated diagnosis or legal decision. However, we still expect a human to be ultimately accountable for the outcomes.
Transformations in rules governing AI, over time
Even though we have reached a crucial point from which AI will only progress exponentially, the fear of these highly autonomous, advanced machines is far from gone. Back in 1942, the renowned science-fiction writer Isaac Asimov proposed three essential rules, also called the “Three Laws of Robotics,” to answer this concern. These rules stressed the importance of safe and responsible development.
For decades, these were the best-known rules of their kind. Then, more recently, the Future of Life Institute brought together researchers, engineers, programmers, roboticists, physicists, economists, philosophers, ethicists, and legal scholars to forge a comprehensive set of 23 core AI principles. This event marked a significant chapter in the history of Artificial Intelligence.
And now, through his blog, Nadella has introduced 10 new rules governing AI. His rulebook, as stated earlier, stresses the significance of human–machine collaboration rather than just the fears and benefits surrounding AI. As the AI landscape matures and expands drastically, it won’t be a surprise if more rules are introduced in the near future.
The fear against approaching ‘Singularity’
AI holds a lot of promise for the world we live in. Today, the technology is being used to treat diseases and to fight ignorance and poverty. The era of ‘Singularity’, when computer intelligence surpasses human intelligence, might soon arrive. But are we prepared for it?
Preparing for the ‘Singularity’ will demand a bold and ambitious approach. Researchers must proactively probe the landscape and note even the minutest details that might adversely affect humanity. The rules above are a way of showing us the right path for approaching AI research and development.