Autonomous experimentation sits at the centre of one of the most crucial debates about the social and ethical implications of AI – in other words, what happens when machines learn by experimenting on us. Back in 2014, Menlo Park social media giant Facebook sparked outrage when news broke about a news feed experiment designed to manipulate users’ emotions – treating users as “lab rats” – reportedly adjusting the news feed, the number of ads and photo sizes to develop a more engaging product.
Popular New York-based dating website OkCupid revealed in 2014 that it, too, experimented on human beings – by its own admission, “if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site”. The dating website came away with some interesting observations – people largely judged profiles by the picture alone, and it tested whether its match percentage was actually a good predictor for relationships. Search giant Google is likewise well known for running live experiments on a small set of users and tweaking its algorithm accordingly.
The research was spurred, in part, by web companies like Google, Facebook, Amazon and eBay, which frequently run experiments on small sets of users. Such user experiments have become a standard part of the toolkit large web companies use to improve interfaces, tweak algorithms and introduce new products.
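The simplest form of such a user experiment is an A/B test: users are randomly split between two interface variants and their responses compared. The sketch below is a minimal simulation of that idea – the conversion rates, user count and variant names are purely hypothetical.

```python
import random

def ab_test(users, conversion_a, conversion_b, seed=0):
    """Randomly assign users to variant A or B and compare conversion rates.

    conversion_a / conversion_b are the (hypothetical) true conversion
    probabilities of each interface variant.
    """
    rng = random.Random(seed)
    results = {"A": [0, 0], "B": [0, 0]}  # variant -> [conversions, impressions]
    for _ in range(users):
        variant = rng.choice(["A", "B"])           # random assignment
        p = conversion_a if variant == "A" else conversion_b
        results[variant][0] += rng.random() < p    # did this user convert?
        results[variant][1] += 1
    # observed conversion rate per variant
    return {v: conv / n for v, (conv, n) in results.items()}

rates = ab_test(10_000, conversion_a=0.10, conversion_b=0.12)
```

With enough users, the observed rates converge toward the true probabilities, which is exactly why large sites run these tests before rolling a change out to everyone.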
Are humans the new lab rats of AI research?
According to a highly cited 2016 Microsoft research paper, large-scale experimentation on users is not new. However, driven by advances in artificial intelligence, novel autonomous systems for experimentation are emerging that raise complex, unanswered questions for the field, noted Microsoft researchers Kate Crawford and Hanna Wallach and their team. Some of these questions are computational, while others concern the social and ethical implications of such systems.
Computer games have long been regarded as a testbed for developing and testing new techniques in AI – DeepMind’s AlphaGo, and now AlphaGo Zero, is a step in that direction. According to Peter Cowling, Professor of Computer Science at the University of York, and Sam Devlin, Research Fellow in Artificial Intelligence and Digital Games at the University of York, games have always delivered a source of tough problems – they feature well-defined rules and a clear target: to win.
Here are a few reasons why human input is increasingly used in AI research:
1.) To reason like humans, AI needs complex, real-world situations, which are far trickier than digital games. Which brings us to the question – are humans the next lab rats for AI? In a sense, our brain already serves as a model in computational neuroscience.
2.) Much of the research by AI’s recent pioneers Geoff Hinton, Yoshua Bengio and Yann LeCun is modelled after the human brain – the backpropagation algorithm is loosely brain-inspired, and Hinton has explored whether the brain itself back-propagates errors through a multilayer neural network. Now, with the tantalizing possibility of superintelligence inching closer, interest in neuroscience and brain augmentation has reached fever pitch.
3.) Techno-sociologist Zeynep Tufekci emphasized in her TED Talk that as machine intelligence is used for making day-to-day subjective decisions, it can fail in ways that do not fit human error patterns – and we should be prepared for that. “We cannot outsource our responsibilities to machines,” she said.
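The brain-inspired training loop mentioned in point 2 can be illustrated with a toy example: a two-layer network learning XOR via backpropagation. The network size, learning rate and iteration count below are arbitrary illustrative choices, not taken from any of the cited research.

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer weights
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # forward pass: compute the network's prediction
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    # backward pass: propagate the error layer by layer (chain rule)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent update
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)
```

Each iteration nudges the weights in the direction that reduces the error – the mean-squared loss recorded in `losses` falls as training proceeds.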
Risks posed by autonomous experimentation
The researchers believe that autonomous experimentation is one area that hasn’t received enough attention. Going forward, as AI is increasingly deployed in subjective decisions, the way user experiments are conducted can have a major impact on outcomes.
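What distinguishes autonomous experimentation from an ordinary A/B test is that the system itself decides whom to experiment on and when, continuously shifting users toward whichever variant looks best. A standard way to sketch this is an epsilon-greedy multi-armed bandit – the variant rates and parameters below are hypothetical, not drawn from the paper.

```python
import random

def epsilon_greedy(true_rates, steps=10_000, eps=0.1, seed=1):
    """Autonomous experimentation sketch: explore variants at random a
    fraction eps of the time, otherwise exploit the current best estimate."""
    rng = random.Random(seed)
    counts = [0] * len(true_rates)    # how often each variant was shown
    values = [0.0] * len(true_rates)  # running estimate of each variant's rate
    for _ in range(steps):
        if rng.random() < eps:                          # explore
            arm = rng.randrange(len(true_rates))
        else:                                           # exploit current best
            arm = max(range(len(true_rates)), key=values.__getitem__)
        reward = rng.random() < true_rates[arm]         # simulated user response
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # update running mean
    return counts, values

counts, values = epsilon_greedy([0.05, 0.10, 0.15])
```

No human reviews each decision: the loop automatically concentrates traffic on the best-performing variant, which is precisely why the ethics of such systems are harder to oversee than a one-off experiment.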
US Elections — data to change voter behaviour: Consider the recent revelations about Cambridge Analytica using data to change voter behaviour in the election that saw Donald Trump take the Oval Office. From digging into Facebook data to influencing voter behaviour, Cambridge Analytica exploited the social media profiles of a huge swath of the American electorate without their knowledge.
The subsequent data leak, which has only just come to the fore, raises questions about data privacy and ethics. Reportedly, only a small fraction of the users had agreed to release their information to a third party. This large-scale personality profiling proved an effective tool for swaying the election in favour of the Republican party. By building psychographic profiles at scale, the company was able to predict voting outcomes, political beliefs and areas that could be swung in its favour.
Tech giants are sitting on a huge corpus of data: Can data monopoly give rise to threat models? Given the near-monopoly they hold over this data, Amazon, Google, Facebook and Baidu, among other web companies, have the power to leverage it to tweak user emotions and behaviour – exactly the kind of threat model AI ethics researchers have been fearing. Crawford, who has repeatedly emphasized the need for more fairness and transparency in computational systems, said at recent conferences that the industry is in need of an ethical wake-up call.
Tech giants are making major omissions in introducing fairness into AI systems: Facebook’s attempt to spot discriminatory ads has met with mixed success. A ProPublica report highlighted how Facebook’s ad system enabled discrimination in rental housing by allowing advertisers to exclude certain ethnic groups based on race and religion. Despite tweaking its advertising policies, the company still failed to flag ads that promoted discrimination and racial bias.
Given the increasing importance of ethics in AI, New York University opened the AI Now Institute in November 2017 to study the social implications of artificial intelligence. Its core focus is to lend a sociological and philosophical perspective to fine-tuning AI systems, helping algorithmic systems produce objective results. In their rush to invent new products, tech companies often skip the in-depth analysis needed to arrive at the best strategy, and over time this can lead to ingrained bias.