Artificial intelligence presents a great opportunity to transform human civilization for good. The availability of vast amounts of data and improved computing architectures has helped realise this vision. AI rose to prominence when researchers showcased its applications in healthcare, education and security. Apart from the technical challenges, AI researchers are now addressing another key aspect that needs more focus: the ethics of applying artificial intelligence and where to draw the line.
Here we list some of the top industry experts who expose the contingency of categories we often take for granted, and who provide a foundational resource for understanding, critiquing and contesting the AI systems currently automating classification across core social domains:
Alex Stamos is a cybersecurity expert, business leader and entrepreneur who is working to improve the security and safety of the Internet through his teaching and research at Stanford University.
Before joining Stanford, Stamos served as the Chief Security Officer of Facebook.
In this role, Stamos led a team of engineers, researchers, investigators and analysts charged with understanding and mitigating information security risks to the company and safety risks to the 2.5 billion people on Facebook, Instagram and WhatsApp.
Follow Stamos here.
Brad Smith is the President and Chief Legal Officer at Microsoft. Smith joined Microsoft in 1993, and before becoming general counsel in 2002 he spent three years leading the Legal and Corporate Affairs (LCA) team in Europe. He leads a team of more than 1,400 business, legal and corporate affairs professionals working in 55 countries. Smith plays a key role in representing the company externally and in leading the company’s work on a number of critical issues, including privacy, security, accessibility, environmental sustainability and digital inclusion.
Follow Smith here.
Kate Crawford is a principal researcher at Microsoft, a distinguished research professor at NYU and the co-founder of the AI Now Institute at NYU. Her recent publications address data bias and fairness, the social impacts of artificial intelligence, predictive analytics and due process, and algorithmic accountability and transparency.
Follow Crawford here.
Guillaume Chaslot is a former YouTube engineer and the founder of AlgoTransparency, and he has worked with the Wall Street Journal and The Guardian to investigate YouTube. AlgoTransparency aims to raise awareness among YouTube users of how the recommendation algorithm can distort the truth and mislead viewers.
Follow Chaslot here.
Zeynep Tufekci is a professor at the UNC School of Information and Library Science. She is also a faculty associate at the Berkman Klein Center for Internet and Society at Harvard University. Her articles on internet security and censorship appear frequently in The New York Times and MIT Technology Review.
Follow Tufekci here.
Arvind Narayanan is a computer science professor at Princeton who studies digital privacy, information security, cryptocurrencies, blockchain, AI ethics and tech policy. Watch his FATML tutorial, “21 Definitions of Fairness.”
Follow Narayanan here.
danah boyd is a Principal Researcher at Microsoft Research and the founder of Data & Society. For over a decade, her research focused on how young people use social media as part of their everyday practices.
She collaborates with a network of researchers on topics such as media manipulation, the future of work, fairness and accountability in machine learning, combating bias in data, and the cultural dynamics surrounding artificial intelligence.
Renée DiResta is the Director of Research at New Knowledge and Head of Policy at the nonprofit Data for Democracy. She is a 2017 Presidential Leadership Scholar, a Staff Associate at the Columbia University Data Science Institute, a Harvard Berkman Klein Center affiliate, and a Founding Advisor to the Center for Humane Technology. She investigates the spread of malign narratives across social networks, assists policymakers in understanding them, and tweets about the same.
Follow DiResta here.
Rachel Thomas was selected by Forbes as one of 20 Incredible Women in AI, earned her math PhD at Duke, and was an early engineer at Uber. She is a professor at the University of San Francisco and co-founder of fast.ai, which created the “Practical Deep Learning for Coders” course.
Her tweets cover a wide range of important topics in AI, sharing both recent research and critiques of its shortcomings.
Follow Thomas here.
AI systems are systems of classification. In brief, they ‘learn’ what they know from data, and they use what they learn to classify what they ‘see.’ Everyone can benefit from a basic understanding of the pitfalls in contemporary AI systems, and these industry experts contribute to this space by bringing sophisticated, sometimes unpopular, perspectives to the fore.
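The learn-from-data-then-classify pattern can be sketched with a minimal nearest-centroid classifier. This is an illustrative toy, not any particular production system; the data points and labels below are hypothetical. The key point is that the classifier’s entire “knowledge” comes from its training data, so skewed or biased data yields skewed classifications:

```python
# Minimal sketch of "AI as classification": learn class centroids
# from labelled data, then assign new points to the nearest centroid.
# All data and labels here are hypothetical, for illustration only.

def fit(samples, labels):
    """Learn the mean (centroid) of each class from training data."""
    centroids = {}
    for label in set(labels):
        points = [s for s, l in zip(samples, labels) if l == label]
        dim = len(points[0])
        centroids[label] = tuple(
            sum(p[i] for p in points) / len(points) for i in range(dim)
        )
    return centroids

def classify(centroids, point):
    """Label a new point by its nearest learned centroid."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], point))

# The training data IS the model's knowledge: whatever bias the
# data encodes, the classifier reproduces at prediction time.
samples = [(1.0, 1.0), (1.2, 0.8), (5.0, 5.0), (4.8, 5.2)]
labels = ["low", "low", "high", "high"]
centroids = fit(samples, labels)

print(classify(centroids, (1.1, 0.9)))  # -> low
print(classify(centroids, (4.9, 5.1)))  # -> high
```

Real systems replace the centroids with millions of learned parameters, but the pitfall the experts above highlight is the same: the model can only reflect the data it was given.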