Applied deep learning has produced remarkable applications. At the same time, it is becoming increasingly evident that these AI algorithms can be misused to manipulate audio and video content. This is a dangerous trend that threatens our structures of information and knowledge, and such advances are beginning to create trust issues between the public and artificial intelligence. It is difficult to design policies that rein in such swift technological advances, even though some studies suggest that slowing the pace of AI research could help with these trust issues.
If video, audio and other content can be easily created from scratch, this does not bode well for society in general. The impact of such AI developments on justice systems and social media is immeasurable. An HBR article states, “Trust of AI systems will be earned over time, just as in any personal relationship. Put simply, we trust things that behave as we expect them to. But that does not mean that time alone will solve the problem of trust in AI.”
Understanding The Current Scenario
There are always ways to handle such situations and have AI work for the betterment of society. But the public will look at media with growing suspicion and will have a hard time trusting video testimony. Today, the majority of online content comes from individuals; it is no longer largely produced by companies or governments, as it was before the invention of the printing press. There is an argument that companies should have guidelines in place for governing AI development. Such policies might also restrict the development of AI algorithms that harm society in general.
Artificial intelligence will certainly redefine work in many industries; it will also lead to the creation of new industries, companies and jobs. When it comes to the protection of personal information, many of the same concerns that exist in today’s computer systems also apply to AI, although it is true that AI systems will be more capable of uncovering new information from personal data. The notion of an artificial general intelligence (AGI), an autonomous, self-aware AI system with all human abilities including consciousness, is an extremely ambitious goal for which our scientific understanding is at a supremely early stage. Yet the danger exists even now, because today’s incremental progress is already showing signs of damage and risk.
AI, Manipulating Media And The Consequences
The long-term dangers of AI are often overstated, but its short-term dangers are just as often understated. In the era of deepfakes, we need more introspection about what digital technology means for the information age and for judicial inquiry. Deepfake, a portmanteau of “deep learning” and “fake”, is an AI-based human image synthesis technique used to combine and superimpose existing images and videos onto source images or videos. The danger of deepfakes is that manipulated content can no longer be reliably distinguished from genuine content. AI researchers say everyone should know how quickly media can be corrupted, and that addressing these trust issues is a must.
It won’t be long before online content becomes indistinguishable from reality to human senses. It is already easy to alter facial expressions in an ordinary RGB video, and people’s voices and lip movements can be adapted to follow a chosen script. If such AI tools become a commodity, an intended (false) message can be communicated very easily. In every dimension, these technologies will be far more dangerous than the advent of fake news.
The generative adversarial network (GAN) is a major scientific breakthrough, regarded by some as the biggest achievement since backpropagation. The main idea of a GAN is to learn a generative model for images or sounds by pitting it against an adversarial discriminator model, which tries to distinguish real content from fake. An impressively realistic demonstration of GANs was presented recently by NVIDIA, but many other advances in machine learning and computer vision power the current progress.
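The adversarial idea can be sketched in a few dozen lines. The toy example below is purely illustrative (it is not any production deepfake system): a one-dimensional linear generator learns to mimic samples drawn from a Gaussian, while a logistic discriminator tries to tell real samples from generated ones. All hyperparameters and distributions here are arbitrary choices for the sketch.

```python
import math
import random

def sigmoid(x):
    # Clamp to avoid overflow in math.exp for extreme inputs.
    x = max(-30.0, min(30.0, x))
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)

# Generator: fake = a * z + b, with noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w * x + c), estimates P(x is real).
w, c = 0.0, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    real = [random.gauss(3.0, 0.5) for _ in range(batch)]  # "real" data
    z = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fake = [a * zi + b for zi in z]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = [sigmoid(w * x + c) for x in real]
    d_fake = [sigmoid(w * x + c) for x in fake]
    grad_w = (sum(-(1 - dr) * x for dr, x in zip(d_real, real))
              + sum(df * x for df, x in zip(d_fake, fake))) / batch
    grad_c = (sum(-(1 - dr) for dr in d_real)
              + sum(df for df in d_fake)) / batch
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = [sigmoid(w * (a * zi + b) + c) for zi in z]
    grad_a = sum(-(1 - df) * w * zi for df, zi in zip(d_fake, z)) / batch
    grad_b = sum(-(1 - df) * w for df in d_fake) / batch
    a -= lr * grad_a
    b -= lr * grad_b

samples = [a * random.gauss(0.0, 1.0) + b for _ in range(10000)]
fake_mean = sum(samples) / len(samples)
# The generator's output mean drifts toward the real data mean (3.0).
```

Real deepfake systems use deep convolutional networks and enormous datasets, but the two-player structure is exactly this alternation: the discriminator sharpens its test for fakes, and the generator adapts until its output passes that test.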
Artificial Intelligence And Trust
Today’s AI has learned to predict well. Given the appropriate data, it can predict a heart attack as well as a crime. Many decisions require precise forecasts, and AI systems already do a great job at forecasting: they can predict natural phenomena such as rain, and researchers are even applying them to locating future earthquakes. In some of these domains, such systems are trusted more than humans.
Since trust is an important component of the connection between people and artificial systems, we need to study it carefully, examining the trends in trust between man and machine in detail. In this regard, there is a great divide between AI optimists and AI sceptics. Each side uses current news and data to support its existing stand, which points to confirmation bias.
Some also believe that AI will lead to a deeply divided society, depending on who reaps the fruits of massive automation and AI. People who are ill-trained and ill-equipped to bear the loss of jobs to AI will suffer the most. This in itself is a flashpoint where trust in AI systems becomes really important.
Steps To Make Sure The Public Trusts AI
There are several ways to think about solving the issues that come with the use of AI. One measure is to build AI into more of our everyday products: if the general public has more contact and communication with products that have AI components, that familiarity will help resolve trust issues. Evidence suggests that the more people use other technologies, such as the internet, the more they trust them.
Another approach is to involve more people in the AI decision-making process. One study showed that people who were given a chance to participate in the algorithm-building process were more satisfied: they genuinely believed the algorithm was superior and became regular users of the service.
Opening up the “black box” of machine learning can also help build trust between man and machine. Giving regulators more insight into how the algorithms at Google, Microsoft, Apple and other companies work could likewise boost public sentiment toward AI. Many big companies already release transparency reports about government requests.
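One concrete way to pry open the black box is permutation feature importance, sketched below under simplifying assumptions: a toy two-feature least-squares model on synthetic data where only the first feature actually drives the target (this is a generic interpretability technique, not anything specific to the companies named above). Shuffling one input column at a time and measuring how much the model's error grows reveals which inputs the model genuinely relies on.

```python
import random

random.seed(0)
n = 200

# Synthetic data: only feature 0 actually drives the target y.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n)]
y = [2.0 * x0 + 0.1 * random.gauss(0, 1) for x0, x1 in X]

def fit_coef(j):
    # Per-feature least-squares slope (features here are independent).
    mx = sum(row[j] for row in X) / n
    my = sum(y) / n
    cov = sum((row[j] - mx) * (yi - my) for row, yi in zip(X, y)) / n
    var = sum((row[j] - mx) ** 2 for row in X) / n
    return cov / var

w = [fit_coef(0), fit_coef(1)]

def mse(rows):
    return sum((w[0] * r[0] + w[1] * r[1] - yi) ** 2
               for r, yi in zip(rows, y)) / n

baseline = mse(X)

# Permutation importance: shuffle one column, measure the error increase.
importance = []
for j in range(2):
    col = [row[j] for row in X]
    random.shuffle(col)
    permuted = [list(row) for row in X]
    for i in range(n):
        permuted[i][j] = col[i]
    importance.append(mse(permuted) - baseline)
# importance[0] is large (the model depends on feature 0);
# importance[1] stays near zero (feature 1 is irrelevant).
```

An explanation like this does not expose a model's code or weights, which is why it is attractive for transparency: an outside auditor can apply it to any predictive system they can query, and report which inputs matter most.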