From fake news to deepfakes, artificial intelligence companies and researchers are doubling down on tackling the spread of fabricated news, images and videos. Global tech giants such as Adobe, Facebook and Google are investing in AI applications to counter the fake videos and images flooding the internet. Over the last few years, the canvas of digital forensic applications has grown rapidly. Interestingly, it is not just digital natives like Adobe and Facebook who rely on these tools; law enforcement agencies, too, use them to tackle digital crime.
Given the social environment in India and the role fake images and videos can play in swaying public opinion, separating authentic images from tampered ones is becoming increasingly challenging. A new breed of research focuses exclusively on digital forensics, aiming to prevent people from falling prey to fake videos and images circulated for unscrupulous business or political purposes.
Rise Of Fake Images/Videos On The Internet
- Recent advances in image and video editing techniques are now widely available.
- Some of the most common tampering or manipulation techniques are splicing, copy-move and removing regions within an image.
- Until recently, sophisticated image and video manipulation was largely restricted to spy agencies, but the proliferation of easy-to-use tools has made it widely accessible to internet trolls.
- With such simple tools, it is possible to create a false image by filling in image regions, generate a video from another person's speech, or insert objects into a scene with only rudimentary editing skills.
- Standard supervised methods have so far been effective for image and video forensics. However, researchers assert that it is not possible to collect enough manipulated images and videos to generate the training data supervised methods need to fully succeed.
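Of the tampering techniques listed above, copy-move is the simplest to illustrate: a region of an image is pasted elsewhere in the same image, so the forgery leaves behind duplicated pixel blocks. The sketch below is a minimal, hypothetical block-matching detector (exact matching only, no robustness to recompression) and is not drawn from any of the papers discussed here:

```python
import numpy as np

def copy_move_candidates(img, block=8):
    """Flag pairs of identical blocks in a grayscale image -- the classic
    cue for a copy-move forgery. Illustrative helper, exact matches only."""
    h, w = img.shape
    seen = {}
    matches = []
    for y in range(0, h - block + 1):
        for x in range(0, w - block + 1):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                py, px = seen[key]
                # Ignore trivially overlapping neighbouring windows.
                if abs(py - y) >= block or abs(px - x) >= block:
                    matches.append(((py, px), (y, x)))
            else:
                seen[key] = (y, x)
    return matches

# Forge an image by copying one region onto another, then detect it.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
img[20:28, 20:28] = img[2:10, 2:10]          # the copy-move tamper
print(copy_move_candidates(img))             # reports the duplicated blocks
```

Real detectors match on robust block features (DCT coefficients, Zernike moments) rather than raw bytes, so they survive the slight pixel changes introduced by resaving the image.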
Adobe Research Doubles Down On Altered Images With New Research
According to reports, Adobe is now using machine learning to automate digital forensics tools. In a CVPR paper, Learning Rich Features for Image Manipulation Detection, researchers from Adobe and the University of Maryland argued that image manipulation detection differs from traditional semantic object detection because it must pay more attention to tampering artefacts than to image content, which suggests that richer features need to be learned. The researchers proposed a two-stream Faster R-CNN network, trained end-to-end, to find the tampered parts of a manipulated image.
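The second stream in that architecture feeds on noise residuals rather than RGB values: a pasted region often carries a different noise fingerprint than its surroundings, and a steganalysis-style high-pass (SRM) filter makes that mismatch visible. The following is a minimal numpy sketch of one such filter, with a toy spliced patch; it is an illustration of the idea, not the paper's implementation:

```python
import numpy as np

# A well-known SRM high-pass kernel from steganalysis; filters of this
# kind supply the input to the paper's noise stream.
SRM = np.array([[-1,  2,  -2,  2, -1],
                [ 2, -6,   8, -6,  2],
                [-2,  8, -12,  8, -2],
                [ 2, -6,   8, -6,  2],
                [-1,  2,  -2,  2, -1]], dtype=float) / 12.0

def noise_residual(img):
    """Valid-mode 2-D correlation of a grayscale image with the SRM kernel
    (the kernel is symmetric, so correlation equals convolution here)."""
    h, w = img.shape
    k = SRM.shape[0]
    out = np.zeros((h - k + 1, w - k + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + k, x:x + k] * SRM)
    return out

# A flat region yields a near-zero residual (the kernel sums to zero);
# a pasted patch with a different noise level stands out in the residual map.
base = np.full((24, 24), 128.0)
spliced = base.copy()
spliced[8:16, 8:16] += np.random.default_rng(1).normal(0, 10, (8, 8))
print(np.abs(noise_residual(spliced)).max() > np.abs(noise_residual(base)).max())
```

In the full model, this residual map is processed by a second Faster R-CNN branch whose features are fused with the RGB stream before region classification.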
This has led to a rise in forensic technology to fight fake media, news and image and video manipulation. The need for trustworthy digital information also necessitates accuracy. Researchers from Berkeley traced how digital forensic methods are undergoing rapid change as AI researchers add more teeth to these applications. For example, earlier research showed a wide variety of visual forensics methods for detecting different types of manipulation. One of the earliest approaches used domain knowledge to isolate physical cues within an image.
It drew upon techniques such as signal processing, and focused on cues such as misaligned JPEG blocks, compression quantisation artefacts and camera-hardware fingerprints, among other attributes, the Berkeley paper noted. According to the paper, self-supervised methods hold the most promise in digital forensics since they can generalise to a wider range of manipulated images and videos. But for a forensic algorithm to be foolproof, it should also anticipate forgers who adapt to the detection algorithms. With new advances in computer vision and image editing, there is a greater need for advanced techniques to detect fake images and videos.
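The JPEG-block cue mentioned above comes from the fact that JPEG compresses images in an 8x8 grid, leaving slightly larger pixel jumps at grid boundaries; a spliced patch saved from another image often carries a grid that no longer lines up. A toy blockiness measure (hypothetical, for illustration only) can expose an aligned grid:

```python
import numpy as np

def blockiness(img, period=8):
    """Ratio of the mean absolute horizontal gradient at 8x8 block
    boundaries to the mean elsewhere. Values well above 1 suggest an
    aligned JPEG-style compression grid. Illustrative cue only."""
    diffs = np.abs(np.diff(img.astype(float), axis=1))
    cols = np.arange(diffs.shape[1])
    at_boundary = (cols % period) == period - 1   # jumps across x=8,16,...
    return diffs[:, at_boundary].mean() / diffs[:, ~at_boundary].mean()

# Simulate JPEG-style blocking: piecewise-constant 8x8 blocks plus mild noise.
rng = np.random.default_rng(2)
blocky = rng.integers(0, 256, size=(4, 4)).repeat(8, 0).repeat(8, 1).astype(float)
blocky += rng.normal(0, 1, blocky.shape)
print(blockiness(blocky) > 2)   # strong grid signature at multiples of 8
```

Forensic tools apply this idea locally: a region whose grid phase disagrees with the rest of the image is a splicing candidate.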
Protecting AI Systems From Attacks With GANs
Today, adversarial training, popularly known through GANs, is widely used by leading tech giants such as Facebook to improve the experience of users on its social media platform and better predict what users want to see based on their previous behaviour. According to Yann LeCun, Facebook’s AI chief, GANs (originally developed by Ian Goodfellow) are among the most important recent ideas in machine learning. GANs can work not only on image and text recognition but also on higher-order functions like reasoning, prediction and planning, rivalling the way humans think and behave. “Adversarial training lets the system develop whatever it wants, as long as it’s within the set that the discriminator likes. This solves the ‘blurriness’ problem when predicting under uncertainty,” LeCun shared in a Quora session. According to Goodfellow, GANs create compelling adversarial examples, which can be used to train AI systems to develop a robust defence.
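The adversarial examples Goodfellow describes are inputs nudged just enough to fool a classifier; his fast gradient sign method (FGSM) perturbs each input feature in the direction that most increases the loss. Below is a minimal numpy sketch on a toy logistic classifier (all weights and inputs are illustrative, not from any production system):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, w, b):
    """Binary cross-entropy of a logistic classifier on one example."""
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method: step each feature of x by eps in the
    sign of the loss gradient. For a logistic model the gradient w.r.t.
    x is (p - y) * w, so no autodiff is needed."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy classifier and a confidently, correctly classified input.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.8, -0.4, 0.3])
y = 1.0
x_adv = fgsm(x, y, w, b, eps=0.5)
print(loss(x_adv, y, w, b) > loss(x, y, w, b))   # perturbation raises the loss
```

Adversarial training then folds such perturbed examples back into the training set, which is the "robust defence" the article refers to.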