Artificial intelligence-based facial recognition systems have found several applications, mainly in security. In many cases, these systems have tracked down criminals, even in large crowds.
In August 2018, a man travelling under a false identity was caught by an AI facial recognition system at Washington Dulles International Airport. It was the first time the airport's newly installed facial recognition system had actually been of help and pinned down an imposter. The system flagged that the man's face did not match his passport, and when officers searched him thoroughly, they found his genuine Republic of Congo passport hidden inside his shoe. Eleven US airports currently use this system.
In 2014, an AI facial recognition system finally located a child sex abuse suspect from the US who had been hiding in Nepal for several years. The man had reportedly been living comfortably there under an assumed identity. The system matched his 'wanted' poster with the photo on his fraudulent passport, which is how the authorities finally discovered his location and arrested him.
Earlier this year, a fugitive in China was arrested from a crowd of thousands of people during a live show. The man was wanted for a number of undisclosed economic crimes and was listed in the national online system. Despite the crowd, the AI system identified him and matched his face against its criminal dataset.
Face detection is the step in which the computer takes the face seen in the video as its input. It involves deciding whether the image contains a face, by training the system on features such as the eyes, nose and lips. The AI system is trained on sets of images that contain faces and sets that do not, with labels indicating which is which. Some of the methods used are eigenface-based approaches, distribution-based methods, neural networks, support vector machines, sparse networks of winnows, naïve Bayes classifiers, hidden Markov models and inductive learning, among others. All of these methods work on the same general idea: deciding whether an image is a face or not.
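As a toy illustration of one of these methods, the sketch below trains a Gaussian naïve Bayes classifier to separate "face" from "non-face" samples. The three-number feature vectors and all their values are invented purely for illustration; a real detector works on image windows with far richer features.

```python
import math

# Synthetic training data: each sample is a tiny feature vector
# (imagine, say, average intensities of three image regions).
# All numbers here are made up for illustration.
face_samples = [[0.20, 0.50, 0.40], [0.25, 0.55, 0.35], [0.18, 0.48, 0.42]]
nonface_samples = [[0.70, 0.70, 0.70], [0.65, 0.72, 0.68], [0.75, 0.69, 0.71]]

def fit_gaussian(samples):
    """Per-feature mean and variance: the class model for Gaussian naive Bayes."""
    n, dims = len(samples), len(samples[0])
    means = [sum(s[d] for s in samples) / n for d in range(dims)]
    variances = [sum((s[d] - means[d]) ** 2 for s in samples) / n + 1e-6
                 for d in range(dims)]
    return means, variances

def log_likelihood(x, model):
    """Log probability of x under the per-feature Gaussian model."""
    means, variances = model
    return sum(-0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
               for xi, mu, var in zip(x, means, variances))

face_model = fit_gaussian(face_samples)
nonface_model = fit_gaussian(nonface_samples)

def is_face(x):
    """Label x as a face if the face model explains it better."""
    return log_likelihood(x, face_model) > log_likelihood(x, nonface_model)

print(is_face([0.22, 0.51, 0.39]))  # near the face cluster -> True
print(is_face([0.71, 0.70, 0.69]))  # near the non-face cluster -> False
```

The key point is that nothing face-specific is hard-coded: the labelled examples alone determine the decision, which is exactly what "conveying to it which are faces and which aren't" amounts to.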
Feature extraction involves recording specific facial features after scanning the detected face; wherever the scan finds a sharp change in the image, that region is saved. Features such as certain lines, corners, patches and moles are detected, and distances such as the gap between the eyes, the length of the nose and the shape of the eyebrows or cheekbones are recorded. The system identifies over 80 nodal points on the human face and stores the resulting data as a faceprint.
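A minimal sketch of the faceprint idea, using four hypothetical 2-D landmark coordinates in place of the 80-plus nodal points a real system detects. The coordinates and the choice of distances are assumptions for illustration only:

```python
import math

# Hypothetical landmark coordinates in pixel space.
landmarks = {
    "left_eye":  (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose_tip":  (50.0, 60.0),
    "mouth":     (50.0, 80.0),
}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def faceprint(pts):
    """Encode a face as ratios of landmark distances.

    Dividing every distance by the inter-eye gap makes the print
    insensitive to how large the face appears in the frame.
    """
    eye_gap = dist(pts["left_eye"], pts["right_eye"])
    return (
        dist(pts["left_eye"], pts["nose_tip"]) / eye_gap,
        dist(pts["right_eye"], pts["nose_tip"]) / eye_gap,
        dist(pts["nose_tip"], pts["mouth"]) / eye_gap,
    )

print(faceprint(landmarks))
```

Because the print is built from ratios, the same face photographed closer or further away yields the same tuple, which is what makes such measurements usable as a stored signature.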
This is the final step, where imposters or criminals are tracked down: the face is matched with a name or an identity. The system runs through all of its stored pictures (or the passport photo, in the case of airport security) and reports whether the face matches and, if so, with whom. It also finds the characteristics that best describe the image. Algorithms used in this step include eigenfaces, local binary patterns histograms (LBPH), fisherfaces, scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), to name a few. All of these algorithms take the extracted features as input and match them to establish the true identity. The system does all of this almost instantly, saving time and effort while remaining accurate.
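To make one of these algorithms concrete, here is a minimal LBPH-style sketch: it computes 8-neighbour local binary pattern codes over grayscale images, collects them into 256-bin histograms, and identifies a probe image by the nearest histogram in a small gallery. The tiny 4×4 "images" and the names are invented for illustration; a real system works on aligned face crops and compares region-wise histograms.

```python
def lbp_histogram(img):
    """8-neighbour local binary pattern codes, collected into a
    256-bin histogram (the 'H' in LBPH)."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    # Offsets of the 8 neighbours, clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= centre:
                    code |= 1 << bit
            hist[code] += 1
    return hist

def l1_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

def identify(probe, gallery):
    """Return the gallery name whose histogram is closest to the probe's."""
    probe_hist = lbp_histogram(probe)
    return min(gallery,
               key=lambda name: l1_distance(probe_hist, lbp_histogram(gallery[name])))

# Synthetic gallery: two made-up 4x4 grayscale patterns.
alice = [[10, 20, 10, 20], [20, 10, 20, 10], [10, 20, 10, 20], [20, 10, 20, 10]]
bob   = [[5, 5, 5, 5], [5, 90, 90, 5], [5, 90, 90, 5], [5, 5, 5, 5]]
gallery = {"alice": alice, "bob": bob}

# A probe that differs from alice's pattern by a single pixel.
probe = [[10, 20, 10, 20], [20, 10, 20, 10], [10, 20, 10, 21], [20, 10, 20, 10]]
print(identify(probe, gallery))  # -> alice
```

The histogram step is what gives LBPH its robustness: small pixel-level changes barely move the histogram, so the probe still lands nearest its true identity.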
Even though facial recognition is a boon for tracking down criminals, and makes security checks faster and less burdensome for officials, it has a long way to go before it is 100 percent reliable in every field of application. As the popular saying "garbage in, garbage out" suggests, the accuracy of an AI system depends entirely on the quality and quantity of the data used for its training. Outdated and sparse data will always give unreliable results.
Amazon's face recognition system, for example, falsely matched 28 members of Congress against a database of mugshots, flagging them as criminals. In another case, the Massachusetts State Police facial recognition system wrongly flagged a driver's license as fake because it mistook the license holder for another person.
There are several challenges that still need to be overcome, such as pose variations, feature occlusion and expressions. The best view for face detection is obviously the frontal view, but that is not always available. Features such as beards, caps and glasses can block parts of the face, and facial gestures can alter features as well. None of these factors favours smooth detection. Moreover, if the camera carrying out the detection is of poor quality, the entire system will be compromised. All of these problems need to be solved before the AI system can be deployed on every platform and run successfully.