Pareidolia is a widely documented phenomenon in humans: the tendency to perceive faces in random objects. Take, for example, the two photos above: one of a young Vladimir Putin and the other of Home Alone actor Macaulay Culkin.
This tendency doesn’t stop at faces. Anyone with a trace of imagination can make shapes out of passing clouds, and the same cloud can suggest different objects to different people. Though far less sophisticated, machines too rely on pattern-finding algorithms to classify objects.
Human perceptual judgment of image similarity relies on rich internal representations, ranging from low-level features to high-level concepts, scene properties and even cultural associations.
To check whether machine vision models can capture this notion of similarity, a new dataset dubbed Totally-Looks-Like (TLL) has been introduced. Named after a popular entertainment website, the dataset contains pairs of visually similar images matched by humans.
Improving Perceptual Judgement Of AI
The Totally-Looks-Like (TLL) dataset contains 6,016 image pairs collected from the wild to cover the diversity at which humans operate. Researchers at York University, Canada, conducted experiments attempting to reproduce human-level image matching in machines using features extracted from state-of-the-art deep convolutional neural networks.
To test the degree to which machine-learned representations can reproduce the human-generated pairings, the TLL dataset was used in the experiments. In each human-generated pairing, the left (L) and right (R) sides of the image are treated as two points in a high-dimensional feature space. The distance between L and R is measured, and for each left image all candidate right images are ranked by that distance. The rank of the true match serves as a metric of overall performance.
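This retrieval-style evaluation can be sketched in a few lines, assuming the feature vectors have already been extracted. The array shapes, the cosine-distance choice and all names below are illustrative, not the authors' exact setup:

```python
import numpy as np

def match_ranks(left_feats, right_feats):
    """For each left image, rank all right images by cosine distance
    and return the rank of its true (same-index) partner (1 = best)."""
    # L2-normalise so that a dot product gives cosine similarity
    L = left_feats / np.linalg.norm(left_feats, axis=1, keepdims=True)
    R = right_feats / np.linalg.norm(right_feats, axis=1, keepdims=True)
    dist = 1.0 - L @ R.T               # cosine distance, shape (n, n)
    order = np.argsort(dist, axis=1)   # candidates sorted nearest-first
    # position of the true partner i in row i's sorted candidate list
    return np.array([np.where(order[i] == i)[0][0] + 1
                     for i in range(len(L))])

# Toy example: each right vector is a lightly perturbed copy of its left partner
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 128))
ranks = match_ranks(feats, feats + 0.01 * rng.normal(size=(3, 128)))
print(ranks)  # every true partner ranked first -> [1 1 1]
```

A perfect representation would place every true partner at rank 1; summarising the distribution of these ranks gives an overall score for a feature extractor.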
Each image pair <L, R> appears on the website as a single horizontally concatenated image with a constant width of 401 pixels and a height of 271 pixels. The last column of each image is discarded, and the remainder is split equally into left and right images. In addition, the bottom 26 pixels of each image contain, for each side, a text description of the content.
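Given those dimensions, the split is straightforward. A minimal sketch using a NumPy array in place of the decoded page image (the zero-filled stand-in and variable names are illustrative):

```python
import numpy as np

# Stand-in for a decoded 271x401 RGB image downloaded from the website
pair_img = np.zeros((271, 401, 3), dtype=np.uint8)

# Drop the last column so the remaining 400 columns split evenly
body = pair_img[:, :400]

# Crop off the bottom 26 pixels of text caption, then split down the middle
left  = body[:245, :200]   # L image
right = body[:245, 200:]   # R image

print(left.shape, right.shape)  # (245, 200, 3) (245, 200, 3)
```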
A face detector is run on all images, recording the location of each face. For each detected face, features are extracted using a deep network specifically designed for face recognition.
Visualising the nearest neighbours revealed duplicate (or near-duplicate) images within the L and R image sets. Virtually all pairs below a distance of 0.1 were near-duplicates, so a threshold was set to filter out accidental duplicates. For faces, the threshold was set to 0.5.
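The deduplication step amounts to flagging every pair of images whose feature distance falls below the threshold. A sketch assuming the features are already computed; the 0.1 threshold follows the text, but the Euclidean distance and toy vectors here are illustrative simplifications:

```python
import numpy as np

def near_duplicate_pairs(feats, threshold=0.1):
    """Return index pairs (i, j), i < j, whose Euclidean feature
    distance falls below the near-duplicate threshold."""
    dist = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    i, j = np.where(dist < threshold)
    return [(a, b) for a, b in zip(i, j) if a < b]

# Toy set: items 0 and 1 are near-duplicates, item 2 is distinct
feats = np.array([[0.0, 0.0], [0.05, 0.0], [3.0, 4.0]])
print(near_duplicate_pairs(feats))  # [(0, 1)]
```

One image from each flagged pair can then be dropped so that accidental duplicates do not inflate the matching scores.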
The authors believe that sufficiently generic visual features should be able to reproduce the same similarity measurements without being explicitly trained to do so, just as humans do.
Representation learning is a hot topic in machine learning, with plenty of room for improvement. Experiments such as these show how far machines are from matching humans even at something as fundamental as judging image similarity. But like any other machine learning model, this one can only be as good as the data it is fed. If datasets become more diverse in future, the results should be more promising. And any chance of training machines to match human vision can't be ignored, since most of applied AI deals with object recognition. From unlocking phones to driverless cars, it is ubiquitous and here to stay.
Know more about TLL here