A group of researchers from Kyoto University in Japan has figured out a way for computers to visualise what you are thinking. Any guesses as to how they achieved it? It's artificial intelligence.
In a study published by Guohua Shen, Tomoyasu Horikawa, Kei Majima and Yukiyasu Kamitani, the team used artificial intelligence to decode thoughts.
Machine learning has previously been applied to brain scans, specifically human functional magnetic resonance imaging (fMRI), to generate visualisations of what a person is looking at, but only for simple, binary images. These reconstructions were limited to combinations of low-level image bases.
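To make the "low-level image bases" idea concrete, here is a minimal sketch in Python. All names and sizes are illustrative assumptions: the image is a tiny 4x4 grid, the bases are simple local patches, and the weights are random numbers standing in for values that the real method would decode from fMRI activity.

```python
import numpy as np

rng = np.random.default_rng(0)
size = 4  # tiny 4x4 image, purely for illustration

# Multi-scale local image bases: every 1x1 pixel and every 2x2 block.
bases = []
for i in range(size):
    for j in range(size):
        b = np.zeros((size, size)); b[i, j] = 1.0
        bases.append(b)
for i in range(size - 1):
    for j in range(size - 1):
        b = np.zeros((size, size)); b[i:i+2, j:j+2] = 1.0
        bases.append(b)
bases = np.stack(bases)

# Weights that the real method would decode from brain activity
# (random here, as a placeholder).
decoded_weights = rng.random(len(bases))

# The reconstructed image is simply the weighted sum of the bases.
reconstruction = np.tensordot(decoded_weights, bases, axes=1)
print(reconstruction.shape)  # (4, 4)
```

Because every basis is a small local patch, this scheme can only express coarse, blocky content, which is why earlier reconstructions were restricted to simple binary images.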
With this new development, the researchers have found a way to "decode" thoughts using deep neural networks. It could allow scientists to understand more sophisticated "hierarchical" images having multiple layers of colour and structure, for instance a picture of a bird or of a man wearing a cowboy hat.
According to the paper, visual cortical activity can be decoded into the hierarchical features of a deep neural network for the same input image, providing a way to make use of information from hierarchical visual features. The researchers came up with a novel image reconstruction method in which the pixel values of an image are optimised to make its DNN features similar to those decoded from human brain activity at multiple layers.
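The core optimisation can be sketched in a few lines. This is a heavily simplified stand-in, not the authors' implementation: the deep network is replaced by a single fixed random linear feature map `W`, and the "decoded" features are computed from a known target image rather than from fMRI data. The real method matches features across many DNN layers.

```python
import numpy as np

rng = np.random.default_rng(42)
n_pixels, n_features = 64, 32

# Stand-in feature extractor (a real DNN layer would go here).
W = rng.standard_normal((n_features, n_pixels))
target_image = rng.random(n_pixels)
decoded_features = W @ target_image  # placeholder for brain-decoded features

# Start from a noise image and run gradient descent on the pixel values
# so the image's features approach the decoded features.
x = rng.random(n_pixels)
lr = 1e-3
for _ in range(2000):
    residual = W @ x - decoded_features
    x -= lr * 2 * (W.T @ residual)  # gradient of the squared feature error

loss = float(np.sum((W @ x - decoded_features) ** 2))
```

The key point the sketch illustrates: nothing constrains the pixels directly; the image emerges purely from pushing its features toward the decoded ones, which is what lets the method generalise beyond the images it was trained on.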
“We found that the generated images resembled the stimulus images (both natural images and artificial shapes) and the subjective visual content during imagery. While our model was solely trained with natural images, our method successfully generalised the reconstruction to artificial shapes, indicating that our model indeed ‘reconstructs’ or ‘generates’ images from brain activity, not simply matches to exemplars”, noted the researchers.
Deep image reconstruction: Natural images (seen images), GIF version
Left: Seen images
Right: Images reconstructed from brain activity (being optimized) pic.twitter.com/YY0ZDxi7T5
— ‘Yuki’ Kamitani (@ykamit) January 4, 2018
“Our previous method was to assume that an image consists of pixels or simple shapes. But it’s known that our brain processes visual information hierarchically extracting different levels of features or components of different complexities”, they said.
During the research, which was carried out over 10 months, three subjects were shown natural images such as photos of a bird or a person, artificial geometric shapes and alphabetical letters, for varying lengths of time. Brain activity was measured while the subject was looking at one of 25 images. Once the brain activity was scanned, the computer decoded the information to generate visualisations of the subject's thoughts.
Once the technology improves, there are many potential applications. It could allow people to draw pictures or make art simply by imagining something, visualise human dreams, reveal the hallucinations of psychiatric patients, and much more.