Machine learning-based analysis of human functional magnetic resonance imaging (fMRI) patterns has enabled many discoveries. The application of computer vision to neuroscience has likewise produced approaches that offer insight into how the brain works and how it carries out its functions. Research from Kyoto University has shown that careful analysis of fMRI data can enable the visualisation of perceptual content in the brain. In a sense, the research shows that we can project images from our brains by analysing fMRI recordings.
In their study, the neuroscientists present a novel image reconstruction method in which the pixel values of an image are optimised so that its deep neural network (DNN) features become similar to those decoded from human brain activity at multiple layers. They found that the generated images resembled the stimulus images shown to participants, for both natural images and artificial shapes. The model, trained only on natural images, successfully generalised to the reconstruction of artificial shapes. This shows that the model genuinely ‘reconstructs’ or ‘generates’ images from brain activity rather than simply matching them. The results suggest that hierarchical visual information in the brain can be effectively combined to reconstruct perceptual and subjective images.
Visualisation: The Greatest Challenge Of Neuroscience
The externalisation and visualisation of states of the mind is a challenging goal in neuroscience. Decoding and encoding methods that could render human brain activity into images existed, but they were not very effective: they were essentially limited to image reconstruction with low-level image bases, and so failed to combine visual features from multiple hierarchical levels.
The researchers present a novel approach, named deep image reconstruction, to visualise perceptual content from human brain activity. It combines feature decoding from fMRI signals with image-generation methods recently developed in machine learning. The reconstruction algorithm starts from a random image and iteratively optimises the pixel values so that the DNN features of the input image become similar to those decoded from brain activity across multiple DNN layers.
The resulting optimised image is taken as the reconstruction from the brain activity. A deep generator network (DGN) was also introduced as a prior to keep the reconstructed images close to natural images. The decoders themselves are trained to predict the DNN features of viewed images from fMRI activity.
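The core optimisation loop can be illustrated with a toy sketch. Here a fixed random linear map stands in for the DNN feature extractor (the actual study uses features from multiple layers of a pretrained convolutional network), and the "decoded" features are taken from a known target image rather than from brain activity; both are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a DNN feature extractor: a fixed random linear map.
W = rng.normal(size=(32, 64))          # maps a 64-pixel image to 32 features

def features(img):
    return W @ img

# Hypothetical "decoded" features, here computed from a known target image
# (in the real method these come from decoders applied to fMRI activity).
target_img = rng.normal(size=64)
decoded = features(target_img)

# Start from a random image and iteratively optimise its pixel values so
# that its features approach the decoded features: gradient descent on the
# squared feature-space error.
img = rng.normal(size=64)
lr = 0.005
for _ in range(500):
    err = features(img) - decoded      # feature-space residual
    grad = W.T @ err                   # gradient of 0.5*||err||^2 w.r.t. pixels
    img -= lr * grad

final_loss = 0.5 * np.sum((features(img) - decoded) ** 2)
```

With a real multi-layer network the gradient would be obtained by backpropagation through the network, and losses from several layers would be summed, but the structure of the loop is the same.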
Stages of Experiments
The experiments that the researchers conducted consisted of five distinct stages:
- Training natural-image sessions
- Test natural-image sessions
- Geometric-shape sessions
- Alphabetic-letter sessions
- Mental-imagery sessions
The decoders were trained using fMRI data measured while subjects viewed natural images. The trained decoders were then used to predict DNN features from independent test fMRI data collected during the presentation of novel natural images and artificial shapes, and during mental imagery. The decoded features were then fed into the reconstruction algorithm.
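The decoder-training step is essentially regularised linear regression from voxel activity to DNN feature values. The sketch below uses synthetic stand-in data (the trial, voxel, and feature counts, and the noise level, are all illustrative assumptions) and closed-form ridge regression to fit one linear decoder per feature dimension.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 300 training trials, 100 voxels, 32 DNN features.
n_trials, n_voxels, n_feats = 300, 100, 32
true_map = rng.normal(size=(n_voxels, n_feats)) / np.sqrt(n_voxels)

X = rng.normal(size=(n_trials, n_voxels))                       # fMRI patterns
Y = X @ true_map + 0.1 * rng.normal(size=(n_trials, n_feats))   # DNN features

# Ridge regression (closed form): one linear decoder per feature dimension.
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

# Predict the feature vector for a held-out trial and compare with the
# noiseless ground-truth features.
x_new = rng.normal(size=n_voxels)
pred = x_new @ W_hat
truth = x_new @ true_map
r = np.corrcoef(pred, truth)[0, 1]
```

Real fMRI data would of course require preprocessing, voxel selection, and cross-validated regularisation; this only shows the shape of the decoding model.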
The reconstructions obtained with the deep generator network capture the objects’ dominant structures in the images. Fine structures reflecting semantic aspects, such as faces, eyes, and texture patterns, were also generated in several images. These results were confirmed with a previously published dataset, yielding reconstructions quantitatively similar to those in the present study.
Effect of natural priors and multilayer networks on reconstructions
The effect of the natural-image prior was measured by comparing reconstructions with and without the DGN. The results show that the DGN-based natural-image prior is highly useful and enhances the perceptual similarity of the reconstructed images. A side study also examined the importance of using multiple DNN layers. An independent rater was presented with an original image and a pair of reconstructed images, both from the same original but generated with different combinations of layers, and indicated which reconstruction looked more similar to the original. The assessment showed that reconstructions drawing on a larger number of DNN layers were rated better.
The researchers also verified that the method is not restricted to a specific image domain by testing whether the reconstruction generalises to artificial shapes. This was a challenging task, since training used natural images only. The results show that artificial shapes were reconstructed with moderate accuracy (69.4% by pixel-wise spatial correlation, 92.3% by human judgment), indicating that the model indeed ‘reconstructs’ or ‘generates’ images from brain activity.
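A pixel-wise-correlation accuracy of this kind is typically computed as a pairwise identification score: a reconstruction counts as correct against a foil stimulus when it correlates more with its own stimulus than with the foil. A minimal sketch, using made-up flattened images in place of real stimuli and reconstructions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 40 stimulus images and their reconstructions, flattened.
n, d = 40, 256
stimuli = rng.normal(size=(n, d))
recons = stimuli + 1.5 * rng.normal(size=(n, d))   # noisy copies, for illustration

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Pairwise identification: a reconstruction scores a win against a foil
# stimulus when it correlates more with its own stimulus than with the foil.
wins, comparisons = 0, 0
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        comparisons += 1
        if corr(recons[i], stimuli[i]) > corr(recons[i], stimuli[j]):
            wins += 1

accuracy = wins / comparisons
```

Chance level for this two-alternative comparison is 50%, which is why figures such as 69.4% indicate above-chance but imperfect reconstruction.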
The tests and experiments demonstrated that the approach can provide a unique window into our internal world by translating brain activity into images via hierarchical visual features. The researchers used signed-rank tests to examine differences in assessed reconstruction quality across conditions. They also found that emphasising high-level visual information in hierarchical visual features may help resolve the ambiguity of luminance by incorporating information about semantic context.
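The signed-rank test mentioned above compares paired scores without assuming normality. The sketch below computes the Wilcoxon signed-rank statistic by hand on invented paired quality scores (the scores and their distributions are assumptions for illustration; in practice one would use `scipy.stats.wilcoxon`).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical paired quality scores for reconstructions produced under two
# conditions (e.g. with vs. without the DGN prior), one pair per image.
with_dgn = rng.normal(loc=0.7, scale=0.1, size=20)
without_dgn = rng.normal(loc=0.5, scale=0.1, size=20)

# Wilcoxon signed-rank statistic: rank the absolute paired differences,
# then sum the ranks of the positive and negative differences separately.
diff = with_dgn - without_dgn
order = np.argsort(np.abs(diff))
ranks = np.empty(len(diff))
ranks[order] = np.arange(1, len(diff) + 1)
w_plus = ranks[diff > 0].sum()
w_minus = ranks[diff < 0].sum()
```

The smaller of `w_plus` and `w_minus` is compared against a critical value (or converted to a p-value) to decide whether one condition reliably outscores the other.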
This series of experiments in Kyoto developed a method to “see” inside people’s minds using an fMRI scanner, which detects changes in blood flow in the brain. According to the team at Kyoto University, this breakthrough opens a “unique window into our internal world”.