Brain-to-brain interaction has been a staple of science fiction for decades. In the X-Men comics, Charles Xavier uses Cerebro to find mutants, and in the movie Transcendence, Dr Will Caster works on a human-to-computer interface. Now, the brain-to-brain interface is becoming a reality.
A recent study led by Indian-origin researcher Rajesh P. N. Rao from the University of Washington showcases a brain-to-brain interface called BrainNet. The research focuses on collaborative problem solving and is reportedly the first multi-person, non-invasive, direct brain-to-brain interface to successfully enable interaction between three people.
There have been related efforts in the past, such as BrainGate, a brain-computer interface developed by a consortium of researchers from several universities. Stanford University researchers have created a brain-computer interface that lets people with paralysis type at high speeds via direct brain control, and Elon Musk's Neuralink is also working on a brain-computer interface. But no prior research has successfully demonstrated a brain-to-brain interface for a group of people.
BrainNet combines two major components:
- Electroencephalography (EEG)
- Transcranial magnetic stimulation (TMS)
EEG is used to record brain signals, and TMS delivers information to the brain noninvasively by inducing electrical activity in specific brain areas. The system was tested on a game of Tetris with a group of fifteen volunteers, divided into five groups of three, in which one participant acted as the 'Receiver' and the other two as 'Senders'.
Each group played a game of Tetris. The only decision to make was whether or not to rotate the falling block by 180 degrees so that it fit the pattern below. The Receiver could see only the top of the screen, where the incoming block appears, but not the bottom, so he could not decide on his own whether the block needed to be rotated.
The two Senders could see the whole screen and had to make the decision for him. The Senders controlled their brain signals by staring at one of two LEDs placed on either side of the screen, one flashing at 15 Hz and the other at 17 Hz. When the EEG picked up a 15 Hz signal from a Sender's brain, it moved a cursor toward the right-hand side of the screen and a signal was sent to the Receiver to rotate the block. A layer of electro-conductive gel was applied between the gold-plated electrodes, from which the signals were acquired, and the participant's scalp. The Senders' decisions were sent over a TCP/IP network to the Receiver's side, where they were converted into a single TMS pulse delivered to the Receiver's occipital cortex. To keep the data rate low, the message was a single phosphene flash indicating whether or not the block had to be rotated by 180 degrees: a flash for yes, no flash for no.
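The Sender side of this pipeline boils down to detecting which flickering LED a participant is staring at, since a steady visual flicker evokes brain activity at the same frequency. The sketch below is a minimal illustration of that idea, assuming a simple FFT-based comparison of spectral power at 15 Hz and 17 Hz; the sampling rate and window length are illustrative assumptions, not the parameters used in the BrainNet study.

```python
import numpy as np

def classify_ssvep(eeg_window, fs=250.0, freqs=(15.0, 17.0)):
    """Return the candidate flicker frequency with the most spectral power.

    eeg_window: 1-D array of EEG samples (a single analysis window).
    fs: sampling rate in Hz (an assumed value for illustration).
    """
    n = len(eeg_window)
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    bins = np.fft.rfftfreq(n, d=1.0 / fs)
    # Compare power at the FFT bin nearest each candidate frequency
    powers = [spectrum[np.argmin(np.abs(bins - f))] for f in freqs]
    return freqs[int(np.argmax(powers))]

# Synthetic one-second window dominated by a 15 Hz oscillation plus noise
rng = np.random.default_rng(0)
fs = 250.0
t = np.arange(0, 1.0, 1.0 / fs)
window = np.sin(2 * np.pi * 15.0 * t) + 0.3 * rng.standard_normal(len(t))
decision = classify_ssvep(window, fs)
print(decision)  # 15.0, which the system would interpret as "rotate"
```

In the study itself the detected frequency drives cursor feedback on the Sender's screen; here the function simply returns the winning frequency, and a real system would apply it to successive EEG windows in a loop.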
Each session consisted of 16 independent trials; in half of them the block had to be rotated, and in the other half it did not.
All three participants were in different rooms and could communicate only through the brain-to-brain interface. The researchers also tested whether, over the course of the interaction, the Receiver could learn which of the two Senders' decisions to trust.
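The trust question can be framed as a simple reliability estimate: if the Receiver tracks how often each Sender's past suggestions matched the correct outcome, he can weight future suggestions accordingly. The class below is an illustrative sketch of that idea, not the mechanism described in the paper; the names and the accuracy-based rule are assumptions for demonstration.

```python
from collections import defaultdict

class TrustTracker:
    """Track each Sender's historical accuracy and follow the more
    reliable one when their suggestions disagree (illustrative only)."""

    def __init__(self):
        self.correct = defaultdict(int)
        self.total = defaultdict(int)

    def update(self, sender, suggestion, actual):
        # Record whether this Sender's suggestion matched the true outcome
        self.total[sender] += 1
        if suggestion == actual:
            self.correct[sender] += 1

    def reliability(self, sender):
        if self.total[sender] == 0:
            return 0.5  # no evidence yet: assume chance level
        return self.correct[sender] / self.total[sender]

    def choose(self, suggestions):
        # suggestions: {sender_name: True/False rotate decision}
        best = max(suggestions, key=self.reliability)
        return suggestions[best]

tracker = TrustTracker()
for outcome in [True, False, True, True]:
    tracker.update("sender_a", outcome, outcome)        # always right
    tracker.update("sender_b", not outcome, outcome)    # always wrong
print(tracker.choose({"sender_a": True, "sender_b": False}))  # True
```

In the actual experiment the Receiver had to learn this implicitly from the quality of the information arriving as TMS pulses, rather than from an explicit accuracy log.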
Several earlier attempts at brain-to-brain communication were made possible by advances in machine learning, and they focused mainly on improving human communication and social-interaction capabilities. This research, by contrast, aims to support collaborative problem solving through a "social network" of connected brains, which makes it a breakthrough in the field.
The game of Tetris used in this research was deliberately simple. With further refinement, the method could be applied to more complex tasks, and the paper suggests the interface could scale up to larger groups of Senders and Receivers. Future work may yield a more robust system for this kind of interface, turning science fiction into reality.