Progress in neuroscience has historically been limited by the size of available datasets. A popular notion holds that producing large, multimodal and complex datasets will help us dig deeper into our minds, and it has encouraged neuroscientists to apply machine learning and data analysis techniques in search of fundamental insights about the brain and the nervous system. Yet datasets of that scale remain out of reach for most. Researchers have therefore tried a different tack: studying the microprocessor as a model organism, and applying today's neuroscience methods to it to see whether those methods really work.
Microprocessors, like our brains, are complex information processing systems, but unlike brains, they are ones we understand at every level. Although microprocessors are artificial, they let us test whether modern neuroscience tools yield understanding when applied to a different information processing system. A paper by Eric Jonas of the University of California, Berkeley, and Konrad Kording of Northwestern University, Chicago, suggests that neuroscience does not yet have the tools to produce a meaningful understanding of even a microprocessor. The researchers therefore put forward the microprocessor as a validation platform for scientists studying complex dynamical systems.
Microprocessor As A Validation Platform
Neuroscientists have developed many advanced techniques for studying neural systems, ushering in an era dominated by big data neuroscience, in which researchers reconstruct brain activity, simulate computation and more. Jonas and Kording therefore asked whether these neuroscience tools could be used to evaluate and understand the complex hierarchical structure of a microprocessor.
Kording and Jonas analysed the chip's connections, the effects of destroying individual transistors, local activity traces and other recordings. They found that many of these measures look strikingly similar between the brain and the microprocessor, and yet even the effectively unlimited data obtainable from a microprocessor did not let the neuroscience methods explain its information processing prowess. This result suggests that, given today's methods, even an unlimited amount of data may not be enough to understand the brain.
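To make the flavour of such analyses concrete, here is a hedged sketch of one of them: dimensionality reduction applied to recorded activity traces. The data below is synthetic, not actual 6502 recordings, and all variable names are illustrative; it only shows the mechanics of finding a few dominant components in many parallel binary traces.

```python
import numpy as np

# Illustrative sketch (synthetic data, not real 6502 traces).
# Each row is a time step, each column one transistor's binary state.
# PCA via SVD finds the few directions that capture most variance,
# the kind of summary the study applied to the chip's activity.

rng = np.random.default_rng(42)
T, N = 500, 50                                   # time steps, transistors
latent = rng.standard_normal((T, 3))             # 3 hidden "signals"
mixing = rng.standard_normal((3, N))
noise = 0.1 * rng.standard_normal((T, N))
traces = (latent @ mixing + noise > 0).astype(float)   # binarized activity

X = traces - traces.mean(axis=0)                 # center each transistor
_, s, _ = np.linalg.svd(X, full_matrices=False)
var_explained = s**2 / np.sum(s**2)
print("variance explained by first 3 PCs:", var_explained[:3].round(3))
```

Low-dimensional structure like this can look meaningful whether or not it reflects how the system actually computes, which is exactly the cautionary point the paper makes.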
An Engineered Model Organism
The researchers used the MOS 6502 (and the virtually identical MOS 6507), processors found in the Apple I, the Commodore 64, and the Atari Video Computer System (VCS). A team had reverse-engineered the microprocessor from physical integrated circuits. Much as in studies of nervous systems, a hybrid of algorithmic and human-based approaches was used to label regions, identify structures and build an accurate model of the processor.
The reconstructed model was good enough to run classic video games, and it generated around 1.5 GB/sec of state information, which meant that real-time big data analysis of the processor was possible. In this analogy, the video games play the role of behaving animals, and the chip activity they elicit stands in for neural recordings. That activity turned out to be so rich and detailed that it seemed a viable target for the best of neuroscience approaches.
Researchers recorded the chip's activity while it ran the Donkey Kong game
The games tested were Donkey Kong (1981), Space Invaders (1978), and Pitfall (1981). The researchers describe this as probing the "naturalistic behaviour of the chip," which is not very different from studying the naturalistic behaviour of specific parts of the brain. There are, of course, many differences between a brain and a microprocessor. In some ways the microprocessor is easier to analyse, since it has a clearer architecture and far fewer components. It is also deterministic, whereas neurons exhibit stochastic behaviour as well.
Understanding A System Clearly
The processor can be used to test our understanding of a system. Because so many neuroscientists are trained in software engineering and computer science, many may have better intuitions about the workings of a processor than about the workings of the brain. The researchers channel this computational thinking into the following questions:
- What is the problem it is seeking to solve via computation?
- What are the characteristics of the underlying implementation (in the case of neurons: ion channels, synaptic conductances, neural connectivity, and so on) that give rise to the execution of the algorithm?
The researchers argue that answering the question "How does a processor compute?" makes it easier to evaluate how much we learn from an experiment or an analysis. When would we be happy with an explanation of how the system works? In this case, the processor embodies algorithms and a specific implementation. Its heart, the arithmetic logic unit, is made up primarily of binary adders, which are built from AND/NAND gates, which in turn are built from transistors. This is not very different from the hierarchy of regions, circuits, microcircuits, neurons and synapses that constitutes our brain.
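That gate-level hierarchy can be made concrete in a few lines. The following is a minimal sketch, not the 6502's actual ALU circuitry: a NAND gate as the transistor-level primitive, NANDs composed into a one-bit full adder, and adders chained into a tiny ripple-carry adder of the kind an ALU is built around.

```python
def nand(a: int, b: int) -> int:
    """NAND gate: the universal primitive, itself built from transistors."""
    return 0 if (a and b) else 1

def xor(a, b):
    # XOR assembled from four NAND gates
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def full_adder(a, b, carry_in):
    """One-bit full adder: every gate here reduces to NANDs above."""
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    carry_out = or_(and_(a, b), and_(s1, carry_in))
    return total, carry_out

def add4(x, y):
    """Chain four full adders into a 4-bit ripple-carry adder."""
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result & 0xF, carry

print(add4(0b0101, 0b0011))  # 5 + 3 -> (8, 0)
```

Every level of this little stack is fully understood, which is what makes the chip such a useful benchmark: any method that claims to recover such hierarchies from data can be checked against the ground truth.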
The data was analysed thoroughly along many lines, and some insights pointed to better uses of dimensionality reduction as a tool. One approach, connectomics, produced "superficially impressive" results but still came nowhere near an understanding of how the processor really works. Lesion studies probe the causal effect of removing a part of the system; they showed that some transistors are essential for a given game while others are not, but this is only indirectly indicative of a transistor's role and is unlikely to generalise to other games.
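The lesion logic can be illustrated with a toy simulation (all unit counts and dependencies below are hypothetical, not data from the study): knock out one "transistor" at a time and record which behaviours break.

```python
import random

# Toy lesion study (hypothetical model, not the actual 6502 experiment).
# A "chip" is a set of units; each "game" depends on a random subset.
# Lesioning removes one unit and we test which behaviours still run.

random.seed(0)
N_UNITS = 20
GAMES = {
    "donkey_kong": set(random.sample(range(N_UNITS), 8)),
    "space_invaders": set(random.sample(range(N_UNITS), 8)),
    "pitfall": set(random.sample(range(N_UNITS), 8)),
}

def runs(game: str, lesioned: int) -> bool:
    """A game 'boots' only if none of its required units are lesioned."""
    return lesioned not in GAMES[game]

# For each unit, record which games fail when it is removed.
for unit in range(N_UNITS):
    broken = [g for g in GAMES if not runs(g, unit)]
    if broken:
        print(f"unit {unit:2d} breaks: {', '.join(broken)}")
```

The trap the paper highlights falls out of even this toy version: discovering that lesioning one unit breaks only Donkey Kong does not mean that unit implements Donkey Kong, any more than a lesioned brain region implements the behaviour it disrupts.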
The researchers concluded, “The microprocessor may help us by being a sieve for ideas: good ideas for understanding the brain should also help us understand the processor. Ultimately, the problem is not that neuroscientists could not understand a microprocessor, the problem is that they would not understand it given the approaches they are currently taking.”