As artificial intelligence approaches the heights of human capability, parallel studies are exploring how and why it does so. Scientists and AI enthusiasts alike are digging into the uncharted territory of AI’s actual intelligence by comparing it to the human mind.
Most studies of the mind’s abstract thought process take either a philosophical or a psychological perspective. However, these perspectives do not fully apply to AI systems, which lack two significant elements — emotions and a human developmental process. Abstract thought stems from both of these elements, and researchers are now attempting to explain the phenomenon through AI itself. Areas of AI like deep learning could hold the key to uncovering abstract thinking, if their potential is fully realised.
Deep Learning And Abstraction
Dr Cameron Buckner, a cognitive scientist and faculty member at the University of Houston, posits in a recent paper that the exceptional performance of convolutional neural networks is rooted in hierarchical processing of representations derived from sensory experience, in line with the empiricist theory of mind. He labels this entire flow ‘transformational abstraction’.
“Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to “nuisance variation” in the input. Reflecting upon the way that DCNNs (CNNs) leverage a combination of linear and non-linear processing to efficiently accomplish this feat allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind.”
This powerful ability of a neural network is what makes it an ideal candidate for recognising abstractions. As the example shows, empiricism is merged with the workings of a CNN to establish both a philosophical and a psychological perspective. However, Dr Buckner suggests that CNNs face setbacks along three fronts:
- A weak unsupervised training model
- Cell type in neuroanatomy
- Adversarial examples
Unless these setbacks are minimised, analysing abstract thought with neural networks remains difficult.
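The interplay of linear and non-linear processing that Dr Buckner highlights can be illustrated with a minimal NumPy sketch (the kernel, images and layer sizes below are hypothetical choices for illustration, not from his paper). A linear convolution detects a feature, a non-linear ReLU keeps only its positive evidence, and max-pooling discards exact position — so a slightly shifted input maps to the same abstract representation, i.e. the output becomes tolerant to “nuisance variation”:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution: the linear step of the pipeline."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Non-linear step: keep only positive feature evidence."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the max in each size x size block,
    discarding exact feature position."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    x = x[:h, :w]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A hand-set vertical-edge detector (hypothetical, not learned).
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])

# An 8x8 image with a vertical bar, and a copy shifted one pixel right.
img = np.zeros((8, 8)); img[:, 2] = 1.0
shifted = np.zeros((8, 8)); shifted[:, 3] = 1.0

f1 = max_pool(relu(conv2d(img, kernel)))
f2 = max_pool(relu(conv2d(shifted, kernel)))
print(np.array_equal(f1, f2))  # the two inputs share one pooled representation
```

Stacking many such layers, each tolerating a little more variation than the last, is the hierarchical “transformational abstraction” the paper describes; this toy version only shows a single stage of it.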
Only further research can tell how far this area can scale abstract thinking. As of now, deep learning has limitations, such as the massive amounts of data required for complex tasks. On top of this, how deep learning itself works remains poorly understood. Shedding light on this uncertainty might also help in deciphering human thought.
A Mathematical Connection
The neural network example is just one side of the equation. What if abstract thoughts could be manipulated mathematically? Mathematicians themselves have long held that many of the subject’s foundations were conceived largely through intuition. This interlink between mathematics and thinking could also be used to capture abstractions.
Thus, abstract thinking stands not only on philosophical and psychological perspectives but also has a mathematical side. This is part of why abstraction is so challenging in AI.
Experiential Knowledge Plays A Crucial Role
Earlier, we mentioned that abstract thought in humans also arises from emotions and experience gathered over the developmental process. As we grow, we accumulate a wealth of experience and react with different emotions. Just as we learn new facts, experience too counts as knowledge. This is exactly where AI hits a pitfall: decoding abstract thoughts can go wrong if AI does not take human experience together with knowledge into account. This is what should differentiate it from a regular computer programme.
Sociologist Brent Cooper succinctly explains why abstraction should not be treated as the computer world’s alone if AI is to be realistic.
“The whole point is that abstraction is ubiquitous and something that absolutely everyone on the planet does; it’s part and parcel of thinking itself. But most people do it poorly, intuitively, and take shortcuts in thinking. Abstraction is explicitly a technique to zoom out to the big picture. Computer scientists, sociologists, and the public all still have a lot to learn from each other in this regard.”
All in all, AI faces many intuition-related challenges on top of its own structural ones. Be it the mathematical link, experiential knowledge or the opacity of deep learning, these factors are just the tip of the iceberg of challenges in building a ‘thinking’ AI.