In most of our articles, we have covered developments in artificial intelligence and machine learning over the last few years. Much of that coverage has been about the impact of these technologies on human lives, mostly in a positive way. Not every advance, however, has been welcomed; some have become subjects of debate, usually through comparison with human thinking or conscience. This article explores AI with respect to human intellect and how it interprets us.
Here’s a thought-provoking question: have we fully comprehended AI systems? The answer is a resounding “no”. Even though extensive research has opened up countless possibilities, we still lack a complete understanding of intelligent systems.
This brings us to Polanyi’s Paradox. In this article, we will examine where we stand in understanding our own cognition, and what that implies for AI.
What Is Polanyi’s Paradox?
Michael Polanyi, a Hungarian-British polymath, explored ‘tacit knowing’ in human knowledge in his 1966 book The Tacit Dimension. In it, he argues that our ability to perform certain tasks is gained through the experience of doing them, and that we cannot explain this ability in detail. For example, we recognise faces without knowing how we do it.
This ‘tacit’ knowledge is different from explicit knowledge, which can generally be articulated easily. In other words, our knowledge and capabilities often exceed what we can consciously perceive. The book’s central mantra, “we can know more than we can tell”, forms Polanyi’s Paradox.
Polanyi’s Paradox holds that much of our cognition lies beyond explicit knowledge and cannot be accurately expressed in words or pictures.
Can AI Supersede Humans?
Given the technology available today, it is not surprising to see rapid advancements in AI. For many, this has meant drastic and unwelcome change, as routine jobs are replaced by machines through automation, eventually displacing the need for human workers. On the other hand, it has garnered awe from researchers through innovations such as virtual assistants and self-driving cars.
The point here is that AI systems are already outperforming humans in some respects. But will they go beyond these capabilities and emulate humans completely? The answer is “no”, because AI systems have not yet mastered other cognitive abilities.
AI also falls short at an overall intellectual level: it lacks human features such as thoughts and emotions. Applying Polanyi’s Paradox to AI is problematic, since an AI system is itself restricted to knowledge created by humans. Moreover, AI systems are predictable, so the cause of their actions can be traced, unlike in humans, where it often cannot.
In his paper, Polanyi’s Paradox and the Shape of Employment Growth, American economist David H. Autor explains why the human factor remains essential: “The challenges to substituting machines for workers in tasks requiring adaptability, common sense, and creativity remain immense. Contemporary computer science seeks to overcome Polanyi’s paradox by building machines that learn from human examples, thus inferring the rules that we tacitly apply but do not explicitly understand.”
Autor goes on to say that two approaches, environmental control and machine learning, can significantly circumvent Polanyi’s Paradox for machines.
“Humans naturally tackle tasks in a manner that draws on their inherent flexibility, problem-solving capability and judgment. Machines currently lack many of these capabilities, but they possess other facilities in abundance: strength, speed, accuracy, low cost and unwavering fealty to directions. Engineering machines to accomplish human tasks does not necessarily entail equipping machines with human capabilities; instead, work tasks can, in some cases, be re-engineered so that the need for specifically human capabilities is minimised or eliminated.”
A classic example is object recognition of chairs. An algorithm is built to identify chairs based on typical properties such as legs, arms, a seat and a back. A machine running this algorithm will identify chairs only according to those criteria. But not all chairs share the same properties: a beanbag has no legs, a stool has no back, and so on. The machine would fail to recognise these as chairs, whereas humans recognise them through tacit knowledge, without being able to say exactly how. This, again, is Polanyi’s Paradox.
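The failure mode described above can be sketched in a few lines of code. This is a hypothetical illustration only: the feature names and objects below are invented for the sketch, and real object-recognition systems work on images rather than hand-written feature sets.

```python
# A toy rule-based "chair" classifier illustrating Polanyi's Paradox:
# the rules encode explicit knowledge (legs + seat + back), so anything
# that departs from those rules is rejected, even if a human would
# instantly call it a chair.

def is_chair_rule_based(obj):
    """Return True only if the object has legs, a seat and a back."""
    required = {"legs", "seat", "back"}
    return required.issubset(obj["features"])

objects = [
    {"name": "dining chair", "features": {"legs", "seat", "back", "arms"}},
    {"name": "stool",        "features": {"legs", "seat"}},  # no back
    {"name": "beanbag",      "features": {"seat"}},          # no legs, no back
]

for obj in objects:
    print(obj["name"], "->", is_chair_rule_based(obj))
# Only the dining chair passes; the stool and beanbag are rejected,
# although humans recognise both as chairs without being able to
# articulate the rule they are applying.
```

This is exactly the gap Autor describes: modern machine-learning systems sidestep it by learning from labelled human examples instead of relying on explicitly stated rules.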
To summarise, AI systems have yet to come close to humans where Polanyi’s Paradox is concerned. Whether the paradox can be overcome at all is still debated. Even if it can, it may take a very long time for AI to completely emulate humans and counter such paradoxical theories.