Developments in artificial intelligence have grown rapidly over the past few years, and your smartphone is a good example. Personal assistants like Siri, Alexa, or Google Assistant, present in most of today's smartphones, stretch the definition of 'smart' in these devices. Why? Because they are driven by AI. In fact, day by day, these assistants improve in one way or another to provide an enriching, personalised experience.
As this example shows, AI systems are built on logical ideas and information, just like computers. Researchers have been exploring every facet of the 'intelligence' part ever since the inception of AI, and subjects like reasoning and logic in AI remain hot topics. In this article, we look at the role of analytical reasoning in AI.
Logic drives AI systems; in some cases it merely supports the AI implementation, while in others it dominates completely. Take the personal assistants mentioned above: an assistant takes information from its users and decides the next action or response logically. Information is key here, because anything new or irrelevant fed to the AI falls outside the pre-set logic of the system.
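To make the 'pre-set logic' point concrete, here is a toy sketch of how an assistant might map user input to a response. The keywords and action names are invented for illustration (real assistants use far richer NLP models): any input outside the hand-written rules falls through to a generic fallback.

```python
# Toy intent matcher: keywords and actions are hypothetical.
INTENT_RULES = [
    ({"weather", "rain", "forecast"}, "fetch_weather"),
    ({"alarm", "wake"}, "set_alarm"),
    ({"play", "music", "song"}, "play_music"),
]

def decide_action(utterance):
    """Return the first action whose keywords overlap the input words."""
    words = set(utterance.lower().split())
    for keywords, action in INTENT_RULES:
        if words & keywords:          # any keyword matched
            return action
    return "fallback_search"          # new/irrelevant input escapes the logic

print(decide_action("wake me at 7"))      # set_alarm
print(decide_action("tell me a joke"))    # fallback_search
```

Notice that the 'intelligence' lives entirely in the rules the designers wrote; the program never reasons about an utterance it was not prepared for.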
But can AI systems make sense of information? Yes, of course; as you might guess, machine learning is the answer. However, ML itself has to be taught to find patterns and draw insights from data. For now, it does not deduce through analytical reasoning, the kind that comes naturally to us humans.
We analyse information and come to conclusions (not always correct!) even when we lack knowledge of the subject, through deductive reasoning, a part of analytical reasoning. Imparting this to an AI system is very difficult, since it requires emulating the human mind itself.
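As a concrete illustration, deduction of this rule-following kind can be sketched as forward chaining over explicit rules. The facts and rules below are invented for the example; a real reasoner would be far more general.

```python
def forward_chain(facts, rules):
    """Apply rules of the form (premises, conclusion) until no new
    fact can be deduced; a bare-bones model of deduction."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Classic syllogism: all humans are mortal; Socrates is human.
rules = [({"Socrates is human"}, "Socrates is mortal")]
print(forward_chain({"Socrates is human"}, rules))
```

The catch is exactly the one discussed above: the machine only reaches conclusions its rules encode, whereas a human can deduce plausibly even about an unfamiliar subject.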
Emulating the Human Mind
For AI to be analytical, it has to grasp the nuances of the human mind in full. Although AI areas like ML, deep learning and even natural language processing (NLP) can master parts of this for a machine, the machine will still lack the power of human intuition. Why? In an earlier article, we discussed how machines can arrive at solutions through reasoning, but only with pre-set logic and inferences. A machine simply takes information and processes it, with no actual 'thinking'. On top of this, it remains to be seen why AI behaves this way.
In the early 2000s, a hypothetical (and controversial) technique called 'mind uploading' emerged, in which an AI would emulate our thinking wholly. Many research studies on brain emulation followed; however, most of them remained purely theoretical rather than moving towards actual development.
The essence here is that, for a machine to be capable of analytical reasoning, it should be able to deduce without extensive knowledge. For now, AI systems are still a long way from this.
AI pioneer John McCarthy rightly said, "We shall therefore say that a program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows."
Can Machines Really 'See' Through Data?
In humans, analytical reasoning generally starts from observing patterns, data, or text. For example, if we see looming dark clouds, we conclude that it is going to rain soon. Machines can predict this too, but only with information fed to them, say, by being trained on datasets containing pictures of dark clouds. Operating in real time (like online algorithms), a machine may not comprehend the impending rain at all.
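The 'trained on pictures of dark clouds' idea can be caricatured with a tiny threshold classifier. The brightness feature, the threshold learner, and the data below are all invented for illustration; the point is only that the machine learns nothing beyond the association it is shown.

```python
# Each sample: (average sky brightness 0..1, did it rain?)
training = [(0.15, True), (0.2, True), (0.3, True),
            (0.7, False), (0.8, False), (0.9, False)]

def fit_threshold(samples):
    """Pick the brightness cutoff that best separates rain from no-rain."""
    best_t, best_acc = 0.5, 0.0
    for t in (b for b, _ in samples):
        acc = sum((b <= t) == rained for b, rained in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

t = fit_threshold(training)

def predict_rain(brightness):
    return brightness <= t        # darker sky -> rain, per the training data

print(predict_rain(0.25))  # True: resembles the dark-sky examples
```

A human reasons about clouds, humidity and season; this model 'knows' only one learned cutoff, and a situation its training never covered (fog, dusk, an eclipse) will mislead it.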
Machines seem to fail here because they do not grasp the other facets of reasoning, cognition and perception needed for dynamic situations like the one above. Still, believe it or not, machine reasoning is growing today, and researchers are trying to infuse context beyond the raw data. In the near future, machines may 'see' what is happening just as we do.
Bringing reasoning fully into machines has been a prime focus of late. One study by Google's DeepMind even explored abstract reasoning in neural networks, inspired by IQ tests. Speaking of their model, the researchers say, "When we vary the way in which the test questions and training data differ, we find that our model is notably proficient at certain forms of generalisation, but notably weak at others. We further show that the model's ability to generalise improves markedly if it is trained to predict symbolic explanations for its answers."
As mentioned earlier, context is the prime factor here. If this work succeeds and is implemented in real systems, machines could soon ace analytical reasoning just like us.