Currently heading the AI Lab at Cartesian Consulting, Ramasubramanian Sundararajan (RS) has over 15 years of work experience in applying machine learning and data mining techniques to real-world problems across various sectors. Given his diverse experience, Analytics India Magazine got in touch with Sundararajan to get his insights on how artificial intelligence is affecting human lives. Here are his interesting views.
Analytics India Magazine: What are some of the practical implementations of AI that have revolutionised the way humans are functioning? Would you like to elaborate on some AI-based products that you rely on the most?
RS: Revolutionising is a strong word. I don’t think we’re there yet. However, I can think of some that have quietly burrowed their way into our daily lives.
Great examples of the use of complex technologies are those that are transparent in their outcomes, to the point where the people who use them hardly think about them. The auto-complete feature we use while texting on a smartphone is one example. (In fact, lately, it seems like every smartphone advertises every feature it has as being AI-enabled!)
Of course, given that I head an AI lab, AI affects my life in a very basic way: it pays the bills!
AIM: What are some of the AI products offered by the company? Please highlight some use cases.
RS: Right now, much of what we regard as AI comes specifically from machine learning. For a number of years now, we have been incorporating machine learning in our projects as appropriate.
However, in recent years, our focus has intensified, especially through a couple of solutions we are developing: Kyte, a tool to improve marketing communications; and Solus, a platform to enable the Segment of One strategy. Our recent work in this field has focused on using NLP to parse the content and style of marketing communications, building better recommender systems, etc.
AIM: What are the areas of life or employment where you would like AI to be more involved (e.g. medicine, mental health care)?
RS: Healthcare is by far the biggest one I can think of. This is an area where AI can cut down the cycle time on critical tasks and bridge the gap between the need for quality healthcare and the availability of trained personnel. Advanced computer vision algorithms to annotate diagnostic images, NLP to trawl the ever-growing corpus of scientific literature to identify relevant research work, intelligent home care systems for the elderly… The potential is immense.
AIM: Will AI take away the creative thinking and downgrade the intellectual quotient of humans?
RS: To begin with, most AI applications are likely to focus on automating the more structured cognitive tasks that don’t require much creative thinking. So it can be argued that AI focuses human efforts on less routine, more creative aspects.
Having said that, we might also have a scenario where a person doing a job that sometimes requires a bit of creativity is replaced by a machine, because someone somewhere decided that the automated solution was cheaper and more scalable. It’s the law of unintended consequences.
AIM: Many experts have warned against AI taking over every aspect of our lives. How true is their fear?
RS: Every aspect? No. Many aspects? Yes.
As with every major technological advance, there is a stage where you can choose not to use it, and then a stage where it’s easier to use it, and then one where it’s no longer viable not to use it.
And while I understand the fear that comes with this thought, I also think that we will learn to deal with it over the years, just as people learnt to deal with the idea that computers would do some of the things they did manually. The AI of today is yet another step in that evolution.
AIM: With voice-based assistants, facial recognition and other AI-based tech being rampant, how will user privacy and data be jeopardised?
RS: It’s a slippery slope.
I am inclined to think that many of these ideas arise from a desire to help people, but you can’t build AI solutions without data and lots of it at that. A lot of them probably don’t need to know who you are specifically, but the risk of privacy being lost does exist.
The ideal scenario is for end users to take the effort to understand what they’re signing up for and what data they’re giving up when they subscribe to something. If there are still enough people who are willing to provide their information and get certain services in return, excellent.
However, I understand that as an end user, keeping track of what information I am providing where, and understanding the implications of who might get to see and use it, can be exhausting. Forget making AI models interpretable – making this intelligible might be the next big challenge!
The law will also step in and tell companies what data they are entitled to have – we’re already seeing that.
AIM: How has the adoption of AI in India been?
RS: Mixed, and understandably so.
There’s a tremendous amount of curiosity right now because senior leaders read about the marquee applications and wonder if their own pot of gold also resides at the end of this rainbow. But these are complex systems, and finding out whether or not there is a sweet spot for such a system within one’s organisation takes a lot of work.
I would say that, not just in India but the world over, businesses are still in discovery mode as far as AI is concerned. It will take us a while to discover sustainable value.
AIM: What are the changes in policies and infrastructure that you would like to see for the better adoption of AI?
RS: In the private sector, I think AI will find its own equilibrium state. However, there may be underserved problems with implications for public welfare, in areas like agriculture and healthcare, that could use cutting-edge research involving AI.
The AI ecosystem needs a number of things to flourish: problems whose complexity cannot be addressed through traditional means, lots of data, tremendous computing power, and advanced algorithms. On complex problems impacting public welfare, I think the government could make a big difference by creating and publicising large data sources, inviting participation from AI practitioners through crowdsourcing and startup incubation/support etc. My understanding is that they are already making some strides in this direction, and one hopes to see a lot more in the years to come.
AIM: What are some of the other challenges that come up along the way while adopting AI?
RS: The challenges are plenty.
First, the point is not to adopt AI but to adopt a useful solution that happens to involve AI. It is important to ask why before asking how.
Second, every model resides in an ecosystem that contains data that feeds into it, decision makers who use it, as well as other models it works in concert with. The more complex the model, the more complex the job of fitting it into this ecosystem.
Third, and this is a macro perspective, funding for AI-powered solutions within organisations will likely ebb and flow with the general fortunes of AI in the business world. This means that a good AI project might get nixed simply because it was conceived in the wake of a high-profile AI project that didn’t work.
Fourth, there might be regulatory constraints imposed on who can use what data and how, and this might result in a few AI-powered solutions becoming simply unviable.
AIM: Will AI be capable of exhibiting emotions or getting a conscience? Will that be useful or harmful for us?
RS: Given what we know of AI in its current stage of evolution, probably not.
Machine learning isn’t concerned with identifying an underlying process so much as mimicking it. The implication is that the current approach to AI might not lead to a computer program that has emotions, but it can plausibly lead to one that mimics emotions well enough that we can’t tell the difference most of the time.
I suspect that the term “Artificial Intelligence” conjures up the idea of a sentient being made of bits and bytes in most people’s minds. To the extent that I understand it, and I will admit that I am probably not au fait with the latest developments in the field, AI isn’t there yet.
The augmented reality system inside the Terminator’s eye? We can have that now. Skynet? Not yet, I think.