AI advances in healthcare are nothing new. What’s new is Deep Learning models diagnosing diseases with greater accuracy, and research papers claiming diagnoses as good as a physician’s. From Deep Learning models that can detect suicidal tendencies to a Deep Learning algorithm developed by AI scientist Sebastian Thrun and his Stanford University team that can detect cancerous skin lesions as accurately as a leading dermatologist, DL has taken over diagnostic evaluations.
So, what’s driving the explosion of Deep Learning in healthcare? Broadly speaking, three main areas have fueled AI growth: a) huge volumes of healthcare data (thanks to the rapid digitization of medical records and EHRs); b) the rise of GPUs, which puts the power of deep learning in the hands of data scientists and researchers; c) falling costs, since training and running Deep Learning models, once prohibitively expensive, now cost a fraction of what they once did.
Deep Learning, in particular CNNs, plays a big role in medical imaging
According to Dr Dave Chanin, Founder and President of Insightful Medical Informatics, the value of deep learning systems in healthcare lies solely in improving accuracy and increasing efficiency. He attributes the current interest in applying deep learning to healthcare to web giants Google and IBM, which are leveraging unsupervised learning techniques to yield accurate results.
Also, the explosion of DL is seen not so much in consumer-facing applications as in imaging and informatics, where algorithmic learning is applied to swathes of medical data, including images. The interest can also be attributed to Convolutional Neural Networks (CNNs), which have been used in the field of computer vision for decades; their deep architecture, which enables multiple levels of abstraction, is now being leveraged for medical image analysis. So why are CNNs ubiquitous in medical image analysis, and why have they become the go-to methodology for analyzing medical images?
The multi-stream architecture of CNNs can accommodate multiple sources of information or representations of the input, in the form of channels presented to the input layer. Since segmentation is the most common task in medical image analysis, CNNs can be applied to “every pixel in an image, using a patch or subimage centered on that pixel or voxel, and predicting if the pixel belongs to the object of interest”, this research paper notes. Another major advantage is that CNNs pre-trained on natural images show good results, sometimes even challenging the accuracy of trained physicians in some tasks. Researchers have gone a step further to show that CNNs can be adapted to leverage the intrinsic structure of medical images.
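The patch-based segmentation idea quoted above can be sketched in a few lines. This is a minimal illustration, not the actual pipeline of any paper cited here: the `classify` argument stands in for a trained CNN, and the brightness-threshold classifier used below is a hypothetical placeholder.

```python
import numpy as np

def extract_patch(image, row, col, size=5):
    """Return the size x size patch centred on (row, col), zero-padded at borders."""
    half = size // 2
    padded = np.pad(image, half, mode="constant")
    return padded[row:row + size, col:col + size]

def segment(image, classify, size=5):
    """Label every pixel by classifying the patch centred on it."""
    mask = np.zeros(image.shape, dtype=int)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            mask[r, c] = classify(extract_patch(image, r, c, size))
    return mask

# Stand-in classifier: in practice this would be a trained CNN's prediction.
bright = lambda patch: int(patch.mean() > 0.5)

image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0          # a bright square as the "object of interest"
mask = segment(image, bright)
```

In a real system the per-pixel loop is far too slow; fully convolutional networks achieve the same effect in a single forward pass, but the patch-centred view above is the conceptual starting point.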
Besides hardware development, the wide availability of open-source packages and GPU-computing interfaces such as OpenCL and CUDA has fueled the popularity of CNNs in medical imaging. There are also huge volumes of training data with which to build Deep Learning-based medical imaging software: according to IBM researchers, medical images account for at least 90 percent of all medical data, making them the largest data source in the healthcare industry.
Industry Impetus — M&As & Partnerships abound
Last year, chipmaker Intel hit the headlines with its purchase of deep-learning startups Nervana Systems and Movidius, the latter promising low-power chips that can run neural networks. Movidius, a California-based vision-processor startup, has a mobile-friendly system that makes it feasible to run neural networks in more places; its Fathom USB sticks can run visual neural nets and will be extremely useful for researchers at universities. Meanwhile, Nervana Systems wants to put Deep Learning in the cloud.
Another company making huge strides in healthcare is IBM, which in 2015 acquired Merge Healthcare, a leading provider of medical image handling and processing, interoperability and clinical systems, to tackle the lack of medical image data. Merge’s technology platform is used at more than 7,500 U.S. healthcare sites, as well as at many of the world’s leading clinical research institutes and pharmaceutical firms, to manage a growing body of medical images, giving IBM access to a ready repository of training data. Now part of IBM’s Watson Health business unit, Merge helped the company bolster its ability to analyze and cross-reference medical images against the 315 billion data points that already exist in the Watson Health Cloud, including lab results, electronic health records, genomic tests, clinical studies and other health-related data sources. Today, IBM is making great efforts in diagnosing cancer and tracking tumor development.
M&As aside, leading healthcare companies are forging partnerships to bolster development. San Francisco-based cloud medical imaging startup Arterys tied up with GE Healthcare to combine its quantification and medical imaging technology with GE Healthcare’s magnetic resonance (MR) cardiac solutions. The startup provides better visualization and quantification of blood flow inside the heart, alongside a comprehensive diagnosis of cardiovascular disease. In an industry first, it also received FDA clearance to use deep learning and cloud computing in a clinical setting: Arterys Cardio DL provides automated, editable ventricle segmentations based on conventional cardiac MRI images that are as accurate as segmentations performed manually by experienced physicians.
AI is set to have its biggest impact on diagnostics, turning physicians into specialists who look at filtered cases rather than generalists who must attend to low-priority ones. In a way, Deep Learning will help pave the way for AI-aided medical care. From models trained to diagnose diabetic retinopathy to those vetting tumors, DL-based solutions are expanding the scope of radiology by predicting diseases at human-level accuracy. India is not far behind on this curve.
Bangalore-based AI startup SigTuple, co-founded by Apurv Anand, Rohit Kumar Pandey and Tathagato Rai Dastidar in 2015, leverages recent advances in Deep Learning to improve diagnostics by processing and analysing visual medical data. The startup has built algorithms that learn from medical data and help doctors by automating disease screening and diagnosis, and it provides access to these algorithms through low-cost diagnostic devices and a cloud-based intelligent platform. The startup has gained a lot of traction amongst investors and media for its powerful intelligent screening.
Another startup, Qure.ai, based in Bangalore and San Francisco, has been hailed as having some of the most promising technology in India. It applies Deep Learning to medical imaging data, reducing physicians’ workload and giving them more face time with patients, and is building a deep learning system that will diagnose abnormalities from medical images. In a blog post, the startup notes that most deep learning models in this space are classification models that predict a probability of abnormality from a scan. However, a probability score alone doesn’t mean much to a radiologist if it isn’t accompanied by a visual interpretation of the model’s decision. The startup has made great strides in automatically identifying tumours and lesions in brains from MRI scans; in fact, the Qure.ai team placed third in the Brain Tumour Segmentation (BRATS) challenge at MICCAI 2016. It is also developing brain segmentation algorithms, also known as multi-atlas segmentation algorithms.
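One simple way to produce the kind of visual interpretation described above is occlusion sensitivity: mask each region of the scan in turn and measure how much the model’s abnormality score drops, so that regions the model relies on light up in a heatmap. The sketch below is illustrative only and is not Qure.ai’s method; the `score` function is a hypothetical stand-in for a trained classifier’s predicted probability.

```python
import numpy as np

def occlusion_map(image, score, patch=4):
    """Occlusion sensitivity: zero out each region and record the score drop."""
    base = score(image)
    heat = np.zeros(image.shape)
    for r in range(0, image.shape[0], patch):
        for c in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[r:r + patch, c:c + patch] = 0.0
            heat[r:r + patch, c:c + patch] = base - score(occluded)
    return heat

# Stand-in scorer: in practice, the CNN's predicted abnormality probability.
score = lambda img: img.mean()

image = np.zeros((8, 8))
image[0:4, 0:4] = 1.0          # the "abnormality" lives in the top-left corner
heat = occlusion_map(image, score)
```

Overlaying `heat` on the scan gives the radiologist a rough picture of which regions drove the prediction, which is the kind of accompaniment to the raw probability score that the blog post calls for.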
Early Deep Learning Pioneers
Medical imaging startups have gained a lot of traction, and there is frenetic M&A activity in this space. According to Signify Research, total investment in medical imaging AI startups since 2014 is pegged at $167 million. Around half of these startups are building applications for multiple body areas, while the rest focus on specific clinical specialties such as pulmonology, breast and cardiovascular imaging. Leading AI medical imaging startups include Pixyl, Viz, Zebra Medical Vision, VoxelCloud, AIdoc and Aidence, among others.
The pioneer in deep learning medical imaging, however, is Australian company Enlitic, which leverages proprietary algorithms to improve the speed and accuracy of healthcare diagnosis. Founded in 2014, the company is regarded as an early pioneer in using Deep Learning for tumor detection, and its algorithms have been used to detect tumors in lung CT scans. According to CEO Jeremy Howard, the young company has also developed an algorithm that can identify relevant characteristics of lung tumors with a higher accuracy rate than radiologists.