Image analytics is the automatic, algorithmic extraction and analysis of information found in images through digital image processing techniques. With the explosion of image data, which makes up about 80 percent of all unstructured big data, there is a growing need for analytical systems that can interpret images, converting unstructured visual data into a machine-readable format. Case in point: bar codes and QR codes are simple, popular examples of image analytics that we encounter in our day-to-day lives.
Today, AI-enabled smartphones are outfitted with facial recognition apps, another high-profile example of image analytics. Other commercial applications include applying machine learning algorithms to medical imaging diagnosis and object recognition in self-driving technology. Image analytics is also applied to social media monitoring, for analyzing brand logos, UGC images and rich media formats.
Some of the areas where image analytics is playing a huge role:
- Medical image analysis
- Detecting species of animals or plants
- Retail and ecommerce
- Social media monitoring
Breaking Down Image Analytics
According to Fritz Venter and Andrew Stein, the main purpose of image analytics is to convert the unstructured form of images and videos into a machine-analyzable representation of a set of variables, that is, analytically prepared data. In the first step, images are segmented into structured elements and prepared for feature extraction, or, as Stein and Venter put it, the identification of low-level features in the image. In the second transformation step, the application detects the relationships between the features, variables and time. The third transformation step is the extraction of variables with time-stamped values. In an image processing application, a variable is represented by a series of values related to an entity (for example, emotion or customer sentiment). Each value is time-stamped, which makes it possible to treat a variable as a time series. Essentially, image analytics transforms the image input, adds value and creates a rich set of time series as analytically prepared data output, note Stein and Venter.
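The idea of turning per-frame values into a time series can be sketched in a few lines. This is a minimal illustration, not Stein and Venter's implementation: the frames and the `mean_intensity` extractor below are hypothetical stand-ins for a real low-level feature extractor.

```python
from datetime import datetime, timedelta

def build_time_series(frames, extract_value):
    """Turn (timestamp, frame) pairs into a timestamped variable series."""
    return [(timestamp, extract_value(frame)) for timestamp, frame in frames]

# Hypothetical stand-in for a real feature extractor: here a "frame" is
# just a flat list of pixel intensities and the variable is their mean.
def mean_intensity(frame):
    return sum(frame) / len(frame)

start = datetime(2018, 1, 1, 12, 0, 0)
frames = [(start + timedelta(seconds=i), [i * 10, i * 10 + 2]) for i in range(3)]

series = build_time_series(frames, mean_intensity)
for ts, value in series:
    print(ts.isoformat(), value)
```

The output is exactly the "set of variables with time-stamped values" the third transformation step describes, ready for downstream time-series analysis.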
Image segmentation is the process of partitioning an image into a collection of connected sets of pixels. Image segmentation helps to identify certain features in the image.
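The "connected sets of pixels" definition above can be demonstrated with a toy connected-components labeller. This is a simplified sketch on a hand-made binary grid; production systems use far more sophisticated segmentation (thresholding, clustering, deep networks).

```python
from collections import deque

def segment(binary_image):
    """Label 4-connected regions of foreground (1) pixels.

    Returns (labels, count): a grid where background pixels are 0 and
    each connected region gets a distinct label 1..count.
    """
    rows, cols = len(binary_image), len(binary_image[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if binary_image[r][c] == 1 and labels[r][c] == 0:
                count += 1
                labels[r][c] = count
                queue = deque([(r, c)])
                while queue:  # breadth-first flood fill of this region
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary_image[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count

image = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]
labels, n_regions = segment(image)
print(n_regions)  # prints 2: two connected sets of foreground pixels
```

Each labelled region is a candidate "feature" (an object, a character, a logo) that later stages of the pipeline can describe and track.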
Recent Tech Developments in Image Analytics
The Mountain View search giant has made considerable strides in advancing image analytics. Google Goggles was one of the earliest image recognition mobile apps, and Google Lens now builds on that capability.
Google Lens: Unveiled at the 2017 I/O developer conference, this AI-powered computer vision tool helps identify text, locations and landmarks, and can even pull up information from websites efficiently. It also works as a barcode and QR code scanner.
Face ID on iPhone X: The popular face recognition feature, the biggest addition to the latest iPhone, is an enormous leap forward in computer vision.
According to Indian IT bellwether Infosys, the practice of extracting information from images has paved the way for cutting-edge analytics. Global retail giants such as Walmart and Home Depot analyze satellite imagery of parking lots to forecast earnings accurately from raw data that can be trusted. Similarly, on a larger scale, analysts can predict how the world's economy will shape up through satellite images of oil storage and agriculture patterns.
Popular Image Analysis Tools For Developers
Today, startups and developers have access to a host of powerful computer vision APIs that allow them to efficiently extract, identify, tag and train on visual content with powerful machine learning algorithms. For startups with large-scale image processing workloads that don't want to invest in proprietary technology, these tools are the best way to develop image recognition capabilities in house. All the tools have a basic free tier for a specific period or volume, after which a paid upgrade is required.
Google Cloud Vision API: Google’s Cloud Vision API is a powerful image analysis service that enables one to understand the content of images and extract high-level features, powered by cutting-edge machine learning models behind an easy-to-use REST API. According to Google, the Cloud Vision API classifies images into thousands of categories, detects individual objects and faces within images, and also effectively finds and reads printed words contained within images. Developers and startups interested in image recognition tasks can start using the API for free for 1,000 images per month.
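As a rough sketch of what calling the REST API involves, the snippet below builds the JSON body for the `images:annotate` endpoint. The request shape follows the public REST documentation; the image bytes here are a placeholder, and an actual call would require a real image plus a valid API key or service-account credentials.

```python
import base64
import json

def build_annotate_request(image_bytes, feature_type="LABEL_DETECTION", max_results=5):
    """Build the JSON body for Cloud Vision's images:annotate REST endpoint."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": feature_type, "maxResults": max_results}],
        }]
    }

# Placeholder bytes; in practice, read them from an image file.
payload = build_annotate_request(b"\x89PNG fake bytes", "TEXT_DETECTION")
print(json.dumps(payload, indent=2))
# POST this body to https://vision.googleapis.com/v1/images:annotate
```

Swapping `feature_type` for values such as `FACE_DETECTION` or `LOGO_DETECTION` selects the other analyses the article mentions.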
Microsoft Azure Computer Vision API: From analyzing images, tagging and identifying content in an image and labeling it, to optical character recognition (OCR), reading text in images and extracting the recognized words into a machine-readable character stream, Microsoft’s Computer Vision API is a powerful set of computer vision tools that gives developers access to advanced algorithms. The Computer Vision API is available for a free trial.
Watson Visual Recognition: With Watson Visual Recognition, one can effectively get insights into visual content by analyzing images for scenes, objects, faces, colors, food and other subjects. As a next step, one can create and train custom image classifiers.
Amazon Rekognition: This service is gaining popularity amongst developers and provides a host of features such as sentiment analysis and scene detection. The consensus is that the global giant offers a more robust suite of facial analysis tools, such as face matching across images, and even drills down to minute details such as face quality and facial comparison.
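To illustrate the sentiment and face-quality details mentioned above, the sketch below summarizes a response in the shape Rekognition's `DetectFaces` operation returns. The `sample` dict is a trimmed, hand-written stand-in, not real API output; a live call would go through boto3's `rekognition` client with `Attributes=["ALL"]` and valid AWS credentials.

```python
def summarize_faces(detect_faces_response):
    """Pull the top emotion and quality scores from a DetectFaces-style response."""
    summaries = []
    for face in detect_faces_response.get("FaceDetails", []):
        # Emotions come back with per-emotion confidence scores.
        emotions = sorted(face.get("Emotions", []),
                          key=lambda e: e["Confidence"], reverse=True)
        summaries.append({
            "top_emotion": emotions[0]["Type"] if emotions else None,
            "brightness": face.get("Quality", {}).get("Brightness"),
            "sharpness": face.get("Quality", {}).get("Sharpness"),
        })
    return summaries

# Hand-written sample in the shape DetectFaces returns.
sample = {
    "FaceDetails": [{
        "Emotions": [
            {"Type": "CALM", "Confidence": 22.1},
            {"Type": "HAPPY", "Confidence": 71.4},
        ],
        "Quality": {"Brightness": 80.5, "Sharpness": 92.2},
    }]
}
print(summarize_faces(sample))
```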
1. Fighting California Drought With IBM Watson Visual Recognition
OmniEarth, an environmental technology company, used the IBM Watson Visual Recognition service to process, clarify and join massive amounts of satellite and aerial imagery with other data sets to provide granular data on water consumption.
- Based on the analysis, the firm identified land parcels where water usage should be scaled back.
- Watson Visual Recognition mined through OmniEarth’s massive data to identify swimming pools in 150,000 parcels in just 12 minutes, a task that would’ve taken hours or days if done manually.
- Based on the insights, water districts were able to make data-backed decisions to scale water usage and make effective recommendations in drought areas.
2. Predicting Grey Goose’s Hidden Audience With Visual Listening
Image analytics is now increasingly leveraged in social media monitoring to find more insights in visual content. With UGC growing every day, brands are augmenting social listening (text mining data from social channels) with visual analysis to get a bigger picture. One of the most common use cases of visual listening is logo identification, a stronger metric of ROI on sponsorships.
Boston-headquartered firm Crimson Hexagon, a leading provider of consumer insights from social media, helped Grey Goose, a leading vodka brand, understand its core audience.
- The Boston company used both image and text data to find the results. The insights from text data, culled from Twitter conversations, showed the brand’s core audience as predominantly male and in the 35+ age category.
- Meanwhile, Twitter photos revealed a hidden audience that was more female (+22%) and fell into a younger age category.
Other applications of image analytics include crop yield counting, object-ID-based dynamic navigation, vehicle tracking with track-loss-and-recover capability, and asset tracking and identification, which is leveraged by logistics startups.