Google’s latest event had no surprises but a slew of new product announcements. The two new Pixel 3 models, the Pixel Slate convertible laptop, and the all-encompassing Google Home Hub (a stiff competitor to the Amazon Echo) came outfitted with a new set of capabilities, and all of Google’s machine learning and artificial intelligence updates revolve around these new releases. In fact, the new Pixel 3 boasts enhancements such as extra lenses to charge ahead of the competition.
One thing is for sure: Google is using ML to improve photography. Rather than relying on hardware alone, the Pixel 3 ships with Night Sight, which uses ML to dramatically enhance photos taken in low light.
The feature debuts on the Pixel 3, and older Pixel models will also receive it through a software update. There is also Photobooth Mode, which uses AI to trigger the shutter on cues such as a smile or a moving subject, along with the ability to adjust the amount of background “bokeh” blur. While the earlier I/O was all about a Gmail that writes itself, a new Google Assistant and Google Home, at the latest event the Mountain View giant announced a new set of hardware enhancements for its smartphones.
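The core idea behind ML-assisted low-light photography can be illustrated with a toy example. This is not Google’s Night Sight pipeline (which is far more sophisticated); it only sketches the underlying principle that merging several noisy short exposures cancels random sensor noise while preserving the scene. All values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((4, 4), 100.0)  # "true" brightness of a dark scene

# Simulate a burst of 15 noisy short exposures (Gaussian read noise)
frames = [scene + rng.normal(0, 20, scene.shape) for _ in range(15)]

# Naive burst merge: averaging frames cancels zero-mean noise
merged = np.mean(frames, axis=0)

single_frame_error = np.abs(frames[0] - scene).mean()
merged_error = np.abs(merged - scene).mean()
print(merged_error < single_frame_error)  # True: the merge is cleaner
```

Averaging N frames shrinks the noise standard deviation by roughly the square root of N, which is why a burst of dim frames can yield a usable low-light shot.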
However, going beyond the hardware announcements, there are a number of AI-related updates to its smartphone line.
New Google Pixel = AI-Powered Software
Smart Compose: Already a feature in Gmail, Smart Compose is an iteration of Smart Reply that helps finish sentences in emails. It will soon roll out to the Pixel 3, where a left-to-right swipe accepts a Smart Compose suggestion. In the coming months, the feature will also support other languages such as Spanish, French, Italian and Portuguese.
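The suggest-then-swipe interaction can be sketched as follows. The phrase table and function names here are invented for illustration; the real Smart Compose uses a large neural language model rather than a lookup table.

```python
# Hypothetical phrase table standing in for a learned language model
COMPLETIONS = {
    "looking forward to": " hearing from you",
    "please find": " the attached document",
    "thank you for": " your time",
}

def suggest(text):
    """Return a suggested completion for the text typed so far, if any."""
    lowered = text.lower().rstrip()
    for prefix, completion in COMPLETIONS.items():
        if lowered.endswith(prefix):
            return completion
    return None

def accept(text, suggestion):
    """Simulate the left-to-right swipe that commits a suggestion."""
    return text + suggestion

draft = "Looking forward to"
tip = suggest(draft)
if tip:
    draft = accept(draft, tip)
print(draft)  # Looking forward to hearing from you
```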
Google Lens: Google’s widely acclaimed Lens uses advanced computer vision and natural language processing, and it is one of the best examples of a consumer-facing AI application: it can identify street signs, restaurant menus and objects in photos, and it powers other Google software such as Assistant and the smartphone camera. Paired with deep learning, this visual search feature is set to become more automated, a news report indicated, ruling out manual activation. It is also becoming more accessible: a long press in the camera app now launches Lens directly. By adding context to the objects around the user, Lens lets people see and make sense of their surroundings more effectively.
Google Duplex: Duplex is one of the most talked-about features on the Pixel phone: it lets users ask Google Assistant to phone restaurants that have no online booking system and make the reservation by voice. Duplex is another consumer-facing example of natural conversation through Google Assistant; to keep conversations natural, it has been trained on a set of narrow domains rather than for open-ended general conversation. Essentially, Duplex is a recurrent neural network (RNN) built using TensorFlow Extended (TFX). For greater accuracy, Duplex’s RNN was trained on a corpus of anonymised phone conversation data. To better understand the parameters of a conversation, Duplex also leverages Google’s automatic speech recognition (ASR) technology, and it uses hyperparameter optimisation from TFX to further improve the model.
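The recurrent structure described above can be sketched in a few lines. This is a minimal toy RNN, not Google’s model: the weights are random rather than learned from conversation data, and the inputs stand in for encoded utterances. It only shows how a hidden state carries context from one conversation turn to the next.

```python
import numpy as np

rng = np.random.default_rng(1)
hidden, features = 8, 4
W_h = rng.normal(0, 0.5, (hidden, hidden))    # state-to-state weights
W_x = rng.normal(0, 0.5, (hidden, features))  # input-to-state weights

def rnn_step(state, x):
    """One recurrent update: the new state mixes old state and new input."""
    return np.tanh(W_h @ state + W_x @ x)

state = np.zeros(hidden)
turns = [rng.normal(size=features) for _ in range(3)]  # encoded utterances
for x in turns:
    state = rnn_step(state, x)  # each reply conditions on all prior turns

print(state.shape)  # (8,)
```

Because the state is threaded through every step, the network’s output at any turn depends on the whole conversation so far, which is what makes an RNN a natural fit for dialogue.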
Call Screen: Another standout feature debuted at the recent event is Call Screen: when a call comes in, tapping the “Screen call” button hands it to the Google Assistant. Positioned as a screening service from Google, it offers suggestion cards/chips such as “Who is this?” or “I’ll call you back” so the user can direct the Assistant’s replies. Call Screen is powered entirely by on-device machine learning, launching on the Pixel 3 before coming to older Pixel devices.
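The chip-selection flow can be sketched as below. The real Call Screen uses on-device ML over the caller’s transcribed speech; this keyword heuristic and the keyword lists are invented purely to illustrate the control flow of matching a transcript to a suggestion chip.

```python
# Hypothetical keyword lists standing in for an on-device classifier
CHIPS = {
    "Who is this?": ("who", "calling", "speaking"),
    "I'll call you back": ("busy", "later", "meeting"),
}

def pick_chip(transcript):
    """Return the suggestion chip whose keywords best match the transcript."""
    words = set(transcript.lower().split())
    best, best_hits = None, 0
    for chip, keywords in CHIPS.items():
        hits = sum(1 for k in keywords if k in words)
        if hits > best_hits:
            best, best_hits = chip, hits
    return best

print(pick_chip("hi this is an unknown number calling"))  # Who is this?
```

Running the classification on-device, as the article notes, means the caller’s audio never has to leave the phone.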