Proprietary knowledge has always been the key to survival in the world of technology. And currently, it’s Apple that has stolen the show with the launch of its iPhone X. Last year, Samsung launched its promising flagship device, the Galaxy Note7, the first mainstream gadget to feature iris recognition. But with the Note7 fiasco, Samsung failed to get back into the game. Despite criticism for not doing much in the AI space, Apple has just changed the game for itself.
How Apple is using machine learning
Developing a smartphone with facial recognition is not easy. By overcoming challenges like the limited size of the device, the launch of Face ID has put Apple in a sweet spot. With Face ID, an iPhone X user simply holds up the device and it recognizes his or her face.
“Nothing has ever been simpler, more natural and effortless,” said Phil Schiller, Apple’s senior vice president of worldwide marketing. “Face ID is the future of how we unlock our smartphones and protect our sensitive information.”
At the core of the face detection ability are machine learning algorithms that learn who you are. Even with small alterations like glasses, makeup or perhaps a beard, the software will still recognise you, and it only gets better at it with time.
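Apple has not published Face ID’s internals, but a common pattern for this kind of recognition is to map each face to an embedding vector and accept a match when a new embedding is close enough to stored ones, enrolling accepted samples so the matcher adapts over time. A minimal, purely illustrative sketch in Python (the vectors, threshold and update rule here are invented for illustration, not Apple’s actual method):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

class FaceMatcher:
    """Toy matcher: stores embeddings of accepted faces and compares new ones."""

    def __init__(self, enrolled, threshold=0.9):
        self.templates = [enrolled]   # embeddings known to belong to the owner
        self.threshold = threshold

    def unlock(self, embedding):
        score = max(cosine_similarity(embedding, t) for t in self.templates)
        if score >= self.threshold:
            # Successful unlocks enrich the template set, so the matcher
            # tolerates glasses, makeup or a growing beard over time.
            self.templates.append(embedding)
            return True
        return False

owner = FaceMatcher(enrolled=[0.9, 0.1, 0.4])
print(owner.unlock([0.88, 0.12, 0.42]))  # small appearance change: accepted
print(owner.unlock([0.1, 0.9, -0.5]))    # very different face: rejected
```

The key design idea this sketches is that recognition is a similarity test against learned templates rather than an exact comparison, which is why minor appearance changes do not lock the owner out.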
Apple has tapped multiple neural networks for this feature, and it doesn’t shy away from discussing how secure Face ID is. While Touch ID carries a 1 in 50,000 chance that a random person could unlock the phone, Face ID cuts that to 1 in 1,000,000. The only thing that can easily foil the software, says Apple, is if you have an identical twin.
The face detection feature is made possible by a suite of dedicated hardware: a dot projector (structured-light transmitter), an infrared camera (structured-light receiver), a flood illuminator (ambient sensor) and a proximity sensor. The iPhone X’s front camera gathers 2D data about the user’s face, while the dot projector casts 30,000 infrared dots onto it when the user looks at the phone.
Schiller said the iPhone X taps a dedicated neural engine in its Apple A11 chip, which accelerates machine learning and artificial intelligence processing for tasks such as the face recognition behind Face ID.
The A11’s graphics processing unit is also well suited to machine learning tasks, and the chip includes an Apple-designed image signal processor that helps the camera take better pictures and autofocus reliably in low light.
Apple bets on augmented reality
Additionally, Apple executives spent much of Tuesday’s event describing how AR is at the core of the new flagship iPhone X. Its new screen, 3D sensors and dual cameras are designed for AR video games as well as more practical uses, such as measuring digital objects placed in real-world spaces.
Months before the launch, Apple released a tool called ARKit that made it easier for developers to add AR capabilities to their apps.
The company is also working on smart glasses that may be AR-enabled, people familiar with the plan told Bloomberg earlier this year.
Apple has been slower to adopt AR technology than Google, which launched an AR software tool called Tango back in 2014. But Apple’s ability to push ARKit to hundreds of millions of new and updated iPhones at once has already persuaded developers to start building AR apps for these devices.
Apple’s earlier machine learning efforts
In June this year, Apple unveiled Core ML, a new machine learning framework API for developers that speeds up how quickly AI tasks execute on the iPhone, iPad and Apple Watch.
Core ML supports a number of essential machine learning tools, including all sorts of neural networks (deep, recurrent, and convolutional), as well as linear models and tree ensembles. Core ML is for on-device processing, meaning the data that developers use to improve user experience won’t leave customers’ phones and tablets.
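Core ML itself is a Swift/Objective-C framework, but the model families it supports are simple to illustrate. Below is a hypothetical on-device scorer combining a linear model with a tiny tree ensemble, written in plain Python; all weights and thresholds are invented for illustration, and the point is that every computation runs locally on the user’s data:

```python
def linear_model(features, weights, bias):
    """Linear model: weighted sum of the features plus a bias term."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def tree_ensemble(x):
    """Tiny hand-written ensemble of two decision stumps."""
    tree1 = 1.0 if x[0] > 0.5 else -1.0
    tree2 = 0.5 if x[1] > 0.2 else -0.5
    return tree1 + tree2

# All inference happens on-device; the raw features never leave the "phone".
features = [0.7, 0.1]
score = linear_model(features, weights=[0.4, -0.2], bias=0.05) + tree_ensemble(features)
print(round(score, 2))  # 0.81
```

This mirrors the privacy trade-off described above: shipping the model to the device means personal data stays there, at the cost of running within the device’s compute budget, which is exactly what the A11’s neural engine is meant to ease.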