Google and Apple have both introduced machine learning software development kits over the last couple of months. Their focus has been on easing the burden of optimising large artificial intelligence and machine learning models and datasets for mobile apps. At this year’s WWDC, Apple’s Senior Vice President of Software Engineering, Craig Federighi, unveiled a new version of the company’s machine learning framework. Last month, Google introduced its own ML Kit for app developers who are not proficient in the area.
In this article, we will take a look at how these machine learning kits stack up against each other:
Apple Core ML 2

Core ML 2 allows easy integration of ML models into apps. The new framework will help developers build intelligent apps with minimal code. During the WWDC 2018 keynote, Federighi said, “Core ML 2 is 30 percent faster thanks to batch prediction, and it can compress machine learning models by up to 75 percent with the help of quantisation.”
Batch prediction is the practice of making predictions for multiple inputs at the same time, for example, identifying four images at once instead of one by one. Quantisation refers to the practice of representing weights and activations in fewer bits during inference than during training.
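The two ideas can be sketched in a few lines of framework-free Python. This is a toy illustration, not Apple's implementation: the function names, the 8-bit scheme and the numbers are all ours. Float weights are mapped to 8-bit integer codes (roughly a 4x size reduction versus 32-bit floats), then a whole batch of inputs is scored in one call.

```python
# Toy sketch of quantisation and batch prediction.
# All names and numbers are illustrative, not Core ML internals.

def quantise(weights, bits=8):
    """Map float weights to integer codes in [0, 2**bits - 1]."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2 ** bits - 1)
    codes = [round((w - lo) / scale) for w in weights]
    return codes, scale, lo

def dequantise(codes, scale, lo):
    """Approximately recover the original floats from the codes."""
    return [c * scale + lo for c in codes]

def predict_batch(weights, inputs):
    """Score a whole batch of inputs in one pass (simple dot products)."""
    return [sum(w * x for w, x in zip(weights, xs)) for xs in inputs]

weights = [0.12, -0.5, 0.33, 0.9]
codes, scale, lo = quantise(weights)      # 8-bit codes instead of float32
approx = dequantise(codes, scale, lo)     # close to the originals
batch = [[1, 0, 1, 0], [0, 1, 0, 1]]      # two inputs scored together
print(predict_batch(approx, batch))
```

The recovered weights differ from the originals by at most one quantisation step (`scale`), which is why compressed models lose little accuracy in practice.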
The first version of Core ML was introduced in June 2017 with the launch of iOS 11. Core ML provides high-performance on-device inference for deep neural networks, recurrent neural networks, convolutional neural networks, support vector machines, tree ensembles and linear models.
The new version of Core ML can update models at runtime from a cloud service such as Amazon Web Services or Microsoft Azure. It also comes with a converter that works with Caffe, Caffe2, Keras, scikit-learn, XGBoost, LibSVM and TensorFlow Lite.
Apple also introduced Create ML, a new GPU-accelerated tool that lets developers easily build machine learning models using Swift and Xcode on their Mac. The tool supports vision, natural language and custom data.
Because it is built on low-level technologies like Metal and Accelerate, Core ML seamlessly takes advantage of the CPU and GPU to provide maximum performance and efficiency. Developers can run ML models on the device so data does not need to leave the device to be analysed, claims Apple.
During the keynote, Craig Federighi also cited an example of how the company Memrise, a website where people learn multiple languages, is using Core ML toolkit. In the past, the developers took 24 hours to train an ML model using 20,000 images. Now, with Create ML and Core ML 2, they reduced this time to only 48 minutes on a MacBook Pro and 18 minutes on an iMac Pro. It also enabled them to reduce their machine learning models from 90 megabytes to 3 megabytes.
Google ML Kit

Last month at the I/O developer conference, Google announced ML Kit for app developers who are not proficient in machine learning. The new software development kit is a cross-platform suite of machine learning tools for its Firebase mobile development platform.
Creating a machine learning model takes plenty of work. First, you have to learn how to use an ML library, acquire training data to teach your neural net a task, and create a model light enough to run on a mobile device. Firebase and ML Kit make the job much easier for developers by exposing common ML features as simple API calls on the Firebase platform.
“Whether you’re new or experienced in machine learning, you can implement the functionality you need in just a few lines of code. There’s no need to have deep knowledge of neural networks or model optimisation to get started,” said Google.
It supports text recognition, image labelling, barcode scanning, face detection and landmark recognition. The SDKs for iOS and Android work both online and offline, depending on the developer’s preference and network availability. The cloud-based version offers higher accuracy in exchange for using some data, while the offline version works even without an internet connection. For instance, the offline version of the API can identify a dog in a picture, while the online version could determine the specific breed.
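A common pattern with such dual-mode SDKs is to prefer the more accurate cloud model and fall back to the on-device one when the network is unavailable. The sketch below, in Python with invented labels and a stand-in network flag (real ML Kit calls go through its iOS and Android SDKs), illustrates the dog-versus-breed trade-off described above.

```python
# Hypothetical sketch of the on-device vs cloud trade-off.
# The label tables and the network flag are invented for illustration;
# they are not part of the actual ML Kit API.

ON_DEVICE_LABELS = {"img_01": "dog"}              # coarse, works offline
CLOUD_LABELS = {"img_01": "golden retriever"}     # finer-grained, needs network

def label_image(image_id, network_available):
    """Prefer the cloud model for accuracy; fall back to on-device."""
    if network_available:
        return CLOUD_LABELS.get(image_id, "unknown")
    return ON_DEVICE_LABELS.get(image_id, "unknown")

print(label_image("img_01", network_available=True))   # finer cloud label
print(label_image("img_01", network_available=False))  # coarse offline label
```

The design choice here is that the app degrades gracefully: it always returns some label, and the quality simply drops when connectivity does.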
The new SDK also builds on three existing Google technologies: TensorFlow Lite, the Google Cloud Vision API and the Android Neural Networks API.
Other ML Kits
Beyond Apple and Google, tech companies like NVIDIA and Intel have also supported AI and machine learning developers for a while with their own deep learning SDKs.
The NVIDIA Deep Learning SDK offers powerful tools and libraries for data scientists to design and deploy deep learning applications. It contains solutions for both neural network training and inference. Introduced in 2016, the SDK requires the CUDA toolkit for building new GPU-accelerated DL algorithms. It includes libraries for deep learning primitives, inference, video analytics, linear algebra, sparse matrices and multi-GPU communications. According to NVIDIA, the kit brings high-performance GPU acceleration to widely used deep learning frameworks such as TensorFlow, Caffe, Theano and Torch.
The Intel Deep Learning SDK allows developers to visualise different aspects of the deep learning process in real time. It supports both local and remote installation and can be installed on a Linux server remotely from Windows or macOS devices. It helps data scientists easily prepare training data, design models and train them with advanced visualisation and automated experiments. The SDK also simplifies the installation and use of deep learning frameworks optimised for Intel platforms.
Which Is Better?
Choosing between Core ML 2, Google ML Kit and the NVIDIA and Intel DL kits mostly depends on the developer’s preferences. Though Google ML Kit is still in beta, it provides many prebuilt machine learning models and APIs to choose from, including APIs for contextual message replies and barcode scanning, and it is available to both Android and iOS developers.
Though Apple says the new version of Core ML has become more efficient, it is not as flexible as Google’s ML Kit. Core ML 2 is not a cross-platform suite: it does not support Android and is only meant for building models for iOS devices. But its Vision API and Natural Language framework make it easy to build apps with on-device face detection, barcode scanning, text analysis and named-entity recognition, among other features. On the other hand, the Intel Deep Learning SDK is a free package of tools that data scientists and software developers can use to experiment with deep learning solutions. The NVIDIA SDK is useful for advanced deep learning researchers as well as applied deep learning practitioners.
Overall, the basic idea is about democratising machine learning and AI. As artificial intelligence and machine learning touch every corner of the industry, the democratisation of these technologies is inevitable.