Core ML

Apple released Core ML at WWDC ’17 and updated it to Core ML 2 this year. As a reminder, Core ML lets developers integrate machine learning models into iOS and macOS apps. It was one of the first big attempts in this field, and developers initially liked it for several reasons.

Core ML is optimized for on-device performance, minimizing a model’s memory footprint and power consumption. Because models run strictly on the device, user data never leaves it, and the app keeps working even without a network connection.

Core ML’s biggest advantage is that it is extremely simple to use: just a few lines of code are enough to integrate a complete machine learning model. Since its release, there has been a flood of innovative projects built on it. However, Core ML has limitations: it can only integrate pretrained models into your app, which means it supports prediction only; no on-device model training is possible.
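To show how little code "a few lines" really is, here is a minimal sketch of on-device image classification with Core ML and Vision. It assumes a `MobileNet` class that Xcode has auto-generated from a `MobileNet.mlmodel` file added to the project; the model name is an assumption, not something prescribed by the framework.

```swift
import CoreML
import Vision
import UIKit

// Classify a UIImage with a bundled Core ML model.
// `MobileNet` is assumed to be the class Xcode generated from MobileNet.mlmodel.
func classify(image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: MobileNet().model) else { return }

    // Vision wraps the Core ML model and handles image scaling/cropping.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

Everything after dragging the `.mlmodel` file into Xcode is shown above; the generated class exposes the model, and Vision takes care of preprocessing.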

Thus far, Core ML has proved to be extremely useful for developers. Core ML 2, announced at WWDC this year, promises up to 30% faster inference through batch prediction, and smaller models through weight quantization.
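Batch prediction lets Core ML 2 process many inputs in a single call instead of looping over individual predictions. A rough sketch, again assuming a generated `MobileNet` model class and pre-built feature providers for each input:

```swift
import CoreML

// Run inference over a whole batch of inputs in one call (Core ML 2 / iOS 12).
// `MobileNet` is an assumed auto-generated model class.
func predictBatch(inputs: [MLFeatureProvider]) throws -> [MLFeatureProvider] {
    let model = MobileNet().model
    let batch = MLArrayBatchProvider(array: inputs)

    // Core ML can schedule the batch more efficiently than one-at-a-time calls.
    let results = try model.predictions(from: batch, options: MLPredictionOptions())
    return (0..<results.count).map { results.features(at: $0) }
}
```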

Create ML

Apple also announced Create ML at this year’s WWDC. Create ML allows developers to train machine learning models within Xcode using Swift and macOS playgrounds. Developers with no machine learning experience can train their own models without depending on machine learning specialists.

Create ML enhances the utility of Core ML by completing the toolchain. Currently, Create ML supports three data types: images, text, and tabular data. For tabular data it offers several training and testing algorithms, such as random forest classifiers and support vector machines. Create ML also reduces the size of trained models and offers a drag-and-drop way to train image classifiers through the Create ML UI in a playground.
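Training in a playground is only a few lines as well. The sketch below trains an image classifier from a folder whose subfolders are named after the class labels; all file paths and the dataset itself are assumptions for illustration.

```swift
import CreateML
import Foundation

// Train an image classifier in a macOS playground (macOS 10.14 / Mojave).
// Directory layout assumption: FlowersTraining/<label>/<images>.
let trainingDir = URL(fileURLWithPath: "/Users/me/Datasets/FlowersTraining")
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir))

// Evaluate on a held-out test set organized the same way.
let testDir = URL(fileURLWithPath: "/Users/me/Datasets/FlowersTest")
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testDir))
print(evaluation)

// Export a compact .mlmodel ready to drop into an Xcode project.
try classifier.write(to: URL(fileURLWithPath: "/Users/me/Flowers.mlmodel"))
```

The exported `.mlmodel` is what Core ML consumes in the app, which is how the two frameworks form a complete package.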

Training a model with Create ML

ML Kit

Firebase announced ML Kit at Google I/O 2018. ML Kit lets developers use machine learning in mobile apps in two ways: running model inference in the cloud via an API, or strictly on-device, just like Core ML.

ML Kit offers six base APIs that are ready to use, with the models already provided: Image Labeling, Text Recognition (OCR), Landmark Detection, Face Detection, Barcode Scanning, and Smart Reply (coming soon). If these APIs don’t cover your use case, you can also upload a TensorFlow Lite model, and ML Kit takes care of hosting and serving it to your app.
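As a concrete example of a base API, here is a sketch of on-device image labeling with the 2018-era ML Kit Swift API. It assumes Firebase has already been configured in the app (`FirebaseApp.configure()` at launch) and that the Firebase ML Vision pod is installed.

```swift
import Firebase
import UIKit

// On-device image labeling with ML Kit's base API (2018-era Firebase SDK).
// Assumes FirebaseApp.configure() has run at app launch.
func labelObjects(in image: UIImage) {
    let labelDetector = Vision.vision().labelDetector()
    let visionImage = VisionImage(image: image)

    // Runs entirely on the device; no network connection required.
    labelDetector.detect(in: visionImage) { labels, error in
        guard error == nil, let labels = labels else { return }
        for label in labels {
            print("\(label.label): \(label.confidence)")
        }
    }
}
```

Switching the same task to the cloud model is mostly a matter of requesting the cloud detector from the `Vision` instance instead, which is what makes the two-tier design convenient.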

The on-device version of ML Kit is less accurate than the cloud version, but it keeps user data on the device. The base APIs cover the most common use cases of machine learning on mobile, and the option to serve custom trained models makes ML Kit a complete machine learning solution for mobile platforms.