A Brief Look Into the Core ML API

MLModel is the class that encapsulates the model.

We will be discussing the following important classes and protocols in the next sections:

MLFeatureValue

MLImageConstraint

MLFeatureProvider

MLBatchProvider

MLUpdateTask

MLFeatureValue

MLFeatureValue acts as a wrapper for the data. The Core ML model accepts its inputs and returns its outputs in the form of MLFeatureValue.

MLFeatureValue lets us directly use a CGImage. Along with that, we can pass the image constraints for the model. It creates the CVPixelBuffer from the CGImage for you, thereby avoiding the need to write helper methods.

The following piece of code creates an MLFeatureValue instance from an image.

let featureValue = try MLFeatureValue(cgImage: image.cgImage!, constraint: imageConstraint, options: nil)

Now let’s look into MLImageConstraint.

MLImageConstraint

MLImageConstraint describes the input image requirements of the model and ensures we feed it a correctly sized image. It contains the input information — in our case, the image size and pixel format.

We can easily retrieve the image constraint object from the model using the following piece of code:

let imageConstraint = model?.modelDescription.inputDescriptionsByName["image"]!.imageConstraint!

We just need to pass the input name (“image”, in our case) to the model description.

MLFeatureProvider

An MLFeatureValue is not directly passed into the model. It needs to be wrapped inside the MLFeatureProvider .

If you inspect the Swift file auto-generated for the mlmodel, you’ll see that the model’s input class conforms to the MLFeatureProvider protocol. To access an MLFeatureValue from an MLFeatureProvider, there is a featureValue(for:) accessor method that takes the feature name.

MLDictionaryFeatureProvider is a convenience wrapper that holds the data in a dictionary format. It requires the input name ("image", in our case) as the key and the MLFeatureValue as the value.

If there is more than one input, just add them all to the same dictionary.
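As a quick sketch (assuming a featureValue created as shown earlier, for a model whose input is named "image"), wrapping the value in an MLDictionaryFeatureProvider and reading it back looks like this:

```swift
import CoreML

// Wrap the feature value in a dictionary keyed by the model's input name.
let inputProvider = try MLDictionaryFeatureProvider(
    dictionary: ["image": featureValue]
)

// Reading it back out goes through the MLFeatureProvider protocol.
if let value = inputProvider.featureValue(for: "image") {
    print(value.type) // .image, in our case
}
```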

MLBatchProvider

This holds a collection of MLFeatureProviders for batch processing.

We can hence run predictions on multiple feature providers at once, or train on a batch of training inputs encapsulated in an MLBatchProvider. In this article, we’ll be doing the latter.

MLArrayBatchProvider is a concrete implementation of MLBatchProvider that wraps an array of feature providers.
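For example, assuming we have already built an array of feature providers (one per training example — the featureProviders name here is hypothetical), constructing the batch is a one-liner:

```swift
import CoreML

// featureProviders: [MLFeatureProvider], built earlier, one per training example.
let trainingData = MLArrayBatchProvider(array: featureProviders)

// The batch exposes how many training examples it holds.
print(trainingData.count)
```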

MLUpdateTask

An MLUpdateTask is responsible for updating the model with the new training inputs.

Required parameters

Model URL — The location of the compiled model (mlmodelc extension).

Training data — An MLArrayBatchProvider.

Model configuration — Here we pass an MLModelConfiguration. We can use the existing model’s configuration or customize it. For example, we can force the model to run on the CPU, the GPU, and/or the Neural Engine.

Completion handler — It returns the context, from which we can access the updated model. Then we can write the model back to the documents directory.
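As a minimal sketch of the configuration step, here is how one might customize an MLModelConfiguration to restrict execution to the CPU and GPU (skipping the Neural Engine):

```swift
import CoreML

let configuration = MLModelConfiguration()
// Limit Core ML to the CPU and GPU; other options include
// .cpuOnly and .all (CPU, GPU, and Neural Engine).
configuration.computeUnits = .cpuAndGPU
```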

Optional parameters

progressHandlers — Here you pass MLUpdateProgressHandlers with the array of events you want to listen to, such as epoch start/end and training start/end.

progressHandler — This gets called whenever any of the events defined above gets triggered.
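Putting the two together, a sketch of the progress handlers (using event names from MLUpdateProgressEvent) could look like this; the resulting handlers object is then passed to the MLUpdateTask initializer:

```swift
import CoreML

let handlers = MLUpdateProgressHandlers(
    forEvents: [.trainingBegin, .epochEnd],
    progressHandler: { context in
        // Called when training begins and at the end of every epoch.
        print("Event: \(context.event), metrics: \(context.metrics)")
    },
    completionHandler: { context in
        // Training finished; context.model holds the updated model.
    }
)
```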

To start the training, just call the resume() function on the updateTask instance.

Here’s a look at the code for training the model on a device:

let updateTask = try MLUpdateTask(forModelAt: updatableModelURL,
                                  trainingData: trainingData,
                                  configuration: model.configuration,
                                  completionHandler: { context in
                                      // Access the updated model via context.model here.
                                  })
updateTask.resume()

Now that we’ve got an idea of the different components and their roles, let’s build our iOS application that trains the model on the device.