The Core ML framework, as Apple's documentation states, enables you to integrate trained machine learning models into your app. Models must be in the Core ML model format (files with a .mlmodel extension), a new public format for describing models developed by Apple.

Apple provides ready-to-use Core ML models that have already been converted to the Core ML model format from popular open source trained models.

In one of the WWDC 2017 sessions, Introducing Core ML, there is a demonstration of how to use a Core ML format model in your project.

In another interesting session, Core ML in Depth, there is a walkthrough of how to convert a trained model to the Core ML model format using Core ML Tools, a Python package that supports converting models from these popular training libraries: Keras, Caffe, scikit-learn, libsvm, and XGBoost.

To install Core ML Tools you need a Python environment with Python version 2.7; otherwise you will get an error stating "No matching distribution found for coremltools".

Personally, I use Anaconda to manage packages and environments.

It comes with conda (a package and environment manager), Python, and over 150 scientific packages and their dependencies.

If you don’t need all of that, you can use Miniconda, a smaller distribution that includes only conda and Python; you can then install any package you need individually.

On the other hand, there is pip, the default package manager for Python libraries.

We may still use pip alongside conda to install packages, because the packages available from conda are focused on data science, while pip is for general use.

But why use environments? Environments let you isolate the packages you use for different projects, so you can have both Python 3 and Python 2 on the same machine.
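For instance, once an environment is active, a quick way to confirm which Python interpreter it provides (a minimal sketch using only the standard library):

```python
import sys

# Inside the py2 environment this prints 2; inside py3 it prints 3.
print(sys.version_info[0])
```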

So after installing Anaconda, you can create an environment with Python 3 by typing the following command in the terminal:

conda create -n py3 python=3

and another one for Python 2:

conda create -n py2 python=2

and you can list all the environments you have:

conda env list

You will get a list of them, with the current environment marked by an asterisk (root is the default environment).

# conda environments:
#
py2       /anaconda/envs/py2
py3       /anaconda/envs/py3
root    * /anaconda

So before installing coremltools, you need to activate the environment you created with Python 2:

source activate py2

Now use pip to install the coremltools package:

pip install -U coremltools

By now we are ready to use this tool to convert models.

So let’s suppose you want to create an app that can predict the emotion in a facial photo. You can try this using an open source Emotion Recognition trained model.

So you will begin by converting it to the Core ML model format. To do that you will need the following files:

A .caffemodel file, which contains the learned weights of the network as it was trained. You can use VGG_S_rgb/EmotiW_VGG_S.caffemodel.

A .prototxt file, which defines the network design or structure. You can use deploy.txt, but you should change its extension to .prototxt.

A labels.txt file, which contains the list of named emotions in the specific order mentioned by the author in the comments:

Angry
Disgust
Fear
Happy
Neutral
Sad
Surprise
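The converter pairs these labels with the network’s outputs in file order, so the predicted label is simply the name at the index of the highest output score. A minimal sketch of that mapping (the scores below are made up purely for illustration):

```python
labels = ['Angry', 'Disgust', 'Fear', 'Happy', 'Neutral', 'Sad', 'Surprise']

# Hypothetical softmax output from the network, one score per label.
scores = [0.05, 0.02, 0.03, 0.70, 0.10, 0.05, 0.05]

# The predicted class label is the name at the index of the highest score.
predicted = labels[scores.index(max(scores))]
print(predicted)  # Happy
```

This is why the order of lines in labels.txt must match the order the network was trained with; shuffling them would silently mislabel every prediction.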

Now you can create a Python script that converts those files to a Core ML format model. Put the script in a file named “conversion.py”:

import coremltools

caffe_model = ('EmotiW_VGG_S.caffemodel', 'deploy.prototxt')
labels = 'labels.txt'

coreml_model = coremltools.converters.caffe.convert(caffe_model,
                                                    class_labels=labels,
                                                    image_input_names='data')
coreml_model.save('EmotiW_VGG_S.mlmodel')

The image_input_names parameter means that we want the model to take an image as input instead of a multi-array.

Then run the script in Terminal:

python conversion.py

You will see a message stating “Starting Conversion from Caffe to CoreML”; then, after some time depending on the model size, you will get the output model file EmotiW_VGG_S.mlmodel.

Now you have the Core ML format model, which you can drag into Xcode and start using.

When you select the model in the Project navigator in Xcode, you can see what parameters this model has under the “Model Evaluation Parameters” section.

As input it expects a parameter “data” of type Image, 224 pixels wide by 224 pixels high, in the RGB color space.

As output you can use the parameter “classLabel” of type String, whose value is the most likely class label.