An image I used to classify my cat Cleo

This year at WWDC, Apple announced updates to CoreML and new support for creating CoreML models on a Mac. In the Vision for CoreML WWDC video, Apple engineers demonstrated this: they took images of computer parts, fed those images into Turi Create, and out popped a CoreML model that could correctly identify each part.

Here’s a quick guide to make your own, even if you don’t know how to code!

All told it shouldn’t take you more than 30 minutes to create a simple model like this.

You’ll need a Mac running High Sierra with Xcode (9 or the 10 beta) installed, and some photos to classify. If you don’t have anything to classify, you can download the cats and dogs example photos that Turi Create uses.

If you are copying and pasting these commands, copy everything after ~$ .

Open Terminal and copy the following to make sure Python is installed:

~$ python --version

If a version number is returned, you’re good to go.

Next install pip with:

~$ sudo easy_install pip

Next install the latest version of virtualenv with:

~$ pip install virtualenv

Create a virtual environment with:

~$ virtualenv venv

then activate the virtual environment with:

~$ source venv/bin/activate

Install Turi Create in the virtual env with:

(venv) ~$ pip install turicreate==5.0b2

Okay now you have Turi Create installed and the virtualenv ready to go.

My other cat Ned 😸

Download Apple’s Vision CoreML example project. Open the project’s main folder and create a new folder named after what you are classifying; I named my folder PetImages. Inside it, create a folder for each of the different objects you are classifying; I created one named Ned and another named Cleo.
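If you’d rather script it than click around in Finder, the same folder layout can be created with a few lines of Python (PetImages, Ned, and Cleo are just the names I used; substitute your own):

```python
import os

# Class subfolders inside the project's image folder.
# These names are my examples -- use the names of whatever you're classifying.
for class_name in ('Ned', 'Cleo'):
    os.makedirs(os.path.join('PetImages', class_name), exist_ok=True)
```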

Next, drag and drop images into their corresponding folders.

If you are using the cats and dogs example the folders will already be separated into Dog and Cat folders for you.

Create a new file image_classification.py and copy and paste this:

#!/usr/bin/env python
import turicreate as tc

DATA_PATH = "./PetImages"

print("Loading data...")
data = tc.image_analysis.load_images(DATA_PATH, with_path=True)
data['label'] = data['path'].apply(lambda path: 'ned' if '/Ned' in path else 'cleo')

COUNT_PER_CLASS = 50
print("Limiting to {} images per class".format(COUNT_PER_CLASS))
cleo = data[data['label'] == 'cleo'].head(COUNT_PER_CLASS)
ned = data[data['label'] == 'ned'].head(COUNT_PER_CLASS)
data = cleo.append(ned)

print("Creating model...")
model = tc.image_classifier.create(data, target='label')
model.save("NedCleoClassifier.model")
model.export_coreml('NedCleoClassifier.mlmodel')
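The label column is assigned purely from each image’s file path, so the folder names matter. Here’s that same lambda on its own, run against a couple of made-up example paths so you can see what it does:

```python
# The script labels each image by checking for the class folder name in its path.
label = lambda path: 'ned' if '/Ned' in path else 'cleo'

# Hypothetical paths, for illustration only:
print(label('./PetImages/Ned/IMG_0001.jpg'))   # ned
print(label('./PetImages/Cleo/IMG_0042.jpg'))  # cleo
```

This is why misfiled photos matter so much: an image sitting in the wrong folder gets the wrong label with no warning.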

Update the folder path and the names of the objects you are classifying. Save the file, open Terminal again, and change to the directory where the file is located.

(venv) ~$ cd Desktop/ClassifyingImagesWithVisionAndCoreML/

Run the python script using:

(venv) ~$ python image_classification.py

Once you do that, Turi Create will work its magic, and after some time you should see a CoreML model in the project’s folder. Drag that model into the ClassifyingImagesWithVisionAndCoreML Xcode project, being sure to check the box Copy items if needed.

Open ImageClassificationViewController.swift and change the line that says let model = try VNCoreMLModel(for: MobileNet().model) to let model = try VNCoreMLModel(for: NedCleoClassifier().model). This tells the Xcode project to use your classifier instead of the project default.

Lastly run the app on your iPhone.

Once the app has loaded test it by snapping a picture of one of the items you classified to see if it spits out the correct classification.

If you’ve followed all the steps in this project then your classifier should be good to go.

A couple of things to take note of:

- The more photos you add, the more accurate the model will become.
- Keep the number of photos per object relatively close, otherwise the classifier will become out of whack.
- Be sure that the images you use are cleaned and don’t have any misplaced photos (at first I accidentally put some of Ned’s photos in Cleo’s folder).
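To sanity-check the balance point above before training, you can count the images in each class folder with a quick standard-library script (no Turi Create needed; the PetImages path and extensions here are just my assumptions, adjust to yours):

```python
import os

DATA_PATH = "./PetImages"

def count_images(data_path):
    """Count image files in each class subfolder of data_path."""
    counts = {}
    for class_name in sorted(os.listdir(data_path)):
        class_dir = os.path.join(data_path, class_name)
        if os.path.isdir(class_dir):
            counts[class_name] = len([
                f for f in os.listdir(class_dir)
                if f.lower().endswith(('.jpg', '.jpeg', '.png'))
            ])
    return counts

if __name__ == "__main__" and os.path.isdir(DATA_PATH):
    print(count_images(DATA_PATH))
```

If one class has far more photos than the other, trim or add images until the counts are close.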

There are a lot more examples of CoreML models you can create with Turi Create. Recommender system, image similarity, object detection, and text classifier examples can all be found in the Turi Create User Guide.


If something doesn’t work for you make a comment, or Tweet @ me, I’ll be glad to troubleshoot with you.