While trying my hand at Apple’s Core ML framework for on-device machine learning, I stumbled upon a lot of tutorials. After working through a few, I decided to come up with my very own :P

Turns out, it was a good exercise! In this tutorial, I’ll walk you through the development of an app that uses a pre-trained Core ML model to detect a person’s age from an input image. So here we go.

Prerequisites:

1. macOS (Sierra 10.12 or above)

2. Xcode 9 or above

3. A device with iOS 11 or above. (Good news — the app can run on a simulator as well!)

Now, follow the steps below to start your Core ML quest:

Create an Xcode Project

To begin with, create a new Xcode project using the Single View App template, and name it anything under the sun.

Setup to Add an Input Image

We need a setup to pick photos from our library to feed the model as an input for age prediction. Instead of giving you a link to a ready-made setup, I’ll quickly walk you through it (Disclaimer: Pictures will speak louder than my words for this setup):

1. Let’s start with the UI. Jump to Main.storyboard and drag the following components onto your view:

Two UIButtons: Camera and Photos — these let you input an image from the phone camera or the photo library, respectively

A UIImageView — this will display your input image

A UILabel — this will display the predicted age

Main.storyboard after adding required components

2. Now add constraints to each item on your view. See the pictures below for the constraint settings.

Constraints for Photos button

Constraints for Camera button

Constraints for UIImageView

Constraints for UILabel
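If you would rather set this layout up in code than in Interface Builder, the constraints above can be sketched roughly as follows. This is a minimal sketch, not the tutorial’s exact layout — the class name, spacing constants, and size multipliers here are my own placeholders, so match them to the values shown in the screenshots:

```swift
import UIKit

// Sketch of the same layout built with Auto Layout anchors in code.
// Constants (spacings, height multiplier) are placeholders.
class LayoutSketchViewController: UIViewController {
    let cameraButton = UIButton(type: .system)
    let photosButton = UIButton(type: .system)
    let imageView = UIImageView()
    let ageLabel = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Opt every view out of autoresizing masks and add it to the hierarchy
        [cameraButton, photosButton, imageView, ageLabel].forEach {
            $0.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview($0)
        }
        NSLayoutConstraint.activate([
            // Image view pinned to the top, spanning the full width
            imageView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
            imageView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            imageView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            imageView.heightAnchor.constraint(equalTo: view.heightAnchor, multiplier: 0.6),
            // Label centered below the image view
            ageLabel.topAnchor.constraint(equalTo: imageView.bottomAnchor, constant: 16),
            ageLabel.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            // Buttons pinned to the bottom corners
            cameraButton.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 24),
            cameraButton.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor, constant: -16),
            photosButton.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -24),
            photosButton.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor, constant: -16)
        ])
    }
}
```

Note that `safeAreaLayoutGuide` requires iOS 11, which matches this tutorial’s prerequisites.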

3. Open the assistant editor and Control-drag to create the following outlets and actions:

A UIImageView outlet for our image view

A UILabel outlet for the label

An IBAction for each UIButton

IBOutlet for the UIImageView

IBOutlet for UILabel

IBAction for the Camera UIButton
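Once the outlets and actions are wired up, your view controller will look roughly like this. The names here (`imageView`, `ageLabel`, `openCamera`, `openPhotos`) are my own — use whatever you named them while Control-dragging. I’ve also sketched the UIImagePickerController plumbing both buttons will share, using the Swift 4.2+ API names; remember that the camera source additionally needs an NSCameraUsageDescription entry in Info.plist:

```swift
import UIKit

// Sketch of the view controller after wiring outlets and actions.
// Outlet/action names are placeholders — match them to your own storyboard connections.
class ViewController: UIViewController,
                      UIImagePickerControllerDelegate,
                      UINavigationControllerDelegate {

    @IBOutlet weak var imageView: UIImageView!   // shows the chosen photo
    @IBOutlet weak var ageLabel: UILabel!        // will show the predicted age

    @IBAction func openCamera(_ sender: UIButton) {
        presentPicker(source: .camera)
    }

    @IBAction func openPhotos(_ sender: UIButton) {
        presentPicker(source: .photoLibrary)
    }

    private func presentPicker(source: UIImagePickerController.SourceType) {
        // The camera source is unavailable on the simulator, so guard for it
        guard UIImagePickerController.isSourceTypeAvailable(source) else { return }
        let picker = UIImagePickerController()
        picker.delegate = self
        picker.sourceType = source
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        if let image = info[.originalImage] as? UIImage {
            imageView.image = image
            // Feeding this image to the Core ML model comes next.
        }
    }
}
```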