Building the App

Let’s start by creating a new Xcode project using the Single View App template.

You can name it whatever your heart desires; I will be using “PokedexML” as the product name.

(In my project, I renamed my ViewController.swift file to MainVC.swift, so I will refer to it as such.)

First, we need to set up a way to add child view controllers to our MainVC (and other UIViewControllers) by creating an extension below the last curly brace in the file.
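The original extension isn’t shown here, so below is a minimal sketch of what it might look like. The helper names `add(_:)` and `removeFromParentVC()` are my assumptions; what matters is pairing `addChild`/`didMove(toParent:)` correctly.

```swift
import UIKit

// Hypothetical helper extension for embedding child view controllers.
extension UIViewController {
    /// Adds a child view controller and pins its view to fill the container.
    func add(_ child: UIViewController) {
        addChild(child)
        view.addSubview(child.view)
        child.view.frame = view.bounds
        child.didMove(toParent: self)
    }

    /// Removes this view controller from its parent.
    func removeFromParentVC() {
        willMove(toParent: nil)
        view.removeFromSuperview()
        removeFromParent()
    }
}
```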

Next, we’ll set up some functions that we’ll fill in later in the project. For now, create empty functions called addCamera() and updateImage().
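The stubs might look like this. The `UIImage` parameter on updateImage() is an assumption based on how the captured photo is handed back later:

```swift
import UIKit

class MainVC: UIViewController {

    func addCamera() {
        // To be filled in later: embeds CameraVC as a child view controller.
    }

    func updateImage(_ image: UIImage) {
        // To be filled in later: shows the captured photo in the preview square.
    }
}
```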

Now let’s create a rectangle in the lower left-hand corner of the screen that we will use to display the currently evaluated image.

Add a function called addImagePreview(), and populate it as follows:
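One possible implementation, assuming a `UIImageView` property named `imagePreview` and simple frame-based layout (the exact sizes and styling are placeholders):

```swift
// Assumed property on MainVC.
private let imagePreview = UIImageView()

private func addImagePreview() {
    let size: CGFloat = 100
    // Pin a square thumbnail to the lower left-hand corner.
    imagePreview.frame = CGRect(x: 16,
                                y: view.bounds.height - size - 16,
                                width: size,
                                height: size)
    imagePreview.contentMode = .scaleAspectFill
    imagePreview.clipsToBounds = true
    imagePreview.layer.cornerRadius = 8
    imagePreview.layer.borderWidth = 2
    imagePreview.layer.borderColor = UIColor.white.cgColor
    view.addSubview(imagePreview)
}
```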

Add two more functions called addClassificationBox() and addClassificationLabel() . These will be responsible for creating the area where we’ll display the classification of each Pokémon.

Populate these functions as follows:
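A sketch of the two functions, assuming `classificationBox` and `classificationLabel` properties on MainVC; the frames and styling are placeholders:

```swift
// Assumed properties on MainVC.
private let classificationBox = UIView()
private let classificationLabel = UILabel()

private func addClassificationBox() {
    // A translucent panel next to the image preview.
    classificationBox.frame = CGRect(x: 132,
                                     y: view.bounds.height - 116,
                                     width: view.bounds.width - 148,
                                     height: 100)
    classificationBox.backgroundColor = UIColor.black.withAlphaComponent(0.6)
    classificationBox.layer.cornerRadius = 8
    view.addSubview(classificationBox)
}

private func addClassificationLabel() {
    classificationLabel.frame = classificationBox.bounds.insetBy(dx: 8, dy: 8)
    classificationLabel.textColor = .white
    classificationLabel.numberOfLines = 0
    classificationLabel.textAlignment = .center
    classificationLabel.text = "Tap the screen to classify a Pokémon"
    classificationBox.addSubview(classificationLabel)
}
```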

Next we’ll create the Camera View Controller, which will be used to show a live camera screen on top of our MainVC and enable us to take pictures of our targeted Pokémon.

Right-click on the main file folder and select “New File”. Then create a new UIViewController subclass named CameraVC.

In order to separate the camera logic from the view itself, we’ll create another Swift file called CameraView, following the same steps as before but selecting UIView this time.
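The standard pattern for this kind of view is to back it with an `AVCaptureVideoPreviewLayer`, which is likely what the original looked like:

```swift
import UIKit
import AVFoundation

class CameraView: UIView {

    // Backing the view with AVCaptureVideoPreviewLayer lets AVFoundation
    // render the live camera feed directly into this view's layer.
    override class var layerClass: AnyClass {
        return AVCaptureVideoPreviewLayer.self
    }

    // Expose the layer with its concrete type so CameraVC can configure it.
    var videoPreviewLayer: AVCaptureVideoPreviewLayer {
        return layer as! AVCaptureVideoPreviewLayer
    }
}
```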

Make sure you import UIKit and AVFoundation, and add the code above. The view is very simple: it exposes the videoPreviewLayer as an accessible property.

Now we can go back into the CameraVC class that we created and begin working on accessing the camera.

First, at the top of the file, import AVFoundation and UIKit. Then add two constants, captureSession and previewView, initialized as AVCaptureSession() and CameraView() respectively.
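The top of CameraVC might look like the following. The `photoOutput` constant is my addition, since an `AVCapturePhotoOutput` will be needed once we start capturing still photos:

```swift
import AVFoundation
import UIKit

class CameraVC: UIViewController {

    // Coordinates the flow of data from the camera input to the outputs.
    let captureSession = AVCaptureSession()

    // The view that will display the live preview.
    let previewView = CameraView()

    // Assumed property: used later to capture still photos.
    let photoOutput = AVCapturePhotoOutput()
}
```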

Since we will be using this view controller as a child of MainVC, we will override the didMove(toParent:) function.

In the following steps we will create two functions to call inside didMove(toParent:): one to configure the camera, and one to link it to our CameraView.
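The override itself is just a hook for those two calls:

```swift
override func didMove(toParent parent: UIViewController?) {
    super.didMove(toParent: parent)
    // Set everything up once we've been embedded in MainVC.
    configureCamera()
    setupCameraView()
}
```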

configureCamera:
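A sketch of the camera configuration, assuming the `photoOutput` property mentioned above. (In production you’d also want to call `startRunning()` off the main thread, but that’s omitted here for simplicity.)

```swift
private func configureCamera() {
    // Grab the default back-facing wide-angle camera and wrap it in an input.
    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .back),
          let input = try? AVCaptureDeviceInput(device: camera) else {
        print("Unable to access the camera")
        return
    }

    captureSession.beginConfiguration()
    if captureSession.canAddInput(input) {
        captureSession.addInput(input)
    }
    if captureSession.canAddOutput(photoOutput) {
        captureSession.addOutput(photoOutput)
    }
    captureSession.commitConfiguration()

    // Start streaming frames from the camera.
    captureSession.startRunning()
}
```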

setupCameraView:
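Linking the session to the view is short — something along these lines:

```swift
private func setupCameraView() {
    // Fill the screen with the preview and attach the running session.
    previewView.frame = view.bounds
    previewView.videoPreviewLayer.session = captureSession
    previewView.videoPreviewLayer.videoGravity = .resizeAspectFill
    view.addSubview(previewView)
}
```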

Now that those two functions are finished, we can create a way to capture photos!

We will start by creating an extension of the CameraVC at the bottom of the file and conforming to the AVCapturePhotoCaptureDelegate protocol. The extension should look like the following.
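A sketch of that extension. Handing the image back via `parent as? MainVC` is my assumption about the hand-off; the original may have used a delegate or closure instead:

```swift
extension CameraVC: AVCapturePhotoCaptureDelegate {

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // Turn the captured photo data into a UIImage.
        guard error == nil,
              let data = photo.fileDataRepresentation(),
              let image = UIImage(data: data) else { return }

        // Hand the captured image back to MainVC for preview and classification.
        (parent as? MainVC)?.updateImage(image)
    }
}
```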

This method conforms to the AVCapturePhotoCaptureDelegate protocol and creates a UIImage from the data captured by the camera. We then send this UIImage back to MainVC to be displayed in the image preview, ready to be classified.

Now, back in the main body of CameraVC, we will conform to UIGestureRecognizerDelegate so we can add a tap gesture to capture photos.

Let’s create a function called setupTapGesture() as shown below:
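A likely shape for it:

```swift
private func setupTapGesture() {
    // A tap anywhere on the camera view triggers a photo capture.
    let tap = UITapGestureRecognizer(target: self,
                                     action: #selector(capturePhoto))
    tap.delegate = self
    view.addGestureRecognizer(tap)
}
```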

You’ll notice that we reference an as-yet-undefined function called capturePhoto(), so let’s declare that as well.
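Assuming the `photoOutput` property from earlier, the declaration is brief:

```swift
@objc private func capturePhoto() {
    // Request a still photo; the delegate callback receives the result.
    let settings = AVCapturePhotoSettings()
    photoOutput.capturePhoto(with: settings, delegate: self)
}
```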

Here, the @objc attribute is required because the function is called via a #selector.

Let’s head back to MainVC to fill in our empty functions.

addCamera:
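A sketch of addCamera(). Inserting the camera’s view at index 0 keeps it behind the preview square and classification box; that ordering is my assumption:

```swift
private func addCamera() {
    let cameraVC = CameraVC()
    // Embed CameraVC as a child, with its view behind our overlays.
    addChild(cameraVC)
    view.insertSubview(cameraVC.view, at: 0)
    cameraVC.view.frame = view.bounds
    cameraVC.didMove(toParent: self)
}
```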

updateImage:
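At this stage updateImage() only needs to refresh the thumbnail, assuming the `imagePreview` property from earlier:

```swift
func updateImage(_ image: UIImage) {
    // Show the most recently captured photo in the lower-left preview.
    imagePreview.image = image
}
```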

Before you’re able to build and run, we need to add a permission entry in the Info.plist file.

In the Info.plist file, add the “Privacy - Camera Usage Description” key and set a value for the string. This is the message shown when asking the user for permission to use the camera.

If you build and run the project now, you should see that a live camera is displayed, and tapping the screen will update the image in the lower left corner.

Now all that’s left to do is send this image to the model that we created, and we’ll have our completed “Pokédex”!

Adding the brains

Now that we have the base of the app built, we can get to the brains of our “Pokédex.”

Let’s begin by locating the PokemonModel file you saved in the first section, then click and drag it into the Xcode project.

The great thing about Core ML is that it creates a Swift model class for your custom .mlmodel file! This means that Xcode handles all of the heavy lifting to convert the model into a Swift-compatible class.

Making the ClassificationController

Create a new blank Swift file called ClassificationController. This is going to handle all of the logic of interfacing with the PokemonModel.

This class is going to be using UIKit, Core ML, Vision, and ImageIO, so be sure to import them at the top of the file.

First, let’s create a new protocol underneath the class declaration called ClassificationControllerDelegate, with a function called didFinishClassification. The function should take a tuple of the form (String, Float). This will help us send the classification back to our MainVC later on.

Inside the ClassificationController class, declare a constant for the delegate. Now we need an initializer that takes a ClassificationControllerDelegate as an argument.
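Putting the last two steps together, the file’s skeleton might look like this (a sketch; the exact parameter labels are assumptions):

```swift
import UIKit
import CoreML
import Vision
import ImageIO

class ClassificationController {

    // Receives classification results to forward back to MainVC.
    let delegate: ClassificationControllerDelegate

    init(delegate: ClassificationControllerDelegate) {
        self.delegate = delegate
    }
}

protocol ClassificationControllerDelegate {
    /// Called with the classified Pokémon's name and the model's confidence.
    func didFinishClassification(_ classification: (String, Float))
}
```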

Next, create a function called processClassifications. This function will take a request of type VNRequest, and an optional error.
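A sketch of processClassifications, assuming we report the top `VNClassificationObservation` through the delegate and hop back to the main queue for UI work:

```swift
func processClassifications(for request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        // Pull the ranked classification results out of the request.
        guard error == nil,
              let results = request.results as? [VNClassificationObservation],
              let topResult = results.first else {
            self.delegate.didFinishClassification(("Unable to classify", 0))
            return
        }
        // Forward the best guess and its confidence to MainVC.
        self.delegate.didFinishClassification((topResult.identifier,
                                               topResult.confidence))
    }
}
```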

This function extracts the classification from the completed request and uses the delegate method didFinishClassification to return a tuple containing the name of the classified Pokémon and the confidence.

The last two components are the classificationRequest lazy variable and updateClassifications function.

classificationRequest:
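A sketch of the lazy request, assuming the generated class is named PokemonModel:

```swift
lazy var classificationRequest: VNCoreMLRequest = {
    do {
        // PokemonModel is the class Xcode generates from the .mlmodel file.
        let model = try VNCoreMLModel(for: PokemonModel().model)
        let request = VNCoreMLRequest(model: model) { [weak self] request, error in
            self?.processClassifications(for: request, error: error)
        }
        // Crop to the center square before scaling to the model's input size.
        request.imageCropAndScaleOption = .centerCrop
        return request
    } catch {
        fatalError("Failed to load the Core ML model: \(error)")
    }
}()
```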

The classification request accesses the PokemonModel’s automatically generated model class and sets the image crop-and-scale option to center crop.

updateClassifications:
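A sketch of updateClassifications. Note the small `CGImagePropertyOrientation` helper: the raw values of UIKit’s and Core Graphics’ orientation enums don’t line up, so a direct cast would be wrong:

```swift
func updateClassifications(for image: UIImage) {
    let orientation = CGImagePropertyOrientation(image.imageOrientation)
    guard let ciImage = CIImage(image: image) else { return }

    // Run the Vision request off the main thread.
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(ciImage: ciImage,
                                            orientation: orientation)
        do {
            try handler.perform([self.classificationRequest])
        } catch {
            print("Failed to perform classification: \(error)")
        }
    }
}

// Map UIKit's image orientation to the Core Graphics equivalent.
extension CGImagePropertyOrientation {
    init(_ uiOrientation: UIImage.Orientation) {
        switch uiOrientation {
        case .up: self = .up
        case .down: self = .down
        case .left: self = .left
        case .right: self = .right
        case .upMirrored: self = .upMirrored
        case .downMirrored: self = .downMirrored
        case .leftMirrored: self = .leftMirrored
        case .rightMirrored: self = .rightMirrored
        @unknown default: self = .up
        }
    }
}
```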

This function is what we’ll use to pass a UIImage from MainVC into the ClassificationController. It determines the orientation of the UIImage, converts it into a CIImage, and then performs the classificationRequest on a background DispatchQueue.

It’s now time to go back into the MainVC and add a ClassificationController instance.

First, let’s create another extension of MainVC to conform to the ClassificationControllerDelegate. Inside, create the required didFinishClassification function and display the classification with the classificationLabel that we declared earlier.

Since the Float portion of the classification tuple is the confidence level of the classification (from 0 to 1), we will use it to check whether the classification is at least 60% confident before displaying a Pokémon name. If it is below 60%, we will display an error message in the classificationLabel.
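One way to write that extension, assuming the `classificationLabel` from earlier (the exact wording of the messages is a placeholder):

```swift
extension MainVC: ClassificationControllerDelegate {

    func didFinishClassification(_ classification: (String, Float)) {
        let (name, confidence) = classification
        if confidence >= 0.6 {
            // Confident enough: show the Pokémon's name and the confidence.
            classificationLabel.text = "\(name) — \(Int(confidence * 100))%"
        } else {
            // Below the 60% threshold: show an error message instead.
            classificationLabel.text = "Hmm, I don't recognize that Pokémon."
        }
    }
}
```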

Now we can add a variable called classifier of type ClassificationController and a private function to instantiate it with MainVC as the delegate. This function will be called inside of the viewDidLoad function.
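Sketched out, that might be:

```swift
// Assumed property on MainVC.
private var classifier: ClassificationController?

// Call this from viewDidLoad().
private func setupClassifier() {
    classifier = ClassificationController(delegate: self)
}
```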

Create a new function called evaluateImage that takes a UIImage, and call the classifier’s updateClassifications function. Call this function inside of the existing updateImage function.
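With that change, the pair might read as follows (updateImage is the same function from earlier, now extended to kick off classification):

```swift
func evaluateImage(_ image: UIImage) {
    // Hand the captured photo to the Vision/Core ML pipeline.
    classifier?.updateClassifications(for: image)
}

func updateImage(_ image: UIImage) {
    imagePreview.image = image
    evaluateImage(image)
}
```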

With this last addition, you should be able to build and run the project and try it out on any of the starter Pokémon from the first three generations, and of course, the all-important Pikachu!

Now we’re all done! To see my full project, with API calls and fancy graphics/animations, please check out my original GitHub repository: