Starting out

Download and open the repository for this tutorial.

It’s easiest to set up Fritz using CocoaPods. In the iOS/FritzSkyReplacementDemo starter project folder, run:

pod repo update

pod install

Open the FritzSkyReplacementDemo.xcworkspace in Xcode.

Overview

We’ll be using the Fritz iOS Image Segmentation feature to generate masks for the sky in photos. The Fritz SDK comes with a variety of pre-built features that run directly on your phone.

All Fritz Vision APIs use a few constructs:

FritzVisionImage: The image that the model runs on. It wraps the pixel buffer or other image you provide.

Options: Configuration options passed to the model that let you tweak how the model runs.

Model: The actual model that runs predictions on the input images.

Results: The output of the model. Each predictor has a different type of results. In this tutorial, the results are a list of FritzVisionSegmentationResult objects.
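A minimal sketch of how these constructs fit together. This assumes the Fritz SDK's FritzVisionImage initializer that wraps a UIImage and a segmentation options type; exact type names can vary between SDK versions.

```swift
import UIKit
import Fritz

// FritzVisionImage: wrap a UIImage (it can also wrap a CVPixelBuffer
// coming from the camera) so a model can run on it.
let uiImage = UIImage(named: "mountains.jpg")!
let fritzImage = FritzVisionImage(image: uiImage)

// Options: configuration passed to the model, e.g. how the input image
// is cropped and scaled before prediction.
let options = FritzVisionSegmentationModelOptions()
```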

Setup Fritz Account

Setting up a Fritz account is easy. Follow the Getting Started directions to set up your free account and connect the demo to your account. Here are the steps you’ll run through:

1. Create a free developer account.

2. Create an iOS app. Make sure that the Bundle ID of your project matches the one you created.

3. Drag the Fritz-Info.plist file into your project.

After you run through the initialization steps, build and run your app. When your app has successfully checked in with Fritz, you’ll see a success message in the webapp.

Using Sky Segmentation

Let’s use sky segmentation on an image and then replace the sky pixels with a new, sliding background.

Left: original image | Middle top: sky photo to swap in | Middle bottom: original image with the sky removed | Right: background sky combined with the masked image.

In our app, let’s first load two images:

A foreground image that we’ll run sky segmentation on.

A background image that we’ll use to animate across the sky.

Left: foreground image (mountains.jpg) | Right: background image (clouds.png)

let foreground = UIImage(named: "mountains.jpg")

let background = UIImage(named: "clouds.png")

Now we’re ready to pass the image into the model. Let’s walk through this step-by-step.

1. We start by setting some parameters that will be used to adjust the model output.

In step 3, we’ll use these parameters to adjust the model output when the buildSingleClassMask method is called.
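The parameters might look like the following. The names clippingScoresAbove and zeroingScoresBelow follow the Fritz segmentation API; treat the exact values as tunable assumptions, not prescribed settings.

```swift
// Confidence thresholds used when building the mask in step 3.
// Scores above this value are clipped to fully opaque in the mask.
let clippingScoresAbove: Double = 0.6

// Scores below this value are zeroed out (fully transparent),
// removing low-confidence pixels from the mask.
let zeroingScoresBelow: Double = 0.4
```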

2. We then create a FritzVisionSkySegmentationModel that will be used to cut out the sky from the image.

private lazy var visionModel = FritzVisionSkySegmentationModel()

3. The foreground image is fed into the segmentation model which returns a FritzVisionSegmentationResult object. For more details on the different access methods, take a look at the official documentation.

The segmentation result contains the likelihood that each pixel belongs to a particular class (e.g. background or sky). We use the buildSingleClassMask method to create an alpha mask from the pixels that the model does not identify as sky.
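Steps 2–3 can be sketched as follows. This assumes the predict callback signature used across Fritz Vision predictors and that the sky model exposes a FritzVisionSkyClass with a non-sky class (called .none here); check the SDK headers for the exact class names in your version.

```swift
import UIKit
import Fritz

let visionModel = FritzVisionSkySegmentationModel()
let fritzImage = FritzVisionImage(image: foreground)

visionModel.predict(fritzImage) { result, error in
    guard let result = result, error == nil else { return }

    // Build an alpha mask of everything that is NOT sky: non-sky pixels
    // stay opaque, sky pixels become transparent. The thresholds from
    // step 1 control how aggressively low-confidence pixels are dropped.
    let mask = result.buildSingleClassMask(
        forClass: FritzVisionSkyClass.none,
        clippingScoresAbove: 0.6,
        zeroingScoresBelow: 0.4
    )
    // ... combine the mask with the original image (step 4)
}
```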

4. That mask is combined with the original image to produce an image with a transparent sky.
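Assuming mask is the alpha mask built in step 3, the combination can look like this. The masked(with:) helper on UIImage appears in Fritz's sample code; if your SDK version lacks it, the same effect is standard CoreGraphics clipping.

```swift
// Apply the alpha mask to the original photo: sky pixels become
// transparent, the rest of the scene is untouched.
if let mask = mask {
    let cutout = foreground.masked(with: mask)
    DispatchQueue.main.async {
        // The image view with the transparent sky sits on top of the
        // animated background views.
        self.foregroundImageView.image = cutout
    }
}
```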

5. Add the background UIImage we loaded earlier. In this case, we’ll use two background views and animate them with a sliding motion that repeats indefinitely.
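One way to get the endless slide, assuming backgroundView1 and backgroundView2 are two UIImageViews showing the same clouds.png: place them side by side and animate both left by one full width on repeat, so the seam is never visible.

```swift
// Two identical background views placed edge to edge.
let width = view.bounds.width
let height = view.bounds.height
backgroundView1.frame = CGRect(x: 0, y: 0, width: width, height: height)
backgroundView2.frame = CGRect(x: width, y: 0, width: width, height: height)

// Slide both views one full width to the left; .repeat restarts the
// animation from the original frames, producing a continuous loop.
UIView.animate(withDuration: 12.0,
               delay: 0,
               options: [.repeat, .curveLinear],
               animations: {
                   self.backgroundView1.frame.origin.x -= width
                   self.backgroundView2.frame.origin.x -= width
               },
               completion: nil)
```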

Here’s the final result:

Check out our GitHub repo for the finished code.