This blog was featured in Android Weekly’s #383 issue

With AutoML Vision Edge, you can create custom image classification models for your mobile app by uploading your own training data.

Firebase ML Kit has a lot of features that allow you to perform machine learning on the user’s phone. AutoML lets you create a custom solution tailored exactly to your problem, and the best part is that you don’t need to know machine learning to build it. You just upload images and AutoML takes care of everything for you.

In this blog post we will build an app called SeeFood: the app sees food and tells you what food item it is. Yes, I am a Silicon Valley fan 😜 Here’s a glimpse of the app.

Before we start coding the Android app, let’s see how we can generate the dataset. You can either download a sample dataset or create your own by downloading the images you need. If you are wondering how to get so many images for your dataset, I have an awesome tool for you.

This script lets you download hundreds of images from Google images with just one command. I used this script to get the images of food items for the app.

Let’s get started

When you open the ML Kit section in the Firebase console and select the AutoML tab, you will have the option of uploading your training data either as a zip file or as a CSV file containing the Cloud Storage locations of your images.

If you are uploading the images in a zip file, it should have the following structure:
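For example, a SeeFood dataset zip might be laid out like this, where each top-level folder name becomes a label (the folder and file names here are just illustrative):

```
seefood_dataset.zip
├── pizza/
│   ├── pizza_001.jpg
│   └── pizza_002.jpg
├── hot_dog/
│   ├── hot_dog_001.jpg
│   └── hot_dog_002.jpg
└── samosa/
    ├── samosa_001.jpg
    └── samosa_002.jpg
```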

If your Firebase project is on the Spark plan, you can upload only one dataset per project, with at most 1,000 images. The free plan also limits you to 3 hours of training time, but that’s more than enough for 1,000 images.

Before you start the training, you have to select what kind of model you want. A model with higher accuracy will also have a bigger size and higher latency. You can choose the best option according to your use case.

Usually the model gets trained quicker than the estimated time and you will receive an email as soon as the training is done. Now you can evaluate your model by giving it an image to see what your model thinks about the image. You can also see what score threshold works best for your model and then set that value in the app. The score threshold is the minimum confidence the model must have for it to assign a label to an image.

Now that your model is ready, before you start coding you have to decide how you will use the model. You have three options:

1. Download the model and bundle it with the app. (Increases APK size.)
2. Publish the model in Firebase and download it once when the user installs the app. This way, whenever you update the model you don’t have to make a Play Store release; your app will download the latest version.
3. Do both: bundle the model with the app and also publish it online.

Personally, I like the second approach because the APK size remains low, and after the model is downloaded once everything works the same way.

Enough talk, let’s dive into the code

If you haven’t already, add Firebase to your Android project.

Let’s start by adding the dependencies
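At the time of writing, the relevant dependencies look roughly like this in your app-level build.gradle (the version numbers below are examples; check the Firebase and CameraX release notes for the latest ones):

```groovy
dependencies {
    // ML Kit vision + AutoML image labeling
    implementation 'com.google.firebase:firebase-ml-vision:22.0.0'
    implementation 'com.google.firebase:firebase-ml-vision-automl:16.0.0'

    // CameraX (in alpha at the time of writing)
    implementation 'androidx.camera:camera-core:1.0.0-alpha06'
    implementation 'androidx.camera:camera-camera2:1.0.0-alpha06'
}
```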

Implement the camera functionality

I am using CameraX to implement the camera functionality. I would recommend CameraX because it gives you direct access to the camera frames with just a few lines of code, as you are about to see.
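A minimal preview setup with the alpha CameraX API of that era might look like the sketch below. `textureView` is assumed to be a `TextureView` in your layout, and the resolution is just an example:

```kotlin
import android.util.Size
import androidx.camera.core.CameraX
import androidx.camera.core.Preview
import androidx.camera.core.PreviewConfig

private fun startCamera() {
    // Configure the preview use case
    val previewConfig = PreviewConfig.Builder()
        .setTargetResolution(Size(640, 480))
        .build()
    val preview = Preview(previewConfig)

    // Render the camera frames onto the TextureView
    preview.setOnPreviewOutputUpdateListener { output ->
        textureView.surfaceTexture = output.surfaceTexture
    }

    // Bind the use case to the activity's lifecycle
    CameraX.bindToLifecycle(this, preview)
}
```

Note that later CameraX releases replaced this config-object API with `ProcessCameraProvider`, so adapt the sketch to whichever version you depend on.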

Load the model

As discussed earlier, you have various ways of loading the model: remotely from Firebase, from local storage, or both. I am loading it remotely.

Configure a remotely hosted model

To configure a model hosted in Firebase, we need to register it and specify the conditions under which it should be downloaded, according to our needs.
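A sketch of the registration, using the Firebase ML model-download API available at the time of writing. The model name "SeeFood" is a placeholder, and requiring Wi-Fi is an example condition:

```kotlin
import com.google.firebase.ml.common.modeldownload.FirebaseModelDownloadConditions
import com.google.firebase.ml.common.modeldownload.FirebaseModelManager
import com.google.firebase.ml.common.modeldownload.FirebaseRemoteModel

// Conditions for the initial download and for later updates
val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi()
    .build()

// "SeeFood" is a placeholder: use the model name from your Firebase console
val remoteModel = FirebaseRemoteModel.Builder("SeeFood")
    .enableModelUpdates(true)
    .setInitialDownloadConditions(conditions)
    .setUpdatesDownloadConditions(conditions)
    .build()

// Register the model so ML Kit knows how to fetch it
FirebaseModelManager.getInstance().registerRemoteModel(remoteModel)
```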

As you can see, we have enabled model updates and specified the name of the model. I would recommend fetching the model name from your own server rather than hardcoding it in the app, because every time you update your model in Firebase you have to change the model’s name. 😦

Prepare the input image

We need a FirebaseVisionImage object to process and extract labels from the image. You can create a FirebaseVisionImage object from a media.Image object, a file on the device, a byte array, or a Bitmap object.

Along with the image, we also need its rotation value. Since we are using CameraX, we can convert CameraX’s rotation values to one of ML Kit’s rotation constants. The following method does exactly that.
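A version of that conversion helper, closely following the mapping from the Firebase documentation:

```kotlin
import com.google.firebase.ml.vision.common.FirebaseVisionImageMetadata

// Map CameraX rotation degrees to ML Kit's rotation constants
private fun degreesToFirebaseRotation(degrees: Int): Int = when (degrees) {
    0 -> FirebaseVisionImageMetadata.ROTATION_0
    90 -> FirebaseVisionImageMetadata.ROTATION_90
    180 -> FirebaseVisionImageMetadata.ROTATION_180
    270 -> FirebaseVisionImageMetadata.ROTATION_270
    else -> throw IllegalArgumentException("Rotation must be 0, 90, 180, or 270.")
}
```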

Now we can use CameraX’s ImageAnalysis class to get the image and then convert it into a FirebaseVisionImage.
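A sketch with the alpha-era ImageAnalysis API, assuming the `degreesToFirebaseRotation` helper above is in scope:

```kotlin
import androidx.camera.core.CameraX
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageAnalysisConfig
import com.google.firebase.ml.vision.common.FirebaseVisionImage

val analysisConfig = ImageAnalysisConfig.Builder()
    // Only analyze the latest frame; drop frames we can't keep up with
    .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
    .build()

val imageAnalysis = ImageAnalysis(analysisConfig)
imageAnalysis.setAnalyzer { imageProxy, rotationDegrees ->
    val mediaImage = imageProxy.image ?: return@setAnalyzer
    val imageRotation = degreesToFirebaseRotation(rotationDegrees)
    val visionImage = FirebaseVisionImage.fromMediaImage(mediaImage, imageRotation)
    // visionImage can now be handed to the image labeler
}

CameraX.bindToLifecycle(this, imageAnalysis)
```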

As you can see we have passed mediaImage and imageRotation as parameters to create a FirebaseVisionImage.

Run the image labeler

We require a FirebaseVisionImageLabeler, and we need to configure it with the name of the model and the confidence threshold we want. This is the same threshold that we evaluated in the Firebase console. If you don’t specify a threshold, it takes the default value of 0.5f.
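The configuration might look like this; "SeeFood" is again a placeholder for your model’s name, and 0.7f stands in for whatever threshold worked best in your console evaluation:

```kotlin
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.label.FirebaseVisionOnDeviceAutoMLImageLabelerOptions

// Point the labeler at the remote model and set the confidence threshold
val labelerOptions = FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder()
    .setRemoteModelName("SeeFood")
    .setConfidenceThreshold(0.7f)
    .build()

val labeler = FirebaseVision.getInstance().getOnDeviceAutoMLImageLabeler(labelerOptions)
```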

Before we start processing the image, it is recommended to first check whether the model has been downloaded successfully and proceed only if it has. The following method calls the success listener as soon as the model is downloaded, or immediately if the model has already been downloaded once.
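A sketch of that check, assuming `remoteModel` is the registered model from earlier:

```kotlin
import android.util.Log
import com.google.firebase.ml.common.modeldownload.FirebaseModelManager

FirebaseModelManager.getInstance()
    .downloadRemoteModelIfNeeded(remoteModel)
    .addOnSuccessListener {
        // Model is on the device (freshly downloaded or already cached);
        // it is now safe to run the labeler
    }
    .addOnFailureListener { e ->
        Log.e("SeeFood", "Model download failed", e)
    }
```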

Now we can finally process the image and see which labels our model was able to find in it.

If the image labeling is successful, we get a list of FirebaseVisionImageLabel objects from which we can extract useful information in the following way.
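Putting it together, a sketch of running the labeler on the `visionImage` we built earlier:

```kotlin
import android.util.Log

labeler.processImage(visionImage)
    .addOnSuccessListener { labels ->
        for (label in labels) {
            val text = label.text             // the label, e.g. a folder name from the dataset
            val confidence = label.confidence // model confidence between 0 and 1
            // Update the UI with the detected food item here
        }
    }
    .addOnFailureListener { e ->
        Log.e("SeeFood", "Image labeling failed", e)
    }
```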

That’s it, you have made it to the end. Congrats! Now you have a classifier that you can use to classify anything you want, and after downloading the model once, it will always run on the device without making any API calls.

The link to the project is given below. You can directly use the code to build a classifier of your choice; just upload your own training data to Firebase and you are good to go.

Thanks for reading! If you enjoyed this story, please click the 👏 button and share it to help others!

If you have any kind of feedback, feel free to connect with me on Twitter.