With the launch of mobile machine learning frameworks like TensorFlow Lite, it’s never been easier for mobile developers to build new and exciting features into their apps. Powerful apps leverage machine learning under the hood to accomplish complex tasks like identifying crop disease or automatically creating captions for pictures, all in real time and without Internet connectivity.

In part one of this tutorial, we went over how to convert a custom model to TensorFlow Lite and discussed some tips and tricks for evaluating the TensorFlow graph and trimming unnecessary layers. In the end, we prepared a model trained on MNIST data for inference.

The good news is that we’re finished with the hardest part: training and converting the model. To review, here are some details about our finished model (mnist.tflite):

input size of 1x28x28x1 (batch size x image width x image height x number of channels)

output size of 1x10 (one confidence score per handwritten digit, 0–9)
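To make the 1x10 output concrete: each of the ten values is a confidence score for one digit class, and the predicted digit is simply the index of the largest score (argmax). A minimal sketch of that step in Kotlin, with an illustrative `scores` array standing in for the interpreter's real output row:

```kotlin
// The model emits one confidence score per digit class (0-9).
// The predicted digit is the index of the highest score (argmax).
fun predictedDigit(scores: FloatArray): Int {
    require(scores.size == 10) { "expected a 1x10 output row" }
    var best = 0
    for (i in 1 until scores.size) {
        if (scores[i] > scores[best]) best = i
    }
    return best
}
```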

In this post, we’ll go over how we take that model and create a simple Android app.

Adding the MNIST model to your app:

Let’s fire up Android Studio and add our converted model to the app.

1. First, create an assets folder (src/main/assets) and add your model.
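The interpreter reads the model from a memory-mapped buffer rather than an ordinary byte array. On Android you would typically open the asset via an AssetFileDescriptor and map it with the descriptor's offset and length; the sketch below shows the same FileChannel mapping against a plain file path, so the Android-specific pieces are left out:

```kotlin
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Memory-map a .tflite file so the interpreter can read it without
// copying it onto the Java heap. On Android, you would get the start
// offset and declared length from an AssetFileDescriptor instead of
// mapping the whole file as done here.
fun mapModel(path: String): MappedByteBuffer {
    FileInputStream(path).use { stream ->
        val channel = stream.channel
        return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size())
    }
}
```

Memory-mapping only works if the file is stored uncompressed in the APK, which is why the aaptOptions step later in this section matters.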

2. Next, add the TensorFlow Lite dependency in your app’s build.gradle file.

    dependencies {
        implementation 'org.tensorflow:tensorflow-lite:+'
    }

This downloads the latest stable version, but typically you’ll want to pin the library to a specific version number for reproducible builds. Since TensorFlow Lite is in active development, you might want to use the nightly builds when you’re testing things out.
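For example, pinning an explicit release instead of using `+` might look like this (the version number here is illustrative; check the TensorFlow Lite release notes for the current one):

```gradle
dependencies {
    // Pin an explicit release for reproducible builds; '+' would
    // silently pull whatever the newest version is at build time.
    implementation 'org.tensorflow:tensorflow-lite:1.13.1'
}
```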

3. Stay in the build.gradle file and add these aaptOptions so that the model file does not get compressed when the APK is built.

    android {
        aaptOptions {
            noCompress "tflite"
            noCompress "lite"
        }
    }

4. Re-sync your Gradle files and make sure there are no build errors.