There are many situations in which running deep learning inference on local devices is preferable for both individuals and companies: imagine traveling with no reliable internet connection available, or dealing with the privacy concerns and latency issues of transferring data to cloud-based services. Edge computing addresses these problems by processing and analyzing data at the edge of the network.

Take the “Ok Google” feature as an example — by training “Ok Google” with a user’s voice, that user’s mobile phone is activated whenever it captures the keywords. This kind of small-footprint keyword-spotting (KWS) inference usually happens on-device, so you don’t have to worry that the service provider is listening to you all the time. The cloud-based services are only initiated after you issue a command. Similar concepts extend to applications on smart home appliances and other IoT devices, where we need hands-free voice control without internet access.

What’s more, edge computing not only brings AI to the IoT world but also opens up many other possibilities and benefits. For example, we can preprocess image or voice data into a compressed representation on-device and then send it to the cloud, which resolves both the privacy and latency issues.

During my time at Insight, I deployed a pretrained WaveNet model on Android using TensorFlow. My goal was to explore the engineering challenge of bringing deep learning models onto devices and making things work! In this post, I’ll quickly walk you through the process of building a general speech-to-text recognition application on Android with TensorFlow. I hope after this post you’ll be able to build your own DL-powered applications next time!

Figure 1. An overview of the process. Let’s look into these three steps that bring WaveNet on Android.

Environment Info

Pixel, CPU type: ARM64

Android 7.1.1

Android NDK 15.2

Android gradle plugin 2.3.0

TensorFlow 1.3.0

bazel 0.5.4-homebrew

Detailed tutorials and implementation can be found in my GitHub repository.

STEP 1: Model Compression

To fit deep learning models onto mobile/embedded devices, we should aim to reduce the memory footprint of the model, shorten the inference time, and minimize power usage. There are several ways to address these factors, such as quantization, weight pruning, or distilling a big model into a smaller one.

For my project, I used the quantization tools in TensorFlow for model compression. I applied only weight quantization to shrink the model, because in my tests on a Mac, the full eight-bit conversion did not provide additional benefits such as reducing the inference time (the full eight-bit model also failed to run on the Pixel due to errors in requant_range). Inference time actually doubled, because the eight-bit quantization tool was not optimized for the CPU. If you’re interested in learning more about the practical considerations around quantization, here is a great post by Pete Warden.
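Conceptually, weight quantization stores each float32 weight as an eight-bit integer plus a per-tensor range, which is roughly where the ~4x size reduction comes from. Below is a minimal NumPy sketch of this min/max linear scheme — an illustration of the idea, not the actual TensorFlow implementation:

```python
import numpy as np

def quantize_weights(w):
    # Map float32 weights onto uint8 over the [min, max] range,
    # keeping the range so the weights can be dequantized at load time.
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against a constant tensor
    q = np.round((w - w_min) / scale).astype(np.uint8)
    return q, w_min, scale

def dequantize_weights(q, w_min, scale):
    # Recover approximate float32 weights; error is at most scale / 2 per value.
    return q.astype(np.float32) * scale + w_min
```

Each weight now costs one byte instead of four, at the price of a small rounding error bounded by half a quantization step.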

To quantize the weights for your model:

Write the model into a protocol buffer file. Install and build TensorFlow from source. Then run the following commands under your TensorFlow directory:

bazel build tensorflow/tools/graph_transforms:transform_graph

bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=/your/.pb/file \
  --outputs="output_node_name" \
  --out_graph=/the/quantized/.pb/file \
  --transforms='quantize_weights'

In my case, the size of the pretrained WaveNet model went down from 15.5 MB to 4.0 MB after quantizing the weights. Now move this model file to the ‘assets’ folder of your Android project.

STEP 2: TensorFlow Library for Android

To build an Android app with TensorFlow, I recommend starting with the TensorFlow Android Demo. For my project, I used the TF speech example as a template. The gradle file in the example helps us build and compile the TF libraries for Android. However, the prebuilt TF library may not include all the ops our model needs, so we have to figure out the full list of ops in WaveNet and compile them into a .so file for the Android APK. To find the complete list of ops, what worked for me was to first write out the graph details using tf.train.write_graph and then run the following command in the terminal:

grep "op: " PATH/TO/mygraph.txt | sort | uniq | sed -E 's/^.+"(.+)".?$/\1/g'
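If grep and sed aren’t handy, the same extraction can be done in plain Python with the standard library — the regex mirrors the shell pipeline above, and `fragment` is just a made-up GraphDef snippet for illustration:

```python
import re

def list_ops(graph_pbtxt):
    # Extract every `op: "Name"` entry from a text-format GraphDef
    # and return the sorted set of unique op names.
    return sorted(set(re.findall(r'op: "([^"]+)"', graph_pbtxt)))

# Example on a tiny GraphDef fragment:
fragment = '''
node { name: "x" op: "Placeholder" }
node { name: "w" op: "Const" }
node { name: "y" op: "MatMul" input: "x" input: "w" }
'''
print(list_ops(fragment))  # ['Const', 'MatMul', 'Placeholder']
```

Run it on the mygraph.txt you wrote out with tf.train.write_graph to get the op list to check against the BUILD file.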

Next, edit the BUILD file in /tensorflow/tensorflow/core/kernels/ by adding the missing ops to ‘android_extended_ops_group1’ or ‘android_extended_ops_group2’ in the Android libraries section. You can also make the .so file smaller by removing unneeded ops. Now, run:

bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so \
  --crosstool_top=//external:android/crosstool \
  --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
  --cpu=armeabi-v7a

And you’ll find the libtensorflow_inference.so file in:

bazel-bin/tensorflow/contrib/android/libtensorflow_inference.so

In addition to the .so file, we also need a JAR file. Run:

bazel build //tensorflow/contrib/android:android_tensorflow_inference_java

You’ll find the file at:

bazel-bin/tensorflow/contrib/android/libandroid_tensorflow_inference_java.jar

Now, move both the .so and .jar files to the ‘libs’ folder of your Android project.

STEP 3: Data Preprocessing on Android

Finally, let’s process the input data into the format our model was trained on. In audio systems, raw speech waveforms are transformed into Mel-frequency cepstral coefficients (MFCCs), which mimic the way human ears perceive sound. TensorFlow has an audio op that can perform this feature extraction. However, it turns out there are variations in how this conversion is implemented. As shown in Figure 2, the MFCCs from the TensorFlow audio op differ from the MFCCs given by librosa, the Python library the pretrained WaveNet’s authors used to convert their training data.

Figure 2. MFCC from librosa and TensorFlow audio ops are at different scales.

If you are training your own model or retraining a pretrained one, be sure to think about the on-device data pipeline when preprocessing your training data. I ended up reimplementing the librosa MFCC in Java to take care of the conversion.
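For reference, the core steps of a librosa-style MFCC — a triangular mel filterbank applied to a power spectrogram, a log, then a DCT-II — can be sketched in plain NumPy. This is a simplified illustration of the pipeline any port (Java or otherwise) has to reproduce, not the exact librosa implementation; librosa’s defaults differ in details such as the mel-scale variant and dB log, which is precisely why outputs from different libraries end up on different scales:

```python
import numpy as np

def hz_to_mel(f):
    # HTK-style mel conversion; librosa's default ("Slaney") differs slightly.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters with centers spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(power_spec, sr, n_mels=40, n_mfcc=13):
    # power_spec: (frames, n_fft//2 + 1) power spectrogram.
    n_fft = 2 * (power_spec.shape[1] - 1)
    mel_energies = power_spec @ mel_filterbank(n_mels, n_fft, sr).T
    log_mel = np.log(mel_energies + 1e-10)
    # DCT-II across the mel axis; keep the first n_mfcc coefficients.
    n = np.arange(n_mels)
    basis = np.cos(np.pi / n_mels * (n + 0.5)[None, :] * np.arange(n_mfcc)[:, None])
    return log_mel @ basis.T
```

When porting, the practical takeaway is to compare each intermediate stage (filterbank output, log, DCT) against the Python reference on the same input, rather than only the final coefficients.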

Results

Figure 3 shows screenshots of the app with two examples. Since no language model is attached and recognition happens at the character level, you can see some misspellings in the sentences. Although I did not test it rigorously, I did see a small drop in accuracy after quantization, and overall the system is sensitive to surrounding noise.

Figure 3. Screenshots of the app with two examples.

The inference times reported below were averaged over 10 runs with a 5-second audio clip. Inference time increased slightly rather than decreased on both platforms, because weight quantization mainly helps shrink the file size but does little for inference time or power usage.

Table 1. Inference time before and after weight quantization. Tested on my Pixel phone and MacBook Air.
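The benchmarking setup behind Table 1 can be sketched as a small model-agnostic harness — `run_fn` here is a placeholder standing in for one end-to-end inference call on the 5-second clip:

```python
import time

def average_inference_time(run_fn, n_runs=10):
    # Call run_fn n_runs times and return the mean wall-clock
    # duration in seconds, using a monotonic high-resolution timer.
    durations = []
    for _ in range(n_runs):
        start = time.perf_counter()
        run_fn()
        durations.append(time.perf_counter() - start)
    return sum(durations) / len(durations)
```

Averaging over several runs smooths out warm-up effects and scheduler noise, which matter on mobile CPUs.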

What’s Next?

Here are two major directions that could take this project further, and that could also give the community additional tutorials and walkthroughs for deploying a real-world speech recognition system on the edge.