Image by FunkyFocus from Pixabay

Impressed by the machine learning demo using Google ML Kit shown at Flutter Live '18, we explore the same idea with on-device machine learning instead of a cloud-hosted service.

Running machine learning models on mobile devices is resource-intensive. We chose TensorFlow because its TensorFlow Lite version solves many of those constraints.

In that Flutter demo, they used ImageStream from the Flutter camera plugin and detected objects using Google ML Kit.

In this article, we explore the image streaming option with TensorFlow Lite and detect objects with the YOLOv2 model on Android.

Why TensorFlow Lite?

From its documentation:

TensorFlow Lite has a new mobile-optimized interpreter, which has the key goals of keeping apps lean and fast. The interpreter uses a static graph ordering and a custom (less-dynamic) memory allocator to ensure minimal load, initialization, and execution latency. TensorFlow Lite provides an interface to leverage hardware acceleration, if available on the device. It does so via the Android Neural Networks API, available on Android 8.1 (API level 27) and higher. Recently, they even released a developer preview with a GPU backend for running compute-heavy machine learning models on mobile devices.

When I started the exploration, I noticed most of the examples used TensorFlow Mobile, the previous version, which is now deprecated. TensorFlow Lite is an evolution of TensorFlow Mobile and is the official solution for mobile and embedded devices.

Preparing the Model

I took the Tiny YOLOv2 model, which is a very small model suited for constrained environments like mobile, and converted it to a TensorFlow Lite model.

YOLOv2 uses Darknet-19, so to use the model with TensorFlow, we need to convert it from the Darknet format (.weights) to TensorFlow's Protocol Buffers format (.pb).

Translating the YOLO Model for TensorFlow (.weights to .pb)

I used darkflow to translate the Darknet model to TensorFlow. See here for darkflow installation instructions.

Here are the installation steps. Note that I changed the offset value to 20 in loader.py before installation:

git clone https://github.com/thtrieu/darkflow.git
cd darkflow
pip install Cython
sed -i -e 's/self.offset = 16/self.offset = 20/g' darkflow/utils/loader.py
python3 setup.py build_ext --inplace
pip install .

I moved the downloaded YOLO weights into the darkflow directory and converted them to .pb format using darkflow's flow command with the --savepb flag, for example: flow --model cfg/yolov2-tiny.cfg --load bin/yolov2-tiny.weights --savepb

You will see two files under the built_graph directory: a .pb file and a .meta file. The .meta file is a JSON dump of everything in darkflow's meta dictionary, containing information necessary for post-processing, such as anchors and labels.
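As a quick sanity check, the .meta file can be inspected with a few lines of Python. This is a minimal sketch; the file name below is an example and depends on your model config, and the "labels"/"anchors" keys follow darkflow's dump format:

```python
import json

def load_meta(path):
    """Load darkflow's .meta JSON dump (labels, anchors, and other
    post-processing information written next to the .pb file)."""
    with open(path) as f:
        return json.load(f)

# Example usage (path is an assumption based on darkflow's output layout):
# meta = load_meta("built_graph/yolov2-tiny.meta")
# print(meta["labels"])   # class names used for post-processing
# print(meta["anchors"])  # anchor box priors for decoding YOLO output
```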

Converting TensorFlow format (.pb) to TensorFlow Lite (.lite)

We can't use the TensorFlow .pb file directly with TensorFlow Lite, as TensorFlow Lite uses the FlatBuffers format (.lite) while TensorFlow uses Protocol Buffers.

With the TensorFlow Python installation, we get the tflite_convert command-line script to convert the TensorFlow format (.pb) to the TFLite format (.lite).

pip install --upgrade "tensorflow==1.7.*"
tflite_convert --help

The primary benefit of FlatBuffers comes from the fact that they can be memory-mapped, and used directly from disk without being loaded and parsed.

We need to define the input array size. I set it to 1 x 416 x 416 x 3 based on the YOLO model's input configuration (1 x width x height x channels). You can get all the meta information from the .meta JSON file.
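The shape argument above can be derived rather than hard-coded. A small sketch, assuming you read the network input size (height, width, channels) from your .meta dump; the helper name and the exact key holding the input size in your dump are assumptions to verify against your own file:

```python
def input_shape_arg(inp_size, batch=1):
    """Build the comma-separated N,H,W,C value expected by
    tflite_convert's --input_shape flag from a [height, width,
    channels] list (hypothetical helper for illustration)."""
    h, w, c = inp_size
    return ",".join(str(d) for d in (batch, h, w, c))

# Tiny YOLOv2 takes a 416 x 416 RGB input:
# input_shape_arg([416, 416, 3])  ->  "1,416,416,3"
```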

tflite_convert \
  --graph_def_file=built_graph/yolov2-tiny.pb \
  --output_file=built_graph/yolov2_graph.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,416,416,3 \
  --input_array=input \
  --output_array=output \
  --inference_type=FLOAT \
  --input_data_type=FLOAT

Now the .lite file built_graph/yolov2_graph.lite is ready to load and run in TensorFlow Lite.
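Before shipping the file to Android, it is worth verifying that the converted model actually loads. A minimal sketch using TensorFlow's Python interpreter API (tf.lite.Interpreter in recent releases; older 1.x builds exposed it under tf.contrib.lite), with the model path assumed from the conversion step above:

```python
def inspect_tflite(path):
    """Load a .lite/.tflite model and return its input and output
    tensor details. Requires a TensorFlow build that ships
    tf.lite.Interpreter."""
    import tensorflow as tf  # imported lazily; heavyweight dependency
    interpreter = tf.lite.Interpreter(model_path=path)
    interpreter.allocate_tensors()
    return interpreter.get_input_details(), interpreter.get_output_details()

# Example usage:
# inputs, outputs = inspect_tflite("built_graph/yolov2_graph.lite")
# print(inputs[0]["shape"])  # should match the 1 x 416 x 416 x 3 input
```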