Running the Model on Mobile Devices

Now that the model is in Caffe2, we can convert it to a format suitable for running on mobile devices. This can be achieved using Caffe2's mobile_exporter. We generate two model protobufs: one that initializes the model with the correct weights, and a second that actually runs the model. There are a couple of steps to this process:

1. Extract the workspace and the model proto from the internal representation.
2. Import the Caffe2 mobile exporter.
3. Call Export to obtain the predict_net and init_net, both needed for running the model on mobile.
4. Save the init_net and predict_net to files that we'll use for running them on mobile.

init_net contains the model parameters and the model input, while predict_net guides the init_net execution at run-time.
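The export step can be sketched as follows. Here `prepared_backend` is assumed to be the Caffe2 representation produced by the ONNX backend's `prepare()`, `export_for_mobile` is a hypothetical helper name, and Caffe2 must be installed for the function to actually run:

```python
def export_for_mobile(prepared_backend,
                      init_path="init_net.pb",
                      predict_path="predict_net.pb"):
    """Export a prepared Caffe2 backend to init_net/predict_net protobufs.

    Hypothetical helper for illustration; `prepared_backend` is assumed
    to come from caffe2.python.onnx.backend.prepare(). Requires Caffe2.
    """
    from caffe2.python.predictor import mobile_exporter  # deferred: needs Caffe2

    # Extract the workspace and the model proto from the internal representation.
    c2_workspace = prepared_backend.workspace
    c2_model = prepared_backend.predict_net

    # Export produces the two protobufs: init_net (weights) and predict_net (graph).
    init_net, predict_net = mobile_exporter.Export(
        c2_workspace, c2_model, c2_model.external_input)

    # Save both to disk for use on mobile.
    with open(init_path, "wb") as f:
        f.write(init_net.SerializeToString())
    with open(predict_path, "wb") as f:
        f.write(predict_net.SerializeToString())
    return init_net, predict_net
```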

We run the generated init_net and predict_net in Caffe2 using a cat image to verify that the output (a high-resolution cat image) is the same in both runs. We start with some standard imports:
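The imports look roughly like the following. The scikit-image and Caffe2 imports are shown commented out here because they require those packages to be installed:

```python
import numpy as np

# Image I/O and resizing (requires scikit-image):
# import skimage.io
# import skimage.transform

# Caffe2 modules for loading and running the nets (requires Caffe2):
# from caffe2.proto import caffe2_pb2
# from caffe2.python import core, net_printer, workspace
```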

We then use the scikit-image (skimage) library to process the cat image, just as we would for ordinary neural-network data preprocessing. After loading the image, we resize it to 224x224 and save the resized image.
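The tutorial uses skimage.transform.resize for this step; as a dependency-free sketch of the same idea, here is a nearest-neighbor resize in plain NumPy (skimage performs higher-quality interpolation):

```python
import numpy as np

def resize_nearest(img, out_h=224, out_w=224):
    """Nearest-neighbor resize of an HxWxC image array; a simple stand-in
    for skimage.transform.resize, which the tutorial itself uses."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h  # source row for each output row
    cols = np.arange(out_w) * in_w // out_w  # source column for each output column
    return img[rows[:, None], cols]

# Example: shrink a synthetic 448x448 RGB image down to 224x224.
img = np.random.rand(448, 448, 3)
small = resize_nearest(img)
print(small.shape)  # (224, 224, 3)
```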

The next step is to take the resized cat image and run the super-resolution model in a Caffe2 backend and save the output image. The following steps are involved in this:

1. Load the resized image and convert it to YCbCr format.
2. Run the mobile nets we generated so that the Caffe2 workspace is initialized correctly.
3. Use net_printer to inspect what the nets look like and identify the input and output blob names.
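The RGB-to-YCbCr conversion (which the imaging library handles for us in the tutorial) follows the standard full-range ITU-R BT.601 formula; the super-resolution model then operates on the Y channel only. A minimal NumPy version:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Full-range (JPEG-style) RGB -> YCbCr for a float HxWx3 array in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

# Pure white maps to Y=255 with neutral chroma (Cb = Cr = 128).
white = np.full((1, 1, 3), 255.0)
print(rgb_to_ycbcr(white)[0, 0])  # approximately [255. 128. 128.]
```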

Next we pass in the resized cat image for processing by the model and then run the predict_net to get the model output.
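This step can be sketched as the following helper (a hypothetical name; Caffe2 must be installed for it to run). `input_name` is the input blob name found via net_printer, and `img_y` is the preprocessed 1x1x224x224 Y-channel tensor:

```python
def run_on_caffe2(init_net_bytes, predict_net_bytes, input_name, img_y):
    """Run the exported nets in a local Caffe2 workspace (requires Caffe2).

    Hypothetical helper for illustration; mirrors the steps described above.
    """
    import numpy as np
    from caffe2.proto import caffe2_pb2
    from caffe2.python import workspace  # deferred: needs Caffe2

    init_net = caffe2_pb2.NetDef()
    init_net.ParseFromString(init_net_bytes)
    predict_net = caffe2_pb2.NetDef()
    predict_net.ParseFromString(predict_net_bytes)

    # Running init_net fills the workspace with the model parameters.
    workspace.RunNetOnce(init_net)
    # Feed the preprocessed image, then run predict_net to get the output blob.
    workspace.FeedBlob(input_name, img_y.astype(np.float32))
    workspace.RunNetOnce(predict_net)
    return workspace.FetchBlob(predict_net.external_output[0])
```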

Next we construct the final image and save it.
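Constructing the final image means combining the model's upscaled Y channel with separately upscaled Cb/Cr channels and converting back to RGB. A minimal NumPy sketch of the inverse conversion (the upscaling factor and the random stand-in data below are assumptions for illustration):

```python
import numpy as np

def ycbcr_to_rgb(ycbcr):
    """Inverse of the full-range BT.601 conversion, clipped to [0, 255]."""
    y = ycbcr[..., 0]
    cb = ycbcr[..., 1] - 128.0
    cr = ycbcr[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0)

# Combine the model's upscaled Y channel with upscaled chroma channels.
h, w = 672, 672                         # e.g. a 3x upscale of 224x224 (assumed factor)
y_out = np.random.rand(h, w) * 255.0    # stands in for the model's output Y channel
cb_up = np.full((h, w), 128.0)          # stands in for the upscaled Cb channel
cr_up = np.full((h, w), 128.0)          # stands in for the upscaled Cr channel
rgb = ycbcr_to_rgb(np.stack([y_out, cb_up, cr_up], axis=-1))
print(rgb.shape)  # (672, 672, 3)
```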

Let’s now execute the model on a mobile device and obtain the model output. The following steps are involved in doing this:

1. Specify a binary that will be used to execute the model on mobile, and export the model output so it can be retrieved later.
2. Push the binary and the init_net and predict_net we saved earlier to the device.
3. Serialize the input image blob to a blob proto and send it to the device for execution.
4. Push the input image blob to the device with adb.
5. Run the net on mobile.
6. Pull the model output from the device with adb and save it to a file.
7. Recover the output content and post-process it using the same steps we followed earlier.
8. Save the image.
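The push/run/pull workflow above can be driven from Python via subprocess and adb. The sketch below is illustrative only: the binary name, file names, and blob names are assumptions, and it requires adb plus a connected device to actually run:

```python
import subprocess

def adb(*args):
    """Run one adb command and fail loudly if it errors."""
    subprocess.run(["adb", *args], check=True)

def run_on_device(input_blob, output_blob,
                  binary="speed_benchmark",
                  device_dir="/data/local/tmp"):
    """Sketch of the on-device run (requires adb and a connected device).

    Hypothetical helper: the file names, flags, and blob names here are
    assumptions for illustration, not a verified command line.
    """
    # Push the binary, the two net protobufs, and the serialized input blob.
    for f in (binary, "init_net.pb", "predict_net.pb", "input.blobproto"):
        adb("push", f, device_dir)
    # Run the net on the device, writing the output blob to a file.
    adb("shell",
        f"{device_dir}/{binary} "
        f"--init_net={device_dir}/init_net.pb "
        f"--net={device_dir}/predict_net.pb "
        f"--input={input_blob} --input_file={device_dir}/input.blobproto "
        f"--output_folder={device_dir} --output={output_blob}")
    # Pull the output blob back to the host for post-processing.
    adb("pull", f"{device_dir}/{output_blob}", "output.blobproto")
```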

Conclusion and Further Reading

You can compare the cat_.jpg from the pure Caffe2 execution and the cat_mobile.jpg from the mobile execution. If the two images don’t look the same, it means that something went wrong during the mobile execution. For further reading on Caffe2 mobile, check out this AI Camera Demo and Tutorial.
