A few weeks ago I looked at the speed of Core ML models running on various Apple devices. Apple’s new A12 Bionic chip paired with Core ML 2 made the latest generation of iPhones and iPads more than 10x faster than previous generations. While faster is usually better, large differences in performance across devices make it hard for developers to keep user experience consistent. Apple’s (relatively) small product family and aggressive upgrade strategy mitigate this problem to some degree. It’s not prohibitive to keep a few devices around the office for testing. The same can’t be said for Android.

Fragmentation in both hardware and software makes it infeasible to test apps on every combination of chipset and Android version. We built Fritz to give developers the critical performance analytics they need to understand how mobile ML models are running in the wild. We measure model execution times on every device so you’re not flying blind.
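A minimal sketch of what measuring per-inference latency on-device can look like. The helper names and median-based aggregation here are illustrative assumptions, not Fritz’s actual instrumentation; in a real Android app, the callable would wrap the model invocation (e.g. a TensorFlow Mobile `run()` call).

```kotlin
// Illustrative sketch: wall-clock timing around a single model invocation.
// `runModel` stands in for the actual inference call.
fun timeMillis(runModel: () -> Unit): Double {
    val start = System.nanoTime()
    runModel()
    return (System.nanoTime() - start) / 1_000_000.0
}

// Aggregate over many invocations so one-off spikes (thermal throttling,
// background load) don't dominate; the median is robust to such outliers.
fun medianLatencyMs(runs: Int, runModel: () -> Unit): Double {
    val samples = (1..runs).map { timeMillis(runModel) }.sorted()
    return samples[samples.size / 2]
}
```

Reporting a median (or another robust statistic) per device, rather than a single sample, matters precisely because these are in-the-wild measurements.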

I pulled data for TensorFlow Mobile models running on 70 different Android devices and compared their performance to the Google Pixel 2. Keep in mind that this data comes from devices being used in the wild. These aren’t laboratory benchmarks, so some results may be counterintuitive.
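As a sketch of how the comparison works, each device’s latency can be normalized against the Pixel 2 baseline. The device names and latency figures below are made up for illustration; only the normalization formula reflects the comparison described above.

```kotlin
// Hypothetical median inference latencies in milliseconds (illustrative only).
val medianLatency = mapOf(
    "Google Pixel 2" to 100.0,
    "Device A" to 250.0,
    "Device B" to 80.0,
)

// Relative speed vs. the baseline device: values above 1.0 mean faster
// than the Pixel 2, values below 1.0 mean slower.
fun relativeSpeed(device: String, baseline: String = "Google Pixel 2"): Double =
    medianLatency.getValue(baseline) / medianLatency.getValue(device)
```

With this normalization, a device taking 250 ms against the Pixel 2’s 100 ms scores 0.4, i.e. it runs the model at 40% of the Pixel 2’s speed.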