Core ML was announced at WWDC in June 2017, where Apple “introduc[ed] machine learning frameworks just for you guys to enable all the awesome things.” The announcement promised image recognition, word prediction in keyboards, and surfacing pictures of only red flowers, all happening directly on the device. So, eight months later, what awesome things are we doing with Core ML and AI on mobile devices?

Not Hotdog app in action!

There have been a few notable apps that made some waves on Hacker News (Not Hotdog, InstaSaber), but most of the machine learning advances promised by Apple haven’t shown up in many apps yet.

Even with Core ML, it is still not easy to deliver a production-ready app that uses machine learning.

There are still many steps to get a machine learning model running on a mobile device. Most of the time, you have to train a model using TensorFlow, hope that it successfully converts to Core ML, and then write code to interface with low-level inputs such as accelerometer data and camera output. The steps are not straightforward, and the pitfalls along the way are not easy for everyone to navigate.
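For the conversion step, the usual path is Apple's coremltools Python package. Here is a minimal sketch of that step, assuming a Keras-trained image classifier and the Keras converter that early coremltools releases shipped with; the model file name, input names, and class labels are placeholders, not a real project:

```python
import coremltools

# Convert a Keras image classifier (saved as an .h5 file) to Core ML.
# "food_classifier.h5" and the class labels below are placeholders for
# whatever model you actually trained.
coreml_model = coremltools.converters.keras.convert(
    "food_classifier.h5",
    input_names="image",
    image_input_names="image",  # expose the input as an image rather than a raw multiarray
    class_labels=["hotdog", "not_hotdog"],
)

coreml_model.short_description = "Toy food classifier"
coreml_model.save("FoodClassifier.mlmodel")  # drag the .mlmodel file into Xcode
```

Even when this succeeds, you still have to write the Swift code that feeds camera frames (or accelerometer readings) into the model and interprets its outputs.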

But fret not! Developers are building apps using Core ML. I wrote a script to search GitHub for repos that include Core ML model files. The search surfaced many interesting projects and trends.
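A minimal version of such a search might look something like the sketch below. This is a simplified approximation, not the exact script: it uses GitHub's public repository search API with a plain "coreml" query, which only roughly matches repos that actually contain .mlmodel files.

```python
import requests

def search_coreml_repos(pages=3):
    """Collect GitHub repositories that mention Core ML.

    Simplified approximation: query the public repository search API
    for the keyword "coreml" and return the matches, sorted by stars.
    """
    repos = []
    for page in range(1, pages + 1):
        resp = requests.get(
            "https://api.github.com/search/repositories",
            params={"q": "coreml", "sort": "stars", "per_page": 100, "page": page},
            headers={"Accept": "application/vnd.github.v3+json"},
        )
        resp.raise_for_status()
        repos.extend(resp.json().get("items", []))
    return repos

if __name__ == "__main__":
    repos = search_coreml_repos()
    print(f"Found {len(repos)} repositories mentioning Core ML")
    for repo in repos[:10]:
        print(f"{repo['stargazers_count']:>6}  {repo['full_name']}")
```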

Adoption has been relatively slow but has increased steadily over time, with a flurry of activity right after the WWDC announcement. Apple provides a handful of open-source models (MobileNet, SqueezeNet, GoogLeNet, ResNet50, and VGG16) pre-converted to Core ML format, and about 70% of the repos use a pre-converted Core ML model.

Over 50 of the repos are forks of, or variations on, SeeFood, an app inspired by Not Hotdog from Silicon Valley that detects what food is in front of the camera.

Those SeeFood variants mostly use the InceptionV3 model provided by Apple; no data science knowledge required!