Recently, I wrote two articles about training object detection Core ML models for iOS devices using TensorFlow and Turi Create frameworks.

To train those models, I used a tool I’d built called MakeML. It allows you to easily create a dataset, label it, and start training. There’s no need to write code with MakeML, so every iOS developer can train an object detection machine learning model in a couple of hours. On paper, at least…

While building MakeML, and after talking to a bunch of users, I realized that three major bottlenecks stand in the way of developers creating and integrating object detection into their apps:

1. Collecting and processing data to create a dataset.
2. Setting up a training pipeline and exporting the trained model in a format that's ready to run in their apps (e.g. Core ML or TF Lite).
3. Understanding possible use cases in production apps.

Let’s take a closer look at each of these bottlenecks.