Step 2: Create a new dataset for object detection

Once in the Console, open the navigation menu on the left and scroll down until you find the Vision tab. Click on it.

Once there, click on the Get Started button displayed on the Object Detection card. Make sure that your intended project is selected in the dropdown box at the top:

You might be prompted to enable the AutoML Vision APIs. You can do so by clicking the Enable API button displayed on the page.

Lastly, click on the New Dataset button, give it a descriptive name, and then select Object Detection under the model objectives.
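If you'd rather script this step than click through the Console, the same dataset can be created with the google-cloud-automl Python client. This is only a sketch, assuming that library is installed and your environment has GCP credentials; the project ID and dataset name below are placeholders:

```python
def dataset_parent(project_id: str, region: str = "us-central1") -> str:
    # AutoML resources are addressed under a project/location path;
    # us-central1 is the region AutoML Vision operates in.
    return f"projects/{project_id}/locations/{region}"


def create_detection_dataset(project_id: str, display_name: str):
    # Imported inside the function so the sketch can be read (and the
    # helper above used) without the client library installed.
    from google.cloud import automl

    client = automl.AutoMlClient()
    dataset = automl.Dataset(
        display_name=display_name,
        # This metadata field is what marks the dataset as Object Detection.
        image_object_detection_dataset_metadata=(
            automl.ImageObjectDetectionDatasetMetadata()
        ),
    )
    return client.create_dataset(
        parent=dataset_parent(project_id), dataset=dataset
    )
```

Calling `create_detection_dataset("my-project", "humans")` returns a Dataset object whose `name` field contains the dataset ID that later steps operate on.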

Step 3: Importing images

Once the dataset is created, you’ll be asked to upload the images to be used in the training process, along with the location of the Cloud Storage bucket where those images will be stored.

Since this model will be used to detect humans, I’ve already prepared a dataset containing images of humans. If you want to prepare your own dataset easily, I’ve explained how to do so in a previous blog post, which you can find here:

Once you have your dataset prepared locally, click on Select Images and choose all the images that you need your model to be trained on. After this, click on the Browse button next to the GCS path and select the folder named your-project-name.appspot.com:

Once done, press Continue and wait for the import process to complete.
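Selecting images one by one works fine for a small dataset, but AutoML can also import from a CSV manifest placed in the same bucket, where each row is a gs:// URI optionally prefixed with TRAIN, VALIDATION, or TEST to pin the data split. A minimal helper to generate such a manifest; the bucket and file names are placeholders:

```python
def import_manifest(image_uris, split=None):
    """Build an AutoML import CSV: one image URI per row, optionally
    prefixed with TRAIN/VALIDATION/TEST to control the data split."""
    prefix = f"{split}," if split else ""
    return "\n".join(prefix + uri for uri in image_uris) + "\n"


# Placeholder URIs; in this tutorial the bucket is your-project-name.appspot.com.
uris = [f"gs://my-bucket/humans/img_{i:03d}.jpg" for i in range(3)]
print(import_manifest(uris, split="TRAIN"))
```

Upload the resulting file to the bucket and point the import dialog at it instead of picking files manually.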

Step 4: Outline objects to be identified

Once the importing is finished, click on the Images tab and you should see all your imported images there. From here, click on the Add New Label button and name the label(s) that you want to identify. I personally want to identify humans in a picture, so I will create a single label called “human”:

Once you’ve created a label, the next step is to teach the model what a human looks like! To do this, open the imported images and start drawing bounding boxes around humans in each image:

Don’t forget to save after drawing the box around the object that’s to be detected.

You need to do this for at least 10 images, but Google recommends labeling around 100 images to get better accuracy. Once you have enough images labeled, move on to the next and final step.
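Drawing boxes in the UI is the simplest route, but if you already have box annotations from another tool, they can be supplied in the import CSV instead of drawn by hand. AutoML expects coordinates normalized to the [0, 1] range, and in the two-vertex shorthand a row looks like `set,gs_uri,label,x_min,y_min,,,x_max,y_max,,` (the empty fields stand in for the remaining corners). A sketch with placeholder paths, assuming your annotations are pixel-space (left, top, right, bottom) boxes:

```python
def bbox_row(uri, label, box_px, img_w, img_h, split="TRAIN"):
    """Convert a pixel-space (left, top, right, bottom) box into an
    AutoML object-detection CSV row with [0, 1]-normalized coordinates."""
    left, top, right, bottom = box_px
    x_min, y_min = left / img_w, top / img_h
    x_max, y_max = right / img_w, bottom / img_h
    # Two-vertex shorthand: empty fields replace the other two corners.
    return (f"{split},{uri},{label},"
            f"{x_min:.4f},{y_min:.4f},,,{x_max:.4f},{y_max:.4f},,")


# Placeholder image URI and box; a 640x480 image with a human
# roughly centered in the frame.
print(bbox_row("gs://my-bucket/humans/img_000.jpg", "human",
               (64, 32, 576, 448), img_w=640, img_h=480))
```

One row per object, so an image with three humans contributes three rows that share the same URI.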

Step 5: Training the TensorFlow Lite model

Once you’ve annotated enough images, it’s time to train the model. To do this, head over to the Train tab and click the button that says Start Training.

You will be prompted with a few choices once you click on this button. Make sure that you select Edge, as opposed to Cloud-based, for the first choice if you want a .tflite model that you can run locally.

Make sure that you select “Edge” if you want a .tflite file as the output.

Press Continue and choose how accurate you need the model to be. Note that higher accuracy generally means a larger, slower model, and vice versa:

Lastly, select your preferred training budget and begin training. For Edge-based models, the first 15 hours of training are free of cost:

Once model training has started, sit back and relax! You’ll get an email once your model has been trained.