This tutorial is the 10th part in the series, ML Kit for Mobile Developers. If you’re not quite caught up, you can start here:

Series Pit Stops

Introducing Firebase ML Kit Face Detection API

ML Kit’s Face Detection API is one of the APIs I had left out of the ML Kit series (until now), mainly because I couldn’t find a proper use case for it.

But recently, as I started working on AfterShoot, I came to realize that this API can be used to effectively detect blinks in a picture, which is what this blog post is going to be about!

The Face Detection API takes in an image and scans it for any human faces that are present, which is different from facial recognition (in which Big Brother is infamously watching you while you sleep). It’s also an on-device API, which means that all the ML inference happens on your device and no personal data is sent to a third-party vendor.

For each face it detects, the API returns the following parameters:

- Coordinates of a rectangular bounding box around that face
- Coordinates of rectangular bounding boxes around the eyes, nose, and mouth for that face
- The probability that the face in the picture is smiling
- The likelihood that the eyes in that face are open or closed
- A face contour outlining the face, eyes, nose, and mouth
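To give a sense of how the eye-open likelihoods can be consumed, here’s a minimal Kotlin sketch that turns the two per-face probabilities into an open/closed decision. The function name and the 0.4 threshold are my own assumptions for illustration, not values from the API itself; the -1 sentinel mirrors the "uncomputed" value the API can return when classification wasn’t run.

```kotlin
// Assumed threshold: probabilities at or above this count as "open".
const val EYE_OPEN_THRESHOLD = 0.4f

// Hypothetical helper: returns true when both eyes are likely open,
// false when likely closed, and null when the API didn't compute
// the probabilities (signalled here by a negative value).
fun areEyesOpen(leftEyeOpenProb: Float, rightEyeOpenProb: Float): Boolean? {
    if (leftEyeOpenProb < 0f || rightEyeOpenProb < 0f) return null
    return leftEyeOpenProb >= EYE_OPEN_THRESHOLD &&
            rightEyeOpenProb >= EYE_OPEN_THRESHOLD
}
```

A stricter app might treat even one closed eye as a blink; the all-or-nothing rule above is just one reasonable choice.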

You can read more about the API here:

In this blog post, we’ll look at how we can use this library to track whether a person captured in the live camera preview has their eyes open or closed.
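For the eye-open likelihoods to be populated, the face detector has to be configured with classification enabled. The configuration fragment below is a sketch of what that setup looks like with the Firebase ML Kit face detection API (it only compiles inside an Android project with the Firebase ML Vision dependency added, so treat it as illustrative):

```kotlin
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions

// Classification mode must be enabled, otherwise the smiling and
// eye-open probabilities come back as "uncomputed".
val options = FirebaseVisionFaceDetectorOptions.Builder()
    .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
    // FAST trades some accuracy for speed, which suits a live camera preview.
    .setPerformanceMode(FirebaseVisionFaceDetectorOptions.FAST)
    .build()

val detector = FirebaseVision.getInstance().getVisionFaceDetector(options)
```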

This can be used in a car monitoring system, for instance, to detect if the driver is feeling sleepy or otherwise distracted. This system could then alert them if their eyes are closed for a prolonged period!
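The "prolonged period" check in that scenario is simple to express: count consecutive frames in which both eyes read as closed and raise an alert once the run gets too long. The class below is a hypothetical sketch of that idea; the class name, frame limit, and 0.4 threshold are all my own assumptions, not anything prescribed by the API.

```kotlin
// Hypothetical monitor: feed it one pair of eye-open probabilities per
// camera frame; it reports true once the eyes have stayed closed for
// maxClosedFrames consecutive frames.
class DrowsinessMonitor(private val maxClosedFrames: Int = 15) {
    private var closedFrames = 0

    fun onFrame(leftEyeOpenProb: Float, rightEyeOpenProb: Float): Boolean {
        // Assumed rule: both eyes below 0.4 counts as "closed" this frame.
        val closed = leftEyeOpenProb < 0.4f && rightEyeOpenProb < 0.4f
        closedFrames = if (closed) closedFrames + 1 else 0
        return closedFrames >= maxClosedFrames
    }
}
```

At a typical preview rate of ~30 fps, a limit of 15 frames corresponds to roughly half a second of continuously closed eyes before the alert fires.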

Before we go ahead, here are some screenshots from the app: