In the landscape of new technologies capable of revolutionizing our daily lives, few are as tantalizing as facial recognition. With all the recent controversy around Clearview AI, people are paying more and more attention to the technology, and they're also eager to understand how it works and its limitations. This article won't cover the ethical issues, but I'll try my hand at explaining some facial recognition and detection techniques.

In recent months, Apple has been pushing new features and major improvements for its Vision API, its main framework for all things related to computer vision. The Vision API allows for quick, easy, and intuitive camera sessions while offering a multitude of possibilities such as:

Native face detection

Core ML model handling for image processing (e.g. classification, object detection)

Barcode recognition

Text recognition
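To give a feel for how the face detection API fits together, here is a minimal sketch using Vision's `VNDetectFaceRectanglesRequest` and `VNImageRequestHandler`. The function name `detectFaces` and the way the image is supplied are illustrative choices, not part of any specific project:

```swift
import UIKit
import Vision

// A minimal sketch: run Vision face detection on a UIImage.
// The function name and image source are illustrative assumptions.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    // The completion handler receives VNFaceObservation results,
    // each carrying a normalized bounding box for one detected face.
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard let observations = request.results as? [VNFaceObservation] else { return }
        for face in observations {
            // boundingBox uses normalized coordinates (0...1),
            // with the origin in the bottom-left corner.
            print("Face at \(face.boundingBox)")
        }
    }

    // Bind a request handler to the image and perform the request.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Face detection failed: \(error)")
    }
}
```

Note that `perform(_:)` runs synchronously, so in a real app you would typically dispatch it off the main queue.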

Since the most-used cameras are (by far) the ones we have in our pockets, this tutorial will cover native mobile solutions for iOS.

I have included code in this article where it's most instructive. Full code and data can be found on my GitHub page. Let's get started!