1. Prepare the image data

Before we can train the recognizers, we have to collect some image data with faces. If you are as excited about The Walking Dead as I am, you are probably familiar with our test subjects. I collected 4 images each of Daryl, Rick and evil boy Negan, 12 in total.

As I simply picked some images from the web, we have to extract a subimage centered on the face of the character shown in each image. Therefore we will detect each character's face using OpenCV's CascadeClassifier class:

The CascadeClassifier can be used for object detection and is created from an XML file containing the representation of a trained model. OpenCV provides some pre-trained models for different use cases such as face detection, eye detection, full-body detection and others. To detect the faces we will use the HAAR_FRONTALFACE_ALT2 model. Given a grayscale image, detectMultiScale will return the bounding rectangles of potential faces in the image. We can simply take the best detection result and return the subimage covered by its rectangle.

The images are labeled daryl<n>, rick<n> and negan<n>, with n ranging from 1 to 4. We will read the images and split them into a set of training and test samples as follows:

This will give us the following face images:

Resizing the images is necessary, as the recognizers expect the data to be equally sized images. We will use the first 3 images of each character for training and the 4th one to test the recognizers. Finally, we have to label the data. To train a recognizer we need to give it an array of images (trainImages) and an array holding the corresponding labels as numbers (labels). The data should look something like this:

TrainImages:

[Rick1, Rick2, Rick3, Daryl1, Daryl2, Daryl3, Negan1, Negan2, Negan3]

Labels:

[0, 0, 0, 1, 1, 1, 2, 2, 2]

2. Training the recognizers

Now that we have the data prepared, we will initialize the recognizers and train them:

You can also pass some parameters to the constructors of the recognizers to fine-tune them, but for the sake of simplicity we will go with the default settings. Note that the train method expects the trainImages and labels arrays to be of the same length, and the labels array has to contain at least 2 different labels.

3. Recognizing the faces

That’s it! We can now run the prediction of our test images:

Running the example should give us the following output:

eigen:

predicted daryl to be: daryl, confidence: 1245.68

predicted negan to be: negan, confidence: 2247.25

predicted rick to be: negan, confidence: 2502.47

fisher:

predicted daryl to be: daryl, confidence: 452.15

predicted negan to be: negan, confidence: 464.76

predicted rick to be: rick, confidence: 831.38

lbph:

predicted daryl to be: daryl, confidence: 108.37

predicted negan to be: negan, confidence: 119.33

predicted rick to be: rick, confidence: 105.65

Using only 3 images per class (character) we can already obtain pretty good results. Okay, the eigen recognizer made a single mistake and assumed Rick's face to belong to Negan, but I think you get the idea of how to implement face recognition with OpenCV. There is also a second face recognition example in the repo, which produces the result shown in the title image, just in case you're interested.

If you liked this article, feel free to clap and comment. I would also highly appreciate you supporting the opencv4nodejs project by leaving a star on GitHub. Furthermore, feel free to contribute or get in touch if you are interested :).