





In iOS 11, Apple provides frameworks for specific problem areas. In this article we will dive into the Vision API. Using the Vision framework we can process images or video to detect and recognize faces, detect barcodes, detect text, detect and track objects, and more.



For detecting objects using machine learning image analysis, follow this link: Real Time Camera Object Detection with Machine Learning - CoreML: Swift 4





In this article, we will walk through face detection. The Vision API has three main roles.

Getting Started:

1. Request:

Ex: VNDetectFaceRectanglesRequest, which detects faces in an image.

2. Request handler:

Ex: VNImageRequestHandler and VNSequenceRequestHandler. VNImageRequestHandler handles a single image, while VNSequenceRequestHandler handles a sequence of multiple images (for example, video frames).

3. Observation:

Provides information such as the bounding box of each detected face.
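Abstractly, the three roles connect like this. This is only a condensed preview of the code we will build step by step below, using the same request and handler types:

import UIKit
import Vision

// 1. Request: describes what to detect and receives the results.
let request = VNDetectFaceRectanglesRequest { request, error in
    // 3. Observation: each detected face arrives as a VNFaceObservation.
    guard let observations = request.results as? [VNFaceObservation] else { return }
    observations.forEach { print($0.boundingBox) }
}

// 2. Request handler: performs one or more requests against a single image.
if let cgImage = UIImage(named: "sample1")?.cgImage {
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}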

First, create a new project: open Xcode -> File -> New -> Project -> Single View App, then tap the Next button. Type the product name as 'Face Recognition', then tap Next and select a folder to save the project.



Before starting, download a sample image containing faces, add it to Assets.xcassets, and name it 'sample1'.



Now it's time to start writing code. Open ViewController.swift and add this line in the viewDidLoad() method:

guard let image = UIImage(named: "sample1") else { return }

Here image is a UIImage object that we will use for face detection. To display the image, add a UIImageView as a subview. Add the following code to viewDidLoad() after the guard statement:

let scaledHeight = view.frame.width / image.size.width * image.size.height
let imageView = UIImageView(image: image)
imageView.frame = CGRect(x: 0, y: 20, width: view.frame.width, height: scaledHeight)
view.addSubview(imageView)

Here scaledHeight is the imageView height, calculated by scaling the image's height by the ratio of the device width to the image width.
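For example, assuming a hypothetical 375-point-wide screen and a 1000 x 750 image (these numbers are just for illustration), the calculation works out like this:

import CoreGraphics

// Hypothetical sizes, for illustration only.
let deviceWidth: CGFloat = 375               // stands in for view.frame.width
let imageSize = CGSize(width: 1000, height: 750)

// Same formula as above: scale the height by (device width / image width).
let scaledHeight = deviceWidth / imageSize.width * imageSize.height
// 375 / 1000 * 750 = 281.25 points

The image keeps its aspect ratio because width and height are scaled by the same factor.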



Now build and run. You will see the image sized to the device width while keeping its aspect ratio.

Now we can start with the Vision API to detect face rectangles. For that we need to import Vision; add the following below 'import UIKit':

import Vision

As mentioned earlier, face detection involves three roles. First, we create a request using VNDetectFaceRectanglesRequest. Second, we use a request handler; since we are analyzing a single image, we will use VNImageRequestHandler. Performing the request is synchronous and can take time, so it's better to run the handler on a background thread. Third, the observations are delivered inside the request's completion handler. Let's implement everything in code. Add the following to the end of viewDidLoad():

let request = VNDetectFaceRectanglesRequest { (req, error) in
    if let error = error {
        print("Failed to detect faces", error)
        return
    }
    print(req.results)
}

guard let cgImage = image.cgImage else { return }

DispatchQueue.global(qos: .background).async {
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch let reqError {
        print("Error in req", reqError)
    }
}





Execution starts by creating the VNDetectFaceRectanglesRequest; note that this does not call its completion handler yet, it only describes the work to be done. The flow then goes from cgImage to the handler, and we perform the request on the handler. If the request succeeds, the VNDetectFaceRectanglesRequest completion handler is called: Vision analyzes the image and delivers the results as an array of VNFaceObservation.





Now build and run. You will see VNFaceObservation objects printed in the console.

Finally, we are getting observations. Now parse each VNFaceObservation to draw a rectangle over each detected face in the image. Copy the following code and use it to replace the line 'print(req.results)':

guard let observations = req.results as? [VNFaceObservation] else {
    fatalError("unexpected result type")
}

observations.forEach({ (observation) in
    DispatchQueue.main.async {
        print(observation.boundingBox)
        let x = self.view.frame.width * observation.boundingBox.origin.x
        let width = self.view.frame.width * observation.boundingBox.size.width
        let height = scaledHeight * observation.boundingBox.size.height
        let y = scaledHeight * (1 - observation.boundingBox.origin.y) - height

        let redSquare = UIView()
        redSquare.backgroundColor = UIColor.clear
        redSquare.layer.borderColor = UIColor.red.cgColor
        redSquare.layer.borderWidth = 2.0
        redSquare.frame = CGRect(x: x, y: y, width: width, height: height)
        self.view.addSubview(redSquare)
    }
})
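Vision reports boundingBox in a normalized coordinate space (values from 0 to 1) whose origin is at the bottom-left, while UIKit's origin is at the top-left. That is why the code above multiplies by the view width and scaledHeight and flips the y value. One way to keep that conversion in one place is a small helper like this. This is a sketch; the containerYOffset parameter is my addition, to account for the image view's y origin of 20 points, which the code above does not include:

import UIKit
import Vision

/// Converts a Vision bounding box (normalized, bottom-left origin)
/// into a UIKit frame (points, top-left origin).
func uiKitRect(for boundingBox: CGRect,
               displayWidth: CGFloat,
               displayHeight: CGFloat,
               containerYOffset: CGFloat = 0) -> CGRect {
    let x = displayWidth * boundingBox.origin.x
    let width = displayWidth * boundingBox.size.width
    let height = displayHeight * boundingBox.size.height
    // Flip the y axis: Vision measures from the bottom, UIKit from the top.
    let y = displayHeight * (1 - boundingBox.origin.y) - height + containerYOffset
    return CGRect(x: x, y: y, width: width, height: height)
}

With this helper, each red square's frame could be written as uiKitRect(for: observation.boundingBox, displayWidth: self.view.frame.width, displayHeight: scaledHeight, containerYOffset: 20).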

Now build and run. The detected faces in the image are now outlined with red borders.