
The Shape Detection API: a picture is worth a thousand words, faces, and barcodes

The Shape Detection API detects faces, barcodes, and text in images.

Jan 7, 2019 • Updated Aug 10, 2020 • Thomas Steiner


This API is part of the new capabilities project. Barcode detection has launched in Chrome 83 on certified devices with Google Play Services installed. Face and text detection are available behind a flag. This post will be updated as the Shape Detection API evolves.

What is the Shape Detection API? #

With APIs like navigator.mediaDevices.getUserMedia and the Chrome for Android photo picker, it has become fairly easy to capture images or live video data from device cameras, or to upload local images. So far, this dynamic image data—as well as static images on a page—has not been accessible by code, even though images may actually contain a lot of interesting features such as faces, barcodes, and text.

For example, in the past, if developers wanted to extract such features on the client to build a QR code reader, they had to rely on external JavaScript libraries. This could be expensive from a performance point of view and increase the overall page weight. On the other hand, operating systems including Android, iOS, and macOS, but also hardware chips found in camera modules, typically already have performant and highly optimized feature detectors such as the Android FaceDetector or the iOS generic feature detector, CIDetector.

The Shape Detection API exposes these implementations through a set of JavaScript interfaces. Currently, the supported features are face detection through the FaceDetector interface, barcode detection through the BarcodeDetector interface, and text detection (Optical Character Recognition, or OCR) through the TextDetector interface.

Caution: Text detection, despite being an interesting field, is not considered stable enough across either computing platforms or character sets to be standardized at the moment, which is why text detection has been moved to a separate informative specification.

Suggested use cases #

As outlined above, the Shape Detection API currently supports the detection of faces, barcodes, and text. The following bullet list contains examples of use cases for all three features.

Face detection #

Online social networking or photo sharing sites commonly let their users annotate people in images. Highlighting the boundaries of detected faces makes this task easier.

Content sites can dynamically crop images based on potentially detected faces rather than relying on other heuristics, or highlight detected faces with Ken Burns-like panning and zooming effects in story-like formats.

Multimedia messaging sites can allow their users to overlay funny objects like sunglasses or mustaches on detected face landmarks.
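The dynamic-cropping use case above boils down to turning a face's bounding box into a crop rectangle. A minimal sketch of that logic (the function name and margin handling are my own; in a real app the box would come from FaceDetector.detect()):

```javascript
// Hypothetical helper: expand a detected face's bounding box by a margin
// and clamp the result to the image, so a crop keeps some context around
// the face. Pure logic; the box shape ({x, y, width, height}) mirrors the
// DOMRectReadOnly-like boundingBox returned by the Shape Detection API.
function cropRectForFace(box, margin, imageWidth, imageHeight) {
  const x = Math.max(0, box.x - margin);
  const y = Math.max(0, box.y - margin);
  const width = Math.min(imageWidth - x, box.width + 2 * margin);
  const height = Math.min(imageHeight - y, box.height + 2 * margin);
  return { x, y, width, height };
}
```

A content site could feed such a rectangle to drawImage() on a canvas, or use it to position a Ken Burns-style pan, to keep the face in frame.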

Barcode detection #

Web applications that read QR codes can unlock interesting use cases like online payments or web navigation, or use barcodes for establishing social connections on messenger applications.

Shopping apps can allow their users to scan EAN or UPC barcodes of items in a physical store to compare prices online.

Airports can provide web kiosks where passengers can scan their boarding passes' Aztec codes to show personalized information related to their flights.
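For the shopping scenario above, it can be worth validating a scanned code before querying a price database: the rawValue reported for an EAN-13 barcode carries a standard check digit. A sketch of the standard EAN-13 checksum (the function name is hypothetical):

```javascript
// Hypothetical helper: verify the check digit of an EAN-13 barcode value.
// Digits in odd positions (1st, 3rd, ...) are weighted 1, even positions
// weighted 3; the check digit makes the weighted sum a multiple of 10.
function isValidEan13(code) {
  if (!/^\d{13}$/.test(code)) return false;
  const digits = [...code].map(Number);
  const sum = digits
    .slice(0, 12)
    .reduce((acc, d, i) => acc + d * (i % 2 === 0 ? 1 : 3), 0);
  return (10 - (sum % 10)) % 10 === digits[12];
}
```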

Text detection #

Online social networking sites can improve the accessibility of user-generated image content by adding detected texts as alt attributes for <img> tags when no other descriptions are provided.

Content sites can use text detection to avoid placing headings on top of hero images that contain text.

Web applications can use text detection to translate text such as restaurant menus.
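The alt-text use case mostly amounts to joining the rawValue of each detected text block and keeping the result short. A hypothetical helper (the 125-character cap is a common accessibility rule of thumb, not part of the API):

```javascript
// Hypothetical helper: build an alt attribute from TextDetector results.
// Each detection is assumed to carry a rawValue string, per the draft spec.
function altTextFromDetections(texts, maxLength = 125) {
  const joined = texts
    .map(t => t.rawValue.trim())
    .filter(Boolean)
    .join(' ');
  return joined.length > maxLength
    ? joined.slice(0, maxLength - 1) + '…'
    : joined;
}
```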

Current status #

Step 1: Create explainer: Complete
Step 2: Create initial draft of specification: In progress
Step 3: Gather feedback & iterate on design: In progress
Step 4: Origin trial: Complete
Step 5: Launch:
  Barcode detection: Complete
  Face detection: In progress
  Text detection: In progress

How to use the Shape Detection API #

Warning: So far only barcode detection is available by default, starting in Chrome 83 on certified devices with Google Play Services installed; face and text detection are available behind a flag.

If you want to experiment with the Shape Detection API locally, enable the #enable-experimental-web-platform-features flag in chrome://flags.

The interfaces of all three detectors, FaceDetector, BarcodeDetector, and TextDetector, are similar. They all provide a single asynchronous method called detect() that takes an ImageBitmapSource as input (that is, either a CanvasImageSource, a Blob, or ImageData).

For FaceDetector and BarcodeDetector , optional parameters can be passed to the detector's constructor that allow for providing hints to the underlying detectors.

Please carefully check the support matrix in the explainer for an overview of the different platforms.

Gotchas! If your ImageBitmapSource has an effective script origin which is not the same as the document's effective script origin, then attempts to call detect() will fail with a SecurityError DOMException. If your image origin supports CORS, you can use the crossorigin attribute to request CORS access.

Working with the BarcodeDetector #

The BarcodeDetector returns the barcode raw values it finds in the ImageBitmapSource and the bounding boxes, as well as other information like the formats of the detected barcodes.

const barcodeDetector = new BarcodeDetector({
  // (Optional) A hint to restrict detection to the formats the app cares about.
  formats: [
    'aztec',
    'code_128',
    'code_39',
    'code_93',
    'codabar',
    'data_matrix',
    'ean_13',
    'ean_8',
    'itf',
    'pdf417',
    'qr_code',
    'upc_a',
    'upc_e'
  ]
});

try {
  const barcodes = await barcodeDetector.detect(image);
  barcodes.forEach(barcode => searchProductDatabase(barcode));
} catch (e) {
  console.error('Barcode detection failed:', e);
}

Working with the FaceDetector #

The FaceDetector always returns the bounding boxes of faces it detects in the ImageBitmapSource . Depending on the platform, more information regarding face landmarks like eyes, nose, or mouth may be available. It is important to note that this API only detects faces. It does not identify who a face belongs to.

const faceDetector = new FaceDetector({
  // (Optional) A hint for the maximum number of faces to report.
  maxDetectedFaces: 5,
  // (Optional) Trade detection accuracy for speed when true.
  fastMode: false
});

try {
  const faces = await faceDetector.detect(image);
  faces.forEach(face => drawMustache(face));
} catch (e) {
  console.error('Face detection failed:', e);
}

Working with the TextDetector #

The TextDetector always returns the bounding boxes of the detected texts, and on some platforms the recognized characters.

Caution: Text recognition is not universally available.

const textDetector = new TextDetector();

try {
  const texts = await textDetector.detect(image);
  texts.forEach(text => textToSpeech(text));
} catch (e) {
  console.error('Text detection failed:', e);
}

Feature detection #

Purely checking for the existence of the constructors to feature-detect the Shape Detection API doesn't suffice, as Chrome on Linux and Chrome OS currently still expose the detectors, but they are known to not work (bug). As a temporary measure, we instead recommend a defensive programming approach by doing feature detection like this:

const supported = await (async () => 'FaceDetector' in window &&
  await new FaceDetector().detect(document.createElement('canvas'))
    .then(_ => true)
    .catch(e => e.name === 'NotSupportedError' ? false : true))();

Best practices #

All detectors work asynchronously; that is, they do not block the main thread. Don't rely on real-time detection; allow some time for the detector to do its work.
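For example, when running detection against live video, a simple way to allow the detector time is to drop frames that arrive while a previous detect() call is still pending, rather than queueing them. A generic sketch of that pattern (the wrapper name is my own):

```javascript
// Hypothetical wrapper: while one invocation of an async function is
// still pending, further calls resolve to null instead of piling up.
// Useful around detector.detect() in a requestAnimationFrame loop.
function dropWhileBusy(asyncFn) {
  let busy = false;
  return async (...args) => {
    if (busy) return null; // a detection is still in flight; skip this frame
    busy = true;
    try {
      return await asyncFn(...args);
    } finally {
      busy = false;
    }
  };
}
```

Wrapping a detector call this way, e.g. const guardedDetect = dropWhileBusy(frame => barcodeDetector.detect(frame)), keeps a video loop from accumulating stale frames behind a slow detection.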

If you are a fan of Web Workers, you'll be happy to know that detectors are exposed there as well. Detection results are serializable and can thus be passed from the worker to the main app via postMessage() . The demo shows this in action.

Not all platform implementations support all features, so be sure to check the support situation carefully and use the API as a progressive enhancement. For example, some platforms might support face detection per se, but not face landmark detection (eyes, nose, mouth, etc.); or the existence and the location of text may be recognized, but not text contents.
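As a concrete example of such progressive enhancement, code that places a mustache overlay can prefer a mouth landmark and fall back to the bounding box when the platform doesn't report landmarks. A sketch (the function name and fallback heuristic are my own; the face shape follows the FaceDetector result format):

```javascript
// Hypothetical helper: pick an anchor point for an overlay. Uses the
// 'mouth' landmark when the platform reports one, otherwise falls back
// to the lower third of the face's bounding box.
function mustachePosition(face) {
  const mouth = (face.landmarks || []).find(l => l.type === 'mouth');
  if (mouth && mouth.locations && mouth.locations.length > 0) {
    return { x: mouth.locations[0].x, y: mouth.locations[0].y };
  }
  const { x, y, width, height } = face.boundingBox;
  return { x: x + width / 2, y: y + (2 * height) / 3 };
}
```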

Caution: This API is an optimization and not something guaranteed to be available from the platform for every user. Developers are expected to combine this with their own image recognition code and take advantage of the platform optimization when it is available.

The Chrome team and the web standards community want to hear about your experiences with the Shape Detection API.

Tell us about the API design #

Is there something about the API that doesn't work like you expected? Or are there missing methods or properties that you need to implement your idea? Have a question or comment on the security model?

File a spec issue on the Shape Detection API GitHub repo, or add your thoughts to an existing issue.

Problem with the implementation? #

Did you find a bug with Chrome's implementation? Or is the implementation different from the spec?

File a bug at https://new.crbug.com. Be sure to include as much detail as you can, simple instructions for reproducing, and set Components to Blink>ImageCapture. Glitch works great for sharing quick and easy repros.

Planning to use the API? #

Planning to use the Shape Detection API on your site? Your public support helps us to prioritize features, and shows other browser vendors how critical it is to support them.

Share how you plan to use it on the WICG Discourse thread.

Send a Tweet to @ChromiumDev with #shapedetection and let us know where and how you're using it.

Helpful links #