
UK police forces are increasingly experimenting with controversial new facial recognition (FR) technology for crowd control and locating suspects. Critics, however, have labeled the trials a shambles, pointing to the high error rate and even higher cost of the program.

Documents released under Freedom of Information requests show that South Wales Police and London's Metropolitan Police have collectively spent millions of pounds on trials of the technology, despite both systems having error rates of over 90 per cent.


Similar trials around the world have raised concerns around the technology, including in San Francisco where privacy advocates are calling for a ban on the use of FR by law enforcement.

It’s not just the police who are interested in the potential use of FR. From shopping malls to sporting grounds, it is becoming more and more difficult for the average person to know when this technology is being used to track them, by whom and for what purposes. And while FR may be error-prone now, this is unlikely to stay the case for long.


How comfortable the public is with the use of FR in public spaces is likely to vary depending on the context. Many people may be fine with police using FR for crowd control at major events, for example, but not with the same technology being used to track them around the supermarket aisles in an attempt to up-sell them on potatoes.

The role FR will play in societies will be decided after a broad and complex debate that's likely to take many years. The answer will almost certainly have to include some form of regulation to control how such a powerful technology is used.


But until the sticky wheels of regulation grind into gear, what are the options for people who want to walk around in public without being constantly identified?

“Depending on the kind of technology that’s being used, you can attempt to deflect FR in many different ways,” says pen tester and privacy advocate Lilly Ryan. “You really need to know what’s under the hood to know what is most likely to work, and it can be very hard for the average person to know what kind of FR is being used on them at any particular time.”

Most research on FR systems is conducted under lab conditions, in which researchers know exactly what kind of FR system they’re working with and often also have access to the underlying code and even the training data, giving them a huge head-start in fooling the system which they would be unlikely to have ‘in the wild’. In the real world, FR is also often combined with other biometrics such as fingerprints or gait analysis. The introduction of increasingly powerful AI techniques has also provided a huge boost to the field.


"The progress achieved in FR after the integration of deep learning is exponential," says Christoph Busch of the Norwegian University of Science and Technology. Busch and his colleague Raghavendra Ramachandra have studied FR systems extensively, including surveying known ways to fool the technology.


Faced with sufficiently sophisticated systems, the reality is that there are no truly guaranteed methods of avoiding identification. However, many FR systems actually in use today are not all that sophisticated, and researchers and privacy advocates are finding ways to beat the technology.

Occlusion and confusion


Techniques for fooling FR can be roughly divided into two categories: occlusion and confusion.

Occlusion techniques work by physically hiding facial features so the camera simply can’t see them. How successful these methods are will depend on which bits of your face are hidden and how well hidden they are.


For example, a balaclava which leaves the most important facial features exposed – the eyes, the mouth, the nose – may not actually do much to prevent a person from being identified. Researchers have found that a deep learning framework trained on 14 key facial points could accurately identify partially-occluded faces most of the time, including faces partly hidden by glasses, scarves, hats or fake beards.
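To get a feel for why partial occlusion so often fails, consider a toy landmark-based matcher. Everything below – the landmark names, the 50 per cent threshold – is invented for illustration and is not the researchers' actual framework:

```python
# Toy sketch: a landmark matcher that still attempts identification
# when only some key points are hidden. Names and threshold are
# illustrative, not taken from any real FR system.

FACE_LANDMARKS = [
    "left_eye", "right_eye", "nose_tip", "mouth_left", "mouth_right",
    "left_brow", "right_brow", "chin", "left_cheek", "right_cheek",
    "jaw_left", "jaw_right", "forehead", "nose_bridge",
]

def identifiable(occluded, landmarks=FACE_LANDMARKS, threshold=0.5):
    """Return True if enough landmarks remain visible to attempt a match."""
    visible = [p for p in landmarks if p not in occluded]
    return len(visible) / len(landmarks) >= threshold

# Sunglasses and a scarf hide six of the fourteen points...
sunglasses_and_scarf = {"left_eye", "right_eye", "nose_bridge",
                        "mouth_left", "mouth_right", "chin"}
print(identifiable(sunglasses_and_scarf))  # ...but 8 of 14 remain -> True
```

The point the sketch makes is that accessories rarely cover a majority of the points a trained model relies on, so the match still goes ahead.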

If you really want to take it to the extreme you can skip the ski mask and go straight to a 3D-printed model of someone else's face. The deeply unnerving URME Personal Surveillance Identity Prosthetic is a 3D scan of the artist Leo Selvaggio's face, right down to his hair and skin texture.

If, on the other hand, a more or less literal tinfoil hat is your thing, then Project Kovr, a jacket which zips up over your head and looks like it could double as a spacecraft, may be the option for you.

However, while hiding your entire face may be more effective at preventing FR systems from identifying you, it’s the exact opposite of inconspicuous. Completely obscuring your face is also illegal in many jurisdictions, including parts of Europe, Canada and the United States.

Even where it may be technically allowed, it’s hard to think of a quicker way of attracting attention from the people around you – not to mention the police – than walking down the street looking like you’ve just escaped from a sci-fi dystopia.


Confounding the computer

Researchers at Fudan University added infrared lights to a baseball cap. When shone on the face, they confused FR systems. Fudan University

So if occlusion is uncertain at best and liable to get you locked up at worst, that leaves confusion. One of the most straightforward techniques is to stop the FR system from working by making it think it isn't looking at a face at all.

“If you’re attacking the facial detection stage, you could try and break up the lines of your face to stop it from being detected by the system in the first place,” Ryan says.

This is the idea behind CV Dazzle, which uses extreme makeup and hairstyles to bewilder computer vision systems. This technique (and other forms of extreme makeup, like Juggalo makeup) confuses computer vision systems by playing with light and darkness in a way which makes a face look – to a computer – like it’s not a face.

"From an academic research perspective, the 'makeup attack' is gaining more attention. However, this kind of attack demands good makeup skills in order to be successful," Busch observes. Just applying makeup at random is unlikely to be enough – key facial points need to be obscured in specific ways in order to fool the system.


Dazzle methods will only work on systems which rely on visible light, however (and like some of the occlusion methods, they are likely to get you a lot of attention when you're walking around in public). That means they're not applicable to more sophisticated systems like Apple’s FaceID, which use infrared light rather than visible light.

“They bounce the beams of [infrared] light back off your face to create a 3D map of your face. That can make it even harder for people to avoid detection because it’s not just relying on the way that you appear, but also the contours of your face,” Ryan says. “Which is really useful when it comes to say, the iPhone not being fooled by a flat printed image because you can’t get any contours off it.”
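A crude way to picture the depth check Ryan describes is a liveness test on a depth map: a real face has millimetres of relief between the nose and the cheeks, while a printed photo is flat. The values and threshold here are invented purely for illustration:

```python
# Toy liveness sketch: a real face shows depth variation across its
# surface; a flat printout does not. All numbers (distances in mm,
# the 10mm threshold) are illustrative, not from any real system.

def depth_range(depth_map):
    """Spread between the nearest and furthest points on the surface."""
    flat = [d for row in depth_map for d in row]
    return max(flat) - min(flat)

def passes_liveness(depth_map, min_range_mm=10.0):
    return depth_range(depth_map) >= min_range_mm

real_face = [[52.0, 41.0, 52.5],   # cheeks sit further from the camera...
             [50.0, 38.0, 50.5],   # ...than the tip of the nose
             [53.0, 44.0, 53.5]]
printed_photo = [[45.0, 45.1, 45.0],
                 [45.0, 45.0, 45.1],
                 [45.1, 45.0, 45.0]]

print(passes_liveness(real_face))      # True: ~15mm of relief
print(passes_liveness(printed_photo))  # False: essentially flat
```

Real systems like FaceID do far more than threshold a depth range, but the sketch shows why a 2D image fails the contour check.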

Infrared-based systems may see through techniques like CV Dazzle, but they are vulnerable to other forms of interference. It’s possible to fight infrared with infrared.

In 2018, researchers from Fudan University in China, the Chinese University of Hong Kong, Indiana University, and Alibaba Inc., used an array of tiny infrared LEDs wired to the inside of a baseball cap to project dots of light onto the wearer’s face. These dots were invisible to the human eye, but confused the computer vision and made the face unidentifiable.

The researchers found that they could not only hide the wearer’s identity – they could also make the computer think the wearer was someone else altogether. Using the LEDs, the researchers were able to trick the FR system into thinking their colleague was Moby in 70 per cent of tests.


Exploiting expectations

"Cyberpunk is Now: Anti-surveillance clothing - Max-Planck Institute found that only 10 fully visible examples of a face were needed to identify a blurred image with 91.5 % accuracy. Projects like Hyperface are working on clothing patterns like the image to counter the technology pic.twitter.com/R9tavheeYT" — ΜΔDΞRΔS (@hackermaderas) December 11, 2018

If makeup and LEDs are about disguise, HyperFace camouflage is about distraction. Instead of trying to stop the system from detecting a face at all, the goal is to overwhelm it by making it see way, way too many faces. The pattern can be printed onto scarves or earrings, or anything which can be worn close to a person's real face.

FR systems detect faces based on specific patterns of light and darkness. What the HyperFace camouflage does is mimic those patterns of light and dark in a way which looks like a face to computer vision, but not to human eyes. The goal of HyperFace is to make your real face a needle in a haystack for FR systems, whilst being relatively inconspicuous (beyond just wearing a cool scarf) to the people around you.

“In other words, if a computer vision algorithm is expecting a face, exploit its expectations,” write the creators of HyperFace. At this stage, however, the project is still only a prototype.
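The needle-in-a-haystack idea can be sketched with a toy detector that flags every patch matching a face-like light/dark layout. The template and scene below are invented for illustration and bear no relation to HyperFace's actual pattern:

```python
# Toy sketch of the HyperFace idea: a detector that flags any patch
# whose light/dark layout matches a face-like template gets swamped
# when a garment carries many decoy patches. All data is illustrative.

FACE_TEMPLATE = ("dark", "light", "dark")  # e.g. eyes / nose bridge / eyes

def detect_faces(patches):
    """Return the indices of all patches that look face-like."""
    return [i for i, patch in enumerate(patches) if patch == FACE_TEMPLATE]

scene = [
    ("light", "light", "dark"),   # background
    ("dark", "light", "dark"),    # the real face
    ("dark", "light", "dark"),    # decoy printed on a scarf
    ("dark", "light", "dark"),    # another decoy
    ("dark", "light", "dark"),    # another decoy
]

hits = detect_faces(scene)
print(len(hits))  # 4 candidate "faces" -- the real one is just 1 in 4
```

Downstream identification now has to be run on every candidate, and the true face no longer stands out from the decoys.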

Built-in bias

Intentionally trying to fool the technology is one thing, but being the subject of an unintended error is quite another. Many FR systems tested in the real world have proven stunningly inaccurate: 92 per cent of the matches made by the system trialled by police during the UEFA Champions League Final week in Wales in 2017, for example, turned out to be wrong.
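The arithmetic behind numbers like these is the familiar base-rate problem: scan a huge crowd in which almost nobody is a genuine target, and even a seemingly accurate system produces mostly false matches. The figures below are hypothetical, chosen only to illustrate the effect, and are not the Welsh trial's actual parameters:

```python
# Base-rate sketch: why most FR "matches" in a big crowd are wrong.
# All figures are hypothetical, chosen only to illustrate the effect.

crowd_size = 100_000          # people scanned
targets_present = 50          # genuine watchlist members in the crowd
false_positive_rate = 0.005   # 0.5% of innocents wrongly flagged
true_positive_rate = 0.90     # 90% of real targets correctly flagged

false_alarms = (crowd_size - targets_present) * false_positive_rate
real_hits = targets_present * true_positive_rate

share_wrong = false_alarms / (false_alarms + real_hits)
print(f"{share_wrong:.0%} of matches are false")  # ~92%
```

Even with the system catching nine out of ten real targets and mis-flagging only one person in two hundred, false alarms dwarf genuine hits simply because innocents vastly outnumber targets.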


Not only are FR systems often inaccurate, but they can be biased along racial and gender lines. Researchers such as MIT Media Lab’s Joy Buolamwini have called out companies for failing to train their technology to see diverse faces.

“Unfortunately it turns out that a lot of the ways to confuse FR are to be in a demographic who are not highly represented in the data which these systems are trained on, which quite often ends up being people who are not light-skinned, or not male presenting,” says Ryan.

“So part of it is that, sure, trying to trick this technology is fun, but there are also very serious consequences for being mistaken for somebody else, particularly in a law enforcement context.”

But things will change in the long term. The high levels of inaccuracy in FR trials are unlikely to last long. “Emerging deep learning techniques together with the availability of the large-scale face images through social media, and the progress in computational resources through GPUs has significantly improved the recognition performance of face recognition system,” Busch and Ramachandra say.

The technology is developing particularly rapidly thanks to enormous resources being thrown at it by both governments and private corporations. They’re not spending that money for the sake of scientific endeavour – they’re investing in the technology because they plan to use it.


As the use of FR becomes more prevalent in the places we move through in our daily lives, the concerns associated with it become sharper. Any method for evading recognition is at best a stopgap solution, particularly when FR is combined with other forms of biometric identification.

The real solution to the issues around FR? The tech community working with other industries and sectors to strike an appropriate balance between security and privacy in public spaces.
