The latest figures from the Met Police's deployment of facial-recognition cameras in the heart of London show the technology is pretty fscking inaccurate.

On February 27, for instance, the cameras scanned an estimated 8,600 faces in Oxford Circus, checking them against a watchlist of 7,292 people. The AI tech flagged eight as possible matches; seven turned out to be false positives: five of those people were stopped by the cops, and two alerts were dismissed as obvious errors. The remaining person turned out to be a true positive, and was intercepted by the British plod.

That's a false-positive rate of 87.5 per cent of those flagged up by the software. About two weeks earlier, in Stratford, the AI matched none of the 4,600 faces it scanned against its watchlist. On another day, the system broke, and no one was scanned.
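For the curious, the arithmetic behind that headline figure is simple: the rate is the share of software alerts that turned out to be wrong. A quick sketch (the February 27 numbers are from the Met's own report; the helper function is ours, not anything the Met publishes):

```python
def false_positive_rate(flagged: int, true_positives: int) -> float:
    """Percentage of alerts raised by the software that were wrong."""
    return 100 * (flagged - true_positives) / flagged

# Oxford Circus, Feb 27: eight alerts, one genuine match
print(false_positive_rate(8, 1))  # 87.5
```

Note that this measures accuracy only among the faces the system flagged; it says nothing about wanted people the cameras scanned and missed.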

These numbers were shared [PDF] on the police force's website this week.

“Live facial recognition (LFR) is an operational tactic that allows police to seek wanted people, or those who may pose a risk of harm to themselves or others,” a spokesperson told The Register. “It works by comparing the facial images of people entering, or within, a particular area, against a bespoke watchlist of people the Met has a policing need to locate.


“LFR is designed to support overt policing operations and will significantly increase our efficiency and operational effectiveness in finding persons who are wanted. When the LFR system generates an ‘alert’, the decision to engage a member of the public is always made by a police officer. LFR is an aid to human decision-making.”

Big Brother Watch, a London-based non-profit campaigning against the use of facial-recognition technology, slammed that human decision-making defence, however. “This blows apart the Met's defence that facial recognition surveillance is in any way proportionate or that the staggering inaccuracy is mitigated by human checks,” it said on Twitter. “This is a disaster for human rights, a breach of our most basic liberties, and an embarrassment for our capital city.”

The Met, for what it's worth, previously said images of individuals misidentified by LFR would be “automatically deleted.”

Met top cop Cressida Dick last month shot down critics, including Big Brother Watch, during a talk at the Royal United Services Institute, a security think tank. She argued facial-recognition technology was vital for tackling crime in the modern age.

“Right now the loudest voices in the debate seem to be the critics. Sometimes highly inaccurate or highly ill informed,” she said. “I would say it is for critics to justify to the victims of those crimes why police should not be allowed to use tech lawfully and proportionally to catch criminals.”

Meanwhile, in Scotland...

The police force’s love for live facial recognition is not shared north of the border, however. The Scottish Parliament's Justice Sub-Committee on Policing warned Police Scotland against using the technology, since it is more likely to be inaccurate for “females, and those from black, Asian and ethnic minority communities.”

It is true that today's facial-recognition algorithms are crap at identifying women and people of color compared to white men. “Police Scotland is not using, trialing or testing live facial recognition technology,” Assistant Chief Constable Duncan Sloan, lead for Major Crime and Public Protection in Scotland, previously confirmed to The Register.

The Metropolitan Police, however, continues to deploy facial-recognition cameras across London despite disastrous results in previous trials. The Register understands the cameras were in operation in Westminster as well as Stratford and Oxford Circus last month. ®