In a tweet, the Met assured the public that any images that don't trigger an alert are deleted immediately -- and that officers decide for themselves whether to stop someone flagged by the system. The technology runs on a standalone system and isn't linked to any other imaging platforms, such as CCTV or body-worn cameras.

Despite the Met's insistence that the technology can be used for good, some critics have lambasted LFR as ineffectual and, in some cases, unlawful. In April 2019, for example, a report from the University of Essex found that the Met's LFR technology had an inaccuracy rate of 81 percent. The previous year, technology used by police in South Wales mistakenly identified 2,300 innocent people as potential criminals.

The Met's new endeavor launches at a tumultuous time for facial recognition technology. Just last week the European Commission revealed it's considering a ban on the use of LFR in public areas for up to five years while regulators figure out how to prevent the technology from being abused. Meanwhile, privacy campaign group Big Brother Watch -- supported by more than 18 UK politicians and 25 additional campaign groups -- has called for a halt to adoption, citing concerns about implementation without proper scrutiny.