“Real-time face recognition across tens of millions of faces, and detection of up to 100 faces in challenging crowded photos.”

That was how Amazon described its Rekognition facial-recognition technology, months before it found itself at the heart of a dispute over police surveillance in the United States.

Documents obtained by the American Civil Liberties Union (ACLU) revealed that Amazon had been licensing its powerful facial-recognition system to police in Orlando, Florida, and Washington County, Oregon, enabling authorities to identify people against a database of tens of millions of faces. In Orlando's case, the technology had scope for real-time tracking, analyzing feeds from several cameras across the city. Amazon's marketing material for Rekognition had promoted using its detection system in conjunction with police officers' body cameras.

In the wake of the revelations, Orlando Police Chief John Mina attempted to allay concerns, emphasizing that plans for drawing footage from body cameras into an Amazon-powered surveillance system were in their infancy. “We would never use this technology to track random citizens, immigrants, activists, or people of color,” Mina said at a press conference. “The pilot program is just us testing this technology out to see if it even works.”

Yet concerns over the misuse of these surveillance technologies are building on a global scale. The combination of artificial intelligence and surveillance is problematic on both a practical and an ethical level, provoking alarm at the highest levels of government about the accountability of tech companies and law enforcement for these powerful tools.

At the same time, guerrilla resistance is emerging. To bend a quote by Paul Virilio: