Amazon touts its Rekognition facial recognition system as “simple and easy to use,” encouraging customers to “detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.” And yet, in a study released Thursday by the American Civil Liberties Union, the technology managed to confuse photos of 28 members of Congress with publicly available mug shots. Given that Amazon actively markets Rekognition to law enforcement agencies across the US, that’s simply not good enough.

The ACLU study also illustrated the racial bias that plagues facial recognition today. “Nearly 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress,” wrote ACLU attorney Jacob Snow. “People of color are already disproportionately harmed by police practices, and it’s easy to see how Rekognition could exacerbate that.”

Facial recognition technology’s difficulty detecting darker skin tones is a well-established problem. In February, MIT Media Lab’s Joy Buolamwini and Microsoft’s Timnit Gebru published findings that facial recognition software from IBM, Microsoft, and Face++ has a much harder time identifying gender in people of color than in white people. In a June evaluation of Amazon Rekognition, Buolamwini and Inioluwa Raji of the Algorithmic Justice League found similar built-in bias. Rekognition even managed to get Oprah wrong.

“Given what we know about the biased history and present of policing, the concerning performance metrics of facial analysis technology in real-world pilots, and Rekognition’s gender and skin-type accuracy differences,” Buolamwini wrote in a recent letter to Amazon CEO Jeff Bezos, “I join the chorus of dissent in calling Amazon to stop equipping law enforcement with facial analysis technology.”


Yet Amazon Rekognition is already in active use in Oregon’s Washington County. And the Orlando, Florida police department recently resumed a pilot program to test Rekognition’s efficacy, although the city says that for now, “no images of the public will be used for any testing—only images of Orlando police officers who have volunteered to participate in the test pilot will be used.” Those are just the clients that are public; Amazon declined to comment on the full scope of law enforcement’s use of Rekognition.

For privacy advocates, though, any amount is too much, especially given the system’s demonstrated bias. “Imagine a speed camera that wrongly said that black drivers were speeding at higher rates than white drivers. Then imagine that law enforcement knows about this, and everyone else knows about this, and they just keep using it,” says Alvaro Bedoya, executive director of Georgetown University’s Center on Privacy and Technology. “We wouldn’t find this acceptable in any other setting. Why should we find it acceptable here?”

Amazon takes issue with the parameters of the study, noting that the ACLU used an 80 percent confidence threshold; that’s the minimum similarity score Rekognition requires before reporting a match, which customers can adjust according to their desired level of accuracy. “While 80 percent confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases, it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty,” the company said in a statement. “When using facial recognition for law enforcement activities, we guide customers to set a threshold of at least 95 percent or higher.”
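To see the mechanics Amazon is describing, the sketch below shows how raising a similarity threshold shrinks the set of reported matches. The candidate names and scores are hypothetical, and this is a simplified illustration of the filtering concept, not Rekognition’s actual implementation:

```python
# Illustrative sketch of confidence-threshold filtering.
# Candidates and similarity scores are hypothetical, not real Rekognition output.
candidates = [
    {"name": "Match A", "similarity": 97.2},
    {"name": "Match B", "similarity": 85.6},
    {"name": "Match C", "similarity": 81.3},
]

def matches_above(candidates, threshold):
    """Return only the candidates whose similarity meets the threshold."""
    return [c for c in candidates if c["similarity"] >= threshold]

# At an 80 percent threshold, all three candidates count as matches;
# at the 95 percent level Amazon recommends for law enforcement, only one does.
print(len(matches_above(candidates, 80)))  # 3
print(len(matches_above(candidates, 95)))  # 1
```

In Rekognition’s actual API, this knob corresponds to parameters such as `SimilarityThreshold` on the `CompareFaces` operation, which defaults to 80 and is left to the caller to raise.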

While Amazon says it works closely with its partners, it’s unclear what form that guidance takes, or whether law enforcement follows it. Ultimately, the onus is on the customers—including law enforcement—to make the adjustment. An Orlando Police Department spokesperson did not know how the city had calibrated Rekognition for its pilot program.

The ACLU counters that 80 percent is Rekognition’s default setting. And UC Berkeley computer scientist Joshua Kroll, who independently verified the ACLU’s findings, notes that if anything, the professionally photographed, face-forward congressional portraits used in the study are a softball compared to what Rekognition would encounter in the real world.