But as with any emerging technology, facial recognition is far from perfect. Companies market facial recognition technology as “a highly efficient and accurate tool” with “an identification rate above 95 percent.” In reality, these claims are almost impossible to verify. The facial-recognition algorithms used by police are not required to undergo public or independent testing to determine accuracy or check for bias before being deployed on everyday citizens. More worrying still, the limited testing that has been done on these systems has uncovered a pattern of racial bias.

The National Institute of Standards and Technology (NIST) conducts voluntary tests of facial-recognition vendors every four years. In 2010, NIST observed that accuracy rates had improved tenfold between each round of testing, a dramatic testament to the technology's rapid advances.

But research suggests that these improving accuracy rates are not distributed equally. On the contrary, many algorithms display troubling differences in accuracy across race, gender, and other demographics. A 2011 study, co-authored by one of the organizers of NIST's vendor tests, found that algorithms developed in China, Japan, and South Korea recognized East Asian faces far more readily than Caucasian faces. The reverse was true of algorithms developed in France, Germany, and the United States, which were significantly better at recognizing Caucasian facial characteristics. This suggests that the conditions in which an algorithm is created, particularly the racial makeup of its development team and test photo databases, can influence the accuracy of its results.
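This kind of disparity is easy to miss if an algorithm is scored only on its overall accuracy, and easy to surface if results are broken out by demographic group. The sketch below uses invented records purely for illustration; it shows how a disaggregated audit can expose a gap that a single aggregate accuracy figure would hide.

```python
# A minimal sketch of a disaggregated accuracy audit. The records are
# invented for illustration; each pairs a demographic label with whether
# the matcher identified the subject correctly.
from collections import defaultdict

results = [
    ("east_asian", True), ("east_asian", True), ("east_asian", True),
    ("east_asian", True), ("east_asian", False),
    ("caucasian", True), ("caucasian", True), ("caucasian", False),
    ("caucasian", False), ("caucasian", False),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    hits[group] += int(correct)

# The aggregate number looks unremarkable...
overall = sum(hits.values()) / sum(totals.values())
print(f"overall: {overall:.0%}")

# ...but the per-group breakdown reveals the disparity.
for group, n in totals.items():
    print(f"{group}: {hits[group] / n:.0%} accuracy over {n} trials")
```

On these invented numbers the matcher scores 70 percent overall while performing 20 percentage points better on one group than the other, which is precisely the pattern an aggregate-only evaluation conceals.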

Similarly, a 2012 study that used a collection of mug shots from Pinellas County, Florida, to test the algorithms of three commercial vendors also uncovered evidence of racial bias. Among the companies evaluated was Cognitec, whose algorithms are used by police in California, Maryland, Pennsylvania, and elsewhere. The study, co-authored by a senior FBI technologist, found that all three algorithms consistently performed 5 to 10 percent worse on African Americans than on Caucasians. One algorithm, which failed to identify the right person in 1 out of 10 encounters with Caucasian subjects, failed nearly twice as often when the photo was of an African American.
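To make the scale of that gap concrete, the back-of-the-envelope sketch below works through the study's rounded figures. The Caucasian rate comes from the "1 out of 10" figure; the African American rate is an assumed stand-in for "nearly twice as often."

```python
# Back-of-the-envelope arithmetic for the disparity described above.
fail_caucasian = 0.10          # the study's "1 out of 10" figure
fail_african_american = 0.19   # assumption: just under twice as often

encounters = 1_000
for label, rate in [("Caucasian", fail_caucasian),
                    ("African American", fail_african_american)]:
    print(f"{label}: ~{rate * encounters:.0f} misidentifications "
          f"per {encounters} encounters")

print(f"relative risk: {fail_african_american / fail_caucasian:.1f}x")
```

At these rates, every thousand encounters produces roughly 90 additional misidentifications of African American subjects compared to Caucasian subjects.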

This bias is particularly unsettling in the context of the vast racial disparities that already exist in police traffic-stop, stop-and-frisk, and arrest rates across the country. African Americans are at least twice as likely to be arrested as members of any other race in the United States and, by some estimates, up to 2.5 times more likely to be targeted by police surveillance. This overrepresentation in both mug shot databases and surveillance photos compounds the impact of that 5 to 10 percent difference in accuracy rates. In other words, not only are African Americans more likely to be misidentified by a facial-recognition system, they are also more likely to be enrolled in those systems and subjected to their processing.
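The compounding works like simple multiplication: a person's chance of being misidentified is roughly the chance of being enrolled and searched times the error rate for their group, so gaps in both factors multiply together. The sketch below uses assumed rates loosely drawn from the figures above, roughly 2.5 times the surveillance exposure and a near-doubled error rate, purely to illustrate the arithmetic.

```python
# A hedged sketch of the compounding effect. All rates below are assumed,
# illustrative values, not measurements.
groups = {
    # name: (probability of being enrolled and searched, per-search error rate)
    "Caucasian":        (0.10, 0.10),
    "African American": (0.25, 0.19),  # ~2.5x exposure, ~2x error rate
}

risks = {}
for name, (p_searched, p_error) in groups.items():
    # Chance a given person is both run through the system and misidentified.
    risks[name] = p_searched * p_error
    print(f"{name}: {risks[name]:.1%} per-capita misidentification risk")

ratio = risks["African American"] / risks["Caucasian"]
print(f"compounded disparity: {ratio:.1f}x")  # ~2.5 x ~1.9 = ~4.8x
```

Under these assumptions, a 2.5-fold gap in exposure and a roughly 2-fold gap in error rate multiply into a nearly 5-fold gap in the per-capita risk of being misidentified.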