Massachusetts Institute of Technology facial recognition researcher Joy Buolamwini stands for a portrait behind a mask at the school, in Cambridge, Mass. Buolamwini’s research has uncovered racial and gender bias in facial analysis tools sold by companies such as Amazon that have a hard time recognizing certain faces, especially darker-skinned women. Photo: Steven Senne/AP

Facial recognition experts called to testify before a congressional hearing on Wednesday found themselves in broad agreement: Citing a litany of abuses, each pressed federal lawmakers to respond to the widespread, unregulated use of the technology by law enforcement at every level across the country. However, the idea that we should simply ban police from using this problematic and, at present, demonstrably racist technology was somehow not the only argument to be heard.


Some of the experts suggested in fact that, maybe, with the right amount of regulation, America wouldn’t devolve into a repressive police state just because its citizens have their faces scanned every time they step outside.

Witnesses before the House Committee on Oversight and Reform—attorneys, scientists, academics, and a noted law enforcement professional—spoke at length about the potential for abuse while offering a laundry list of real-world examples in which face recognition had already been used to trample the rights of U.S. citizens. With few exceptions, the technology, which each witness described as both deeply flawed and intrinsically biased, has been misused by the untrained police officers who deploy it—many of whom have concealed its use from defendants implicated in crimes by these defective algorithms.


“Using artificial intelligence to confer upon a highly subjective visual impression a halo of digital certainty is neither fact-based nor just,” said Dr. Cedric Alexander, a former chief of police and ex-security director for the Transportation Security Administration (TSA). “But,” he lamented, “it is not illegal.”

Alexander offered some of the most powerful arguments in favor of at least temporarily prohibiting use of the technology, noting, for instance, that no standards govern the kinds of images included in “any” agency’s face-recognition database—“Who is included in it? Who knows?” Yet he would go on to disagree that it was time to “push the pause button” on face recognition, an opinion not shared by all of his fellow experts.

Andrew Ferguson, a professor at the University of the District of Columbia, for example, did not mince words in his opening remarks: “Building a system with the potential to arbitrarily scan and identify individuals without any criminal suspicion and discover personal information about their location, interests or activities can and should simply be banned by law.”

But in a room in which both Republicans and Democrats appeared to generally agree with the experts—that face recognition presents a clear and present threat to Americans’ civil rights and civil liberties—there were two conversations happening at once. Whereas Ferguson’s solution was to flat-out ban police algorithms from gaining unfettered access to the (oft-cited) “50 million” surveillance cameras across the country, other proposals suggested that, maybe, there does exist a future in which all Americans’ faces are surveilled, but only under a fair and well-regulated system, transparent and accountable to the people.


The clearest example of this was in statements about the inherent biases of face recognition, which studies have repeatedly shown to be dramatically less accurate on the faces of women and people of color. “Our faces may well be the final frontier of privacy,” testified Joy Buolamwini, founder of the Algorithmic Justice League, whose research on face recognition tools at the M.I.T. Media Lab identified error rates of up to 35 percent for photos of darker-skinned women, as opposed to photos of white men, which were identified accurately 99 percent of the time.


“In one test, Amazon Rekognition even failed on the face of Oprah Winfrey, labeling her male,” she said. “Personally, I’ve had to resort to literally wearing a white mask to have my face detected by some of this technology. Coding in white face is the last thing I expected to be doing at M.I.T., an American epicenter of innovation.”



The proven bias of facial recognition systems was presented by Buolamwini, among others, as one reason to declare a “moratorium” on the use of facial recognition; that is, to temporarily prohibit its use until the technology is improved, or “matured” as one witness put it.


A moratorium is both appropriate and necessary, testified Clare Garvie, the author of a recent study at the Georgetown Law Center on Privacy and Technology that found police had used look-alike celebrity photos and police sketches in attempts to identify suspects. She added, however: “It may be that we can establish common sense rules that distinguish between appropriate and inappropriate uses—uses that promote public safety and uses that threaten our civil rights and liberties.”

These arguments leave at least some wiggle room for lawmakers to entertain the notion that there’s a future in which an artificial intelligence designed for police use scans the faces of Americans whenever they leave their homes. It’s a vision of a police state that’s “good,” in that the police themselves are ethical and just because they’re held accountable by rules and regulations; a future in which police procedures are open and transparent, and defendants always get the full story about how they came under suspicion in the first place.


Ultimately, this is an absurd fantasy that ignores what is common knowledge about the history of abuses by U.S. law enforcement agencies over the last half-century. In the last two years alone, an investigation by the Associated Press uncovered that police officers across the country had misused law enforcement databases “to get information on romantic partners, business associates, neighbors, journalists and others for reasons that have nothing to do with daily police work...” The revelation of this widespread abuse by police did not prompt Congress to take action. It did not even deter the type of police stalking that the reporters exposed.

A Florida police officer, it was reported this March, made “several hundred questionable database queries of women,” authorities said. At least 150 women were targeted. Employees at federal agencies whose work is highly classified have engaged in the same behavior. A 2013 report by the National Security Agency’s Office of Inspector General, for example, detailed how one NSA employee—on his first day of work—“queried six e-mail addresses belonging to a former girlfriend, a U.S. person, without authorization.”


In each of these cases, there were already regulations on the books to prohibit the kind of abuse committed. They simply had no effect.

While the witnesses Wednesday entertained the notion that some legislative solution might exist that permits law enforcement’s use of face recognition, they also spelled out why any such solution would essentially be unconstitutional anyway. Congress might as well give police the ability to collect DNA, fingerprints, or cellphone location history on a whim, absent a subpoena, warrant, or court order of any kind.


“This power raises questions about our Fourth and First Amendment protections,” Garvie said. “Police can’t secretly fingerprint a crowd of people from across the street. They also can’t walk through that crowd demanding that everybody produce their driver’s license. But they can scan their faces remotely, in secret, and identify each person thanks to face recognition technology.”

A face recognition program that presents no racial bias is still one of the creepiest uses of technology by law enforcement ever. It should not merely be put “on pause” until Amazon figures out how to flawlessly identify residents of Black communities, where, one assumes, a majority of these AI-equipped cameras will inevitably be deployed.