By Gregory Barber

A bill approved by the state senate would set a three-year moratorium on police use of recognition algorithms. Privacy advocates want a permanent ban.

Last month, members of the California legislature were subjected to a surveillance experiment, courtesy of the American Civil Liberties Union. Their portraits were fed into Amazon's Rekognition facial recognition software and compared with a database of 25,000 arrest mug shots. Twenty-six lawmakers were incorrectly identified as matches. The would-be suspects included Assemblyman Phil Ting, a Democrat from San Francisco. Ting hoped the demonstration would drum up support for his bill, AB 1215, which would ban facial recognition from police body cameras.

On Wednesday, the state senate passed a slightly different bill—not a ban but a moratorium that expires in three years. The change came just ahead of the deadline to make amendments before the session ends this week. Some privacy advocates worry that the bill’s expiration date will give companies, many of which acknowledge the limitations of their technology, time to improve their algorithms and win over skeptics. In three years, if the ACLU’s test is replayed, will the facial recognition companies pass it?

The bill, which needs approval by the state Assembly and the governor's signature to become law, has been celebrated by the ACLU as a positive step. Matt Cagle, an attorney at the ACLU of Northern California, says that body cameras, which have been touted as tools for accountability after shootings of unarmed people of color, are poised to turn into tools of surveillance instead. "It's a bait and switch," he says. The bill would ban the use of facial recognition algorithms in real time, while body cameras are rolling, and in subsequent forensic analysis of footage. It carves out an exemption for algorithms that detect and redact faces in body camera footage, so that the rules don't slow public records requests.

The moratorium comes amid growing concerns about facial recognition in public spaces. Cities including San Francisco and Oakland have passed broader bans on government use of facial recognition, and Massachusetts is considering a statewide moratorium. The bills have been driven by concerns about privacy and bias that some argue are inherent to the technology, as well as by technical shortcomings that have led even companies developing facial recognition to say it isn't ready for prime time.

Last spring, Microsoft said it had refused to sell its facial recognition software to an unnamed California police agency. In June, Axon, the largest supplier of body cameras to law enforcement, said it wouldn’t include facial recognition in its product, on the recommendation of its external ethics board. In part, it was a recognition that the technology simply doesn’t work well enough—at least not yet. While facial recognition has historically been used to match faces on clear, forward-facing images—say, comparing a mug shot to a database of prior arrests—that’s much more difficult to do in real time. Officers often find themselves in situations involving bad lighting, tricky angles, or quick motion. Axon has left open the possibility that it could pursue facial recognition technology in the future.

Companies like Amazon have argued that facial recognition should be regulated, not banned. The company pushed back on the ACLU’s August experiment, saying the bad matches would not have happened if the ACLU had required 99 percent probability for a match. (The ACLU said it had used “factory standards” for the test.) The Information Technology and Innovation Foundation, an industry group that receives support from companies including Microsoft and Amazon, opposes AB 1215, arguing that the technology could counter biases by humans reviewing footage.
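The dispute over the experiment turns on a single parameter: face-matching systems return a similarity score for each candidate in the database, and the operator chooses the cutoff above which a candidate is reported as a match. A minimal sketch of that filtering logic, using entirely hypothetical record names and scores (not real Rekognition output), shows why the choice of threshold changes the results:

```python
# Sketch of how a confidence threshold filters face-match candidates.
# The record names and similarity scores below are hypothetical; a real
# system returns a similarity percentage for each database candidate.

def filter_matches(candidates, threshold):
    """Keep only candidates whose similarity score meets the threshold."""
    return [(name, score) for name, score in candidates if score >= threshold]

# Hypothetical scores for one probe photo against a mug-shot database.
candidates = [
    ("record_1041", 99.2),
    ("record_2387", 87.5),
    ("record_0954", 80.1),
]

# At a permissive cutoff, all three candidates are reported as matches.
loose = filter_matches(candidates, 80.0)

# At the 99 percent cutoff Amazon argued for, only the strongest survives.
strict = filter_matches(candidates, 99.0)

print(loose)
print(strict)
```

The mechanics are trivial, but the policy stakes are not: a lower threshold surfaces more candidates and more false positives, while a higher one suppresses weak matches at the cost of missing some true ones. Which default a vendor ships, and which setting a tester uses, is exactly what Amazon and the ACLU disagreed about.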

The most vocal opposition, however, has been from police groups who say it strips them of a key piece of technology for public safety. The bill “erroneously presumes that persons in public possess or are afforded a reasonable expectation of privacy,” the Riverside Sheriffs’ Association wrote in an analysis of the bill.

The switch from a ban to a moratorium, according to Ting, came out of concerns from lawmakers who “wanted to revisit the issue as the technology improves.” He says a moratorium strikes the proper balance, giving officials and technologists more time and flexibility. “If you were going to deploy cameras all over a particular city, you would have a significant public process. Right now law enforcement can do that without a public process and have those cameras roving around.”

But Jeremy Gillula, a technologist for the Electronic Frontier Foundation who serves on Axon’s ethics board, worries that the sunset date could set up “a flood of facial recognition in bodycams” if the state fails to extend the moratorium. “Right now, it’s an easier argument to say this technology shouldn’t be deployed because it has all sorts of flaws,” Gillula says, emphasizing that he is speaking personally, and not for EFF. “I definitely worry that in three years the companies working on bodycams could say we’ve solved these flaws.”

In that case, opponents of facial recognition will need to rely more on ethical arguments—about surveillance and the right to privacy in public spaces—that might be a harder sell with lawmakers and the public. A recent Pew poll found that, despite recent bans and controversies, a majority of Americans trust police use of the technology. For his part, Gillula says he plans to keep arguing against the use of facial recognition by law enforcement, if and when the state, or Axon, decides it’s ready. “Frankly I don’t want to live in a society where just by walking in front of a police officer there’s a record of where I’ve been.”