As the first state with a law regulating how government agencies can use facial recognition software, Washington provides other states with a blueprint on how—and how not—to tackle the security and privacy questions around the technology.

Facial recognition technology uses a database of known subjects to identify individuals in photographs and videos. Law enforcement agencies and many businesses have embraced the technology, but mounting evidence has shown it can misidentify people and be misused. Several cities, including San Francisco and Oakland, have banned local government agencies from using facial recognition, and some states, including California, Oregon, and New Hampshire, have banned its use with police bodycams. Washington’s law goes further than bodycams and applies to all public agencies in the state.

Washington’s law, passed March 13 and signed March 31 by Gov. Jay Inslee, establishes rules governing facial recognition software, such as requiring government agencies to obtain a warrant to run facial recognition scans in investigations, except in emergencies. There must be a way to independently test the facial recognition software for “accuracy and unfair performance differences” across skin color, gender, and age. The legislation also requires training on how to use facial recognition and regular public reporting on how the technology is actually being used. In addition, all agencies using facial recognition software to make decisions that produce “legal effects” (meaning decisions that could affect a person’s job, finances, housing, insurance, or education) must have a human review the results.

Microsoft president Brad Smith praised the new regulations in a blog post, calling the law an “early and important model” and “a significant breakthrough.” Smith has previously appealed for a regulatory framework around facial recognition for law enforcement and companies to follow. Banning facial recognition outright over security and privacy concerns did not make sense, he argued, because the technology could be useful in many applications, such as finding missing persons. The new law establishes civil liberty safeguards while preserving the public safety benefits, Smith said.

"This balanced approach ensures that facial recognition can be used as a tool to protect the public, but only in ways that respect fundamental rights and serve the public interest," Smith wrote.

The law does not go far enough to protect marginalized groups, said Jennifer Lee, head of the ACLU of Washington’s Technology and Liberty Project. Agencies could “use face surveillance technology to deny people essential services and basic necessities such as housing, health care, food, and water.” The “human review” specified in the law is not a “sufficient safeguard,” because humans also have biases and cannot provide adequate oversight of such critical decisions, Lee said.

It also does not help that the law has no enforcement mechanism to ensure agencies follow its provisions.

“We will continue to push for a moratorium to give historically targeted and marginalized communities, such as Black and Indigenous communities, an opportunity to decide not just how face surveillance technology should be used, but if it should be used at all,” Lee said in a statement.

The law originally called for a task force to study how government agencies use facial recognition, but Gov. Inslee vetoed that part of the legislation, saying funding was unavailable. Lawmakers should instead solicit advice from local universities, Inslee said. ACLU’s Lee said the veto removed “any semblance of community oversight.”

The law “in no way absolves tech companies of their broader obligations to exercise self-restraint and responsibility in their use of AI,” Smith warned. In the absence of other state laws and federal regulations, technology companies need to voluntarily adopt and implement responsible AI principles, he said.

Facial recognition systems have come under increased scrutiny from lawmakers and privacy advocates recently. An experiment by the American Civil Liberties Union found that Amazon’s Rekognition software incorrectly matched 28 members of Congress against a database of mugshots of people who had been arrested. The ACLU also recently sued the federal government, demanding more details on how border control agents scan travelers’ faces at the United States border, as well as on the government’s plans to expand its facial recognition programs.

Smith has previously asked Congress to regulate the use of facial recognition technology. Other companies working on facial recognition software have also expressed support for regulation.

“It’s [facial recognition] a perfect example of something that has really positive uses, so you don’t want to put the brakes on it. At the same time there’s lots of potential for abuses of that technology, so you do want regulation,” Amazon CEO Jeff Bezos said last year, according to GeekWire. “Good regulation in this arena would be very welcome I think by all the players.”

While the likelihood of Congress acting anytime soon is low, federal lawmakers have asked law enforcement agencies how the technology is being used and how accurate it is. Smith said Washington’s new law was “an early and important model,” and that regulation “will clearly evolve.” The law shows what legislators can do when they stop arguing about whether facial recognition should be used and focus on how it should be used.

“A real-world example for the specific regulation of facial recognition now exists,” Smith said in the blog post. “Some will argue it does too little. Others will contend it goes too far. When it comes to new rules for changing technology, this is the definition of progress.”