The Current Debate

The ACLU, along with nearly 70 other civil rights organizations, has asked Amazon to stop selling facial recognition technology to the government and further called on Congress to enact a moratorium on government uses of facial recognition technology. The media weighed in, and important voices expressed anxiety. Over at the Washington Post, the editorial board declared, “Congress should intervene soon.” Even some members of Congress — many of whom were recently misidentified by Amazon’s facial recognition software — are rightly worried.

We’re in the mix, too. Along with a group of other scholars, we asked Amazon to change its ways.

In response, Brad Smith, president of Microsoft, called for the U.S. government to regulate facial recognition tech. “The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself…This, in fact, is what we believe is needed today — a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission,” he wrote on Microsoft’s blog.

Corporate leadership is important, and regulation that imposes limits on facial recognition technology can be helpful. But partial protections and “well-articulated guidelines” will never be enough. Whatever help legislation might provide, the protections likely won’t be passed until face-scanning technology becomes much cheaper and easier to use. Smith actually seems to make this point, albeit unintentionally. He emphasizes that “Microsoft called for national privacy legislation for the United States in 2005.” Well, it’s 2018, and Congress has yet to pass anything.

If facial recognition technology continues to be further developed and deployed, a formidable infrastructure will be built, and we’ll be stuck with it. History suggests that highly publicized successes, the fear of failing to beef up security, and the sheer intoxicant of power will tempt overreach, motivate mission creep, and ultimately lead to systematic abuse.

The future of human flourishing depends upon facial recognition technology being banned before the systems become too entrenched in our lives.

Why a Ban Is Necessary

A call to ban facial recognition systems, full stop, is extreme. Really smart scholars like Judith Donath argue that it’s the wrong approach. She suggests a more technologically neutral tactic, built around the larger questions that identify the specific activities to be prohibited, the harms to be avoided, and the values, rights, and situations we are trying to protect. For almost every other digital technology, we agree with this approach.

But we believe facial recognition technology is the most uniquely dangerous surveillance mechanism ever invented. It’s the missing piece in an already dangerous surveillance infrastructure, built because that infrastructure benefits both government and the private sector. And when technologies become so dangerous, and the harm-to-benefit ratio becomes so imbalanced, categorical bans are worth considering. The law already prohibits certain kinds of dangerous digital technologies, like spyware. Facial recognition technology is far more dangerous. It’s worth singling out, with a specific prohibition on top of a robust, holistic, value-based, and largely technology-neutral regulatory framework. Such a layered system will help avoid regulatory whack-a-mole where lawmakers are always chasing tech trends.

Surveillance conducted with facial recognition systems is intrinsically oppressive. The mere existence of facial recognition systems, which are often invisible, harms civil liberties, because people will act differently if they suspect they’re being surveilled. Even legislation that holds out the promise of stringent protective procedures won’t prevent chill from impeding crucial opportunities for human flourishing by dampening expressive and religious conduct.

Facial recognition technology also enables a host of other abuses and corrosive activities.

As facial recognition scholar Clare Garvie rightly observes, mistakes with the technology can have deadly consequences:

What happens if a system like this gets it wrong? A mistake by a video-based surveillance system may mean an innocent person is followed, investigated, and maybe even arrested and charged for a crime he or she didn’t commit. A mistake by a face-scanning surveillance system on a body camera could be lethal. An officer, alerted to a potential threat to public safety or to himself, must, in an instant, decide whether to draw his weapon. A false alert places an innocent person in those crosshairs.

Two reports, among others, thoroughly detail many of these problems. There’s the invaluable paper written by Jennifer Lynch, senior staff attorney at the Electronic Frontier Foundation, “Face Off: Law Enforcement Use of Face Recognition Technology.” And there’s the indispensable study “The Perpetual Line-Up,” from Georgetown’s Center on Privacy and Technology, co-authored by Clare Garvie, Alvaro Bedoya, and Jonathan Frankle. Our view is deeply informed by this rigorous scholarship, and we would urge anyone interested in the topic to carefully read it.

Despite the problems our colleagues have documented, you might be skeptical that a ban is needed. After all, other technologies pose similar threats: geolocation data, social media data, search history data, and so many other components of our big data trails can be highly revealing in themselves and downright soul-baring in the aggregate. And yet, facial recognition remains uniquely dangerous. Even among biometrics, such as fingerprints, DNA samples, and iris scans, facial recognition stands apart.

Systems that use face prints have five distinguishing features that justify singling them out for a ban. First, faces are hard to hide or change. They can’t be encrypted, unlike a hard drive, email, or text message. They are remotely capturable from distant cameras and increasingly inexpensive to obtain and store in the cloud — a feature that, itself, drives surveillance creep.

Second, there is an existing legacy of name and face databases, such as for driver’s licenses, mugshots, and social media profiles. This makes further exploitation easy through “plug and play” mechanisms.

Third, unlike traditional surveillance systems, which frequently require new, expensive hardware or new data sources, the data inputs for facial recognition are widespread and in the field right now, namely with CCTV and officer-worn body cams.

Fourth, there is tipping-point creep. Any database of faces built to identify individuals arrested or caught on camera doubles as a matching database that, with a few lines of code, can be run against body cam or CCTV feeds in real time. New York Governor Andrew Cuomo perfectly expressed the logic of facial recognition creep, insisting that vehicle license-plate scanning is insignificant compared to what cameras can do once enabled with facial recognition tech. “When it reads that license plate, it reads it for scofflaws…[but] the toll is almost the least significant contribution that this electronic equipment can actually perform,” Cuomo said. “We are now moving to facial recognition technology, which takes it to a whole new level, where it can see the face of the person in the car and run that technology against databases.” If you build it, they will surveil.

Finally, it bears noting that faces, unlike fingerprints, gait, or iris patterns, are central to our identity. Faces are conduits between our on- and offline lives, and they can be the thread that connects all of our real-name, anonymous, and pseudonymous activities. It’s easy to think people don’t have a strong privacy interest in faces because many of us routinely show them in public. Indeed, outside of areas where burkas are common, hiding our faces often prompts suspicion.

The thing is, we actually do have a privacy interest in our faces, because humans developed the values and institutions associated with privacy protections during periods when it was difficult to identify most people we didn’t know. Thanks to biological constraints, the human memory is limited; without technological augmentation, we can remember only so many faces. And thanks to population size and distribution, we’ll encounter only so many people over the course of our lifetimes. These limitations create obscurity zones, and because of them, people have had great success hiding in public.

Recent Supreme Court decisions about the Fourth Amendment have shown that fighting for privacy protections in public spaces isn’t antiquated. Just this summer, in Carpenter v. United States, our highest court ruled by a 5–4 vote that the Constitution protects cellphone location data. In the majority opinion, Chief Justice John Roberts wrote, “A person does not surrender all Fourth Amendment protection by venturing into the public sphere. To the contrary, ‘what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.’”

Why Facial Recognition Technology Can’t Be Procedurally Regulated

Because facial recognition technology poses an extraordinary danger, society can’t afford to have faith in internal processes of reform like self-regulation. Financial rewards will encourage entrepreneurialism that pushes facial recognition technology to its limits, and corporate lobbying will tilt heavily in this direction.

Facial recognition technology is a menace disguised as a gift.

Society also can’t wait for a populist uprising. Facial recognition technology will continue to be marketed as a component of the latest and greatest apps and devices. Apple is already pitching Face ID as the best feature of its new iPhone. The same goes for ideologically charged news coverage of events where facial recognition technology appears to save the day.

Finally, society shouldn’t place its hopes in conventional approaches to regulation. Since facial recognition technology poses a unique threat, it can’t be contained by measures that define appropriate and inappropriate uses and that hope to balance potential social benefit with a deterrent for bad actors. This is one of the rare situations that requires an absolute prohibition, something like the Ottawa Treaty on landmines.

Right now, there are a few smart proposals to control facial recognition technology and even fewer actual laws limiting it. The biometric laws in Illinois and Texas, for example, are commendable, yet they follow the traditional regulatory strategy of requiring those who would collect and use facial recognition (and other biometric identifiers) to follow a basic set of fair information practices and privacy protocols. These include requirements to get informed consent prior to collection, mandated data protection obligations and retention limits, prohibitions on profiting from biometric data, limited ability to disclose biometric data to others, and, notably, private causes of action for violations of the statutes.

Proposed facial recognition laws follow along similar lines. The Federal Trade Commission recommends a similar “notice, choice, and fair data limits” approach to facial recognition. The Electronic Frontier Foundation’s report, which focuses on law enforcement, contains similar though more robust suggestions. These include placing restrictions on collecting and storing data; recommending limiting the combination of one or more biometrics in a single database; defining clear rules for use, sharing, and security; and providing notice, audit trails, and independent oversight. In its model face recognition legislation, the Georgetown Law Center on Privacy and Technology’s report proposes significant restrictions on government access to face-print databases as well as meaningful limitations on use of real-time facial recognition.

Tragically, most of these existing and proposed requirements are procedural, and in our opinion they won’t ultimately stop surveillance creep and the spread of face-scanning infrastructure. For starters, some of the basic assumptions about consent, notice, and choice that are built into the existing legal frameworks are faulty. Informed consent as a regulatory mechanism for surveillance and data practices is a spectacular failure. Even if people were given all the control in the world, they wouldn’t be able to meaningfully exercise it at scale.

Yet lawmakers and industry trudge on, oblivious to people’s time and resource limitations. Additionally, these rules, like most privacy rules in the digital age, are riddled with holes. Some of the statutes apply only to how data is collected or stored but largely ignore how it is used. Others apply only to commercial actors or to the government and are so ambiguous as to tolerate all kinds of pernicious activity. And to realize the touted benefits of facial recognition would require more cameras, more infrastructure, and face databases of all-encompassing breadth.

The Future of Human Faces

Because facial recognition technology holds out the promise of translating who we are and everywhere we go into trackable information that can be nearly instantly stored, shared, and analyzed, its future development threatens to leave us constantly compromised. The future of human flourishing depends upon facial recognition technology being banned before the systems become too entrenched in our lives. Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited. In such a world, critics of facial recognition technology will be disempowered, silenced, or cease to exist.