Every day, Facebook users upload hundreds of millions of photos to the social network. If they haven’t opted out, the software scans those photos in search of faces it recognizes. As users either agree or disagree with the recommendations of who should be tagged, Facebook’s algorithms get better. The company’s research suggests that Facebook holds “the largest facial dataset to date”—powered by DeepFace, Facebook’s deep-learning facial recognition system.



Unlike Amazon’s Rekognition, which is facial recognition software that scans existing databases provided by clients like law enforcement agencies, Facebook’s system doesn’t need an external trove of face photos to work. Facebook has all that data because we upload it—pictures from different stages of our lives, from various angles, with different clothes and haircuts, in and out of makeup, with new tattoos—every day. Facebook knows it’s us because even if we haven’t tagged ourselves, one of our friends might have.



Right now, helping us tag each other is the only application of this software, so no one is worrying about it too much—not the way they’re worrying about other uses of facial recognition, like Immigration and Customs Enforcement’s scanning of millions of driver’s license photos from state databases. In May, San Francisco became the first city to ban its police department from using the technology, citing studies that have shown face ID tech misidentifies darker-skinned people more than lighter-skinned ones. Policymakers and experts are now beginning to weigh how the government’s use of facial recognition should be regulated and constrained.



But government surveillance usually relies on various forms of corporate surveillance—which is just one reason why Facebook’s program and others should be part of the same conversation. The NSA programs revealed by Edward Snowden tapped U.S. tech companies, like Facebook, Google, and Microsoft, in order to conduct their dragnet data collection. Local police departments and the FBI are using Amazon’s Rekognition now, which has sparked protests from both Amazon employees and anti-surveillance activists. It would be a mistake not to apply similar scrutiny to private data surveillance, even when the use of it seems unobjectionable. And Facebook has enough data to potentially pose a facial recognition nightmare.



Facial recognition is already being used in commercial settings, like in sports stadiums that read the reactions of fans during the course of a game to improve targeted advertising. It’s being marketed to retailers who want to be able to identify shoplifters. The New York Times used Rekognition to identify people in the crowd during the royal wedding of Prince Harry and Meghan Markle last year. But so far, Facebook hasn’t offered any uses beyond tagging for its software. One reason may be that the company is currently bound by an agreement with the Federal Trade Commission that says it has to first obtain “affirmative express consent” before going beyond a user’s specified privacy settings. That agreement only lasts for another 12 years. By that time, we may have warmed up to the idea of using our faces to do things like pay for purchases at stores, board flights, or even open doors—just as we already use them to open our phones.



When considering how a company as massive as Facebook might use its wealth of facial recognition data, Siva Vaidhyanathan—a University of Virginia professor, Slate contributor, and the author of Antisocial Media, a book about Facebook—suggests thinking in terms of the way Facebook already works around the web. “Facebook wants to be able to certify identity in a variety of areas of life just as it has been trying to corner the market on identity verification on the web,” Vaidhyanathan says. When you go to comment on a news article or sign in to book an appointment at a hair salon online, you might use your Facebook or Google account to log in.

“The payoff for Facebook is to have a bigger and broader sense of everybody’s preferences, both individually and collectively. That helps it not only target ads but target and develop services, too,” Vaidhyanathan says. Facebook’s knowledge about us, after all, is only as good as its ability to identify that the data it has collected is assigned to the correct user. Otherwise Facebook could be sending highly personalized ads to the wrong person.



CEO Mark Zuckerberg likes to say that Facebook doesn’t sell your data. That’s true, in that there’s no Facebook store where advertisers can go and shop for Sheryl Sandberg’s personal data. Rather, Facebook sells advertisers the ability to reach a very specific subset of people based on the data Facebook has. As Facebook seeks to expand beyond its social networks (remember its ill-fated Facebook phone and Portal, the video chat device Facebook sells), the company could market its facial recognition services to airlines, retail credit card processors, or any brick-and-mortar operation that wants to verify customers’ identity with a quick photo.



Just as it doesn’t outright sell the data from all the comments and likes you leave, Facebook told me it also doesn’t allow third parties to access its face database. “Even if someone were able to get access to a template, we’ve built it using a way that’s intentionally not interoperable with the standards that other face recognition systems use,” Rochelle Nadhiri, a spokesperson for Facebook, wrote in an email. Any further use of the facial data, then, would require Facebook to do the interpretation of the images, just as Facebook interprets our preferences rather than handing over raw profile data to advertisers.



One major problem with all of this is, well, it’s Facebook. It may not sell your history of likes, but it’s broken promises to users about how it handles their data and possibly even the law. The Cambridge Analytica scandal was a reminder that, for years, it allowed third-party developers to cart tons of user data out of the social network. Facebook has allowed us to connect in ways that have fundamentally changed how our communities are organized, but it’s also primarily an ad company. The prioritization of that business has allowed for racist housing discrimination and anti-Semitic ad targeting that Facebook only clamped down on after it became a public embarrassment. There is very good reason to worry that if Facebook ever decides to make additional use of its massive trove of name-to-face data—perhaps as an opt-in form of ID—it will do so in a way that betrays users’ trust.



There’s been some momentum to rein in how governments use face recognition. In addition to cities like San Francisco and Somerville, Massachusetts, making moves to ban law enforcement from using facial recognition systems, legislators at the federal level are now paying attention. A hearing in Congress in May about facial recognition technology left lawmakers on both sides of the aisle expressing concerns about how unregulated the use of facial surveillance currently is. The Washington Post’s reporting this week on how the FBI and ICE obtain access to state databases is particularly troubling because many of these agencies’ requests are made regardless of whether someone is suspected of a crime and despite the fact that some states have actively encouraged undocumented immigrants to get driver’s licenses.



The conversation over how to use this technology shouldn’t stop with government use. Facebook’s facial recognition database could be benign, or it could be a slow-bubbling volcano. All we really know is that it’s massive enough to cause a lot of trouble.



Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.

Source: slate.com