The Interface is a daily column and newsletter about the intersection of social media and democracy. Subscribe here.

Yesterday, as I tried to sort through Twitter’s decision to ban political ads, I got a tantalizing tip from a new source. Cognizant, the professional services company whose dire workplace conditions I have spent much of this year investigating, was exiting the content moderation business.

To my surprise, the tip turned out to be true. The company announced it in an earnings call on Wednesday, without mentioning the names of Facebook, Google, or any of its other clients. Later that day, Facebook provided me with a statement from Arun Chandra, the company’s vice president of scaled operations.

“We respect Cognizant’s decision to exit some of its content review services for social media platforms,” Chandra said. “Their content reviewers have been invaluable in keeping our platforms safe — and we’ll work with our partners during this transition to ensure there’s no impact on our ability to review content and keep people safe.”

How did we get here?

As I wrote in The Verge:

In February, The Verge published an investigation into working conditions at the company’s site in Phoenix. Moderators at the site described being diagnosed with post-traumatic stress disorder after being subjected to a daily onslaught of graphic and disturbing images. Others said they had come to embrace fringe viewpoints after seeing videos about conspiracy theories on a regular basis. Multiple employees reported fearing for their safety after being threatened by coworkers. A follow-up report in June focused on a site in Tampa, FL, where moderators broke their non-disclosure agreements to describe a pattern of mistreatment by managers. They described working in offices that were often filthy, and where cases of sexual harassment had resulted in multiple complaints being filed with the Equal Employment Opportunity Commission.

Cognizant intends to finish out its contracts, which will begin to wrap up March 1st and then wind down throughout the remainder of 2020. Both of the sites I visited are closing as a result of Cognizant’s announcement yesterday, affecting more than 6,000 employees around the world.

Cognizant’s official reason for getting out of the business is that “this subset of work is not in line with the company’s strategic vision,” which could mean anything. Bloomberg, citing various analysts, said that over time the company has gotten worse at sales — particularly in attracting digital businesses like tech platforms. (It still made $499 million in profits last quarter, on the backs of thousands of employees making $15 an hour.) The Business Standard reported that Cognizant earned between $240 million and $270 million annually from content moderation.

A memo from CEO Brian Humphries to all employees, which someone sent me, told them that, while thousands of jobs would be eliminated, Cognizant would make a donation intended to spur the development of machine-learning systems that can take the place of human moderators:

While we intend to exit this work, we recognize that cleansing the web of objectionable content is a worthy cause and one in which companies have a role to play. For this reason, we have decided to allocate $5 million to fund research aimed at increasing the level and sophistication of algorithms and automation, thereby reducing users’ exposure to objectionable content.

It’s not clear where Cognizant plans to make that donation.

Facebook said it would make up for the loss by increasing the number of moderators it has working at a site in Texas, which is operated by Genpact.

Twitter wouldn’t tell me how heavily it relied on Cognizant, but a spokeswoman said: “The team is on it and working through the changes to make sure we’re supporting the people doing this work while also prioritizing keeping people on Twitter safe.”

Google, which uses Cognizant for moderation services in Poland and India, among other places, did not respond to my request for comment.

Moderators that I heard from over the past day reacted in various ways: with anger at losing their jobs, with shock at the suddenness of the announcement, and with relief that the sites where they were traumatized are going away.

“It is a mental and spiritual relief,” one former moderator at the Tampa site told me. “I still have nightmares about the content, but that will eventually go away.”

One source who worked as a manager told me that they believed Cognizant had acted to reduce its legal liability at a time when vendors are beginning to face lawsuits from former moderators who now have PTSD, along with a spate of sexual harassment complaints.

On one level, Cognizant’s exit from the moderation business probably won’t change much at the big tech platforms. There are plenty of other vendors to choose from — and as long as companies are offering $200 million contracts, as Facebook had for Cognizant, there always will be.

Still, the move speaks to the severe difficulty of this work, and the serious toll it takes day to day on thousands of people. The toll was so severe that a huge consulting company decided to quit the business rather than work to develop a fix. Maybe it felt like it couldn’t, given the constraints that the platforms put it under. (Facebook dictated nearly everything about the Cognizant contract, down to the office decor.)

A year ago, Vice called content moderation at Facebook “the impossible job.” This week, Cognizant announced that it had, indeed, found the work impossible. It’s not just Facebook that gets in over its head sometimes. It turns out its vendors do too.

Thank you

Thanks to everyone who came to The Glass Room this week to hear me talk with Clara Tsao about the many challenges of content moderation. As with last week’s event, I had a great time meeting so many newsletter readers in person and hearing your questions.

The Glass Room, incidentally, is a phenomenal, free exhibit about data and society that is way more fun to browse than it sounds. Highlights include a printed Rolodex of historical Mark Zuckerberg apologies, a machine where you can buy Instagram followers, and a cabinet full of “empty data sets” — things we ought to keep track of but don’t. It’s easily accessible from BART and Muni, and runs through Sunday. Don’t miss it — and whether or not you attend, I invite you to watch my talk with Clara here.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: YouTubers have raised more than $10 million as part of a fundraiser to plant trees across the globe, thanks to donations from Elon Musk and Susan Wojcicki. The goal is to raise $20 million by the end of the year.

Trending up: Twitter is getting better at using machine learning to surface abusive tweets and at suspending the accounts behind them, according to its latest transparency report. Of concern, though: legal demands from countries went up 67 percent.

Trending down: A nationwide Airbnb scam involving fake listings and reviews shows how easy it is to take advantage of the company’s community standards and lax enforcement policies. Content moderation — it’s not just for social platforms!

Governing

⭐ Israeli cybersurveillance firm NSO Group hacked WhatsApp to spy on top government officials across 20 countries — a much wider group than was previously reported. The news could have broad political and diplomatic consequences, say Christopher Bing and Raphael Satter at Reuters:

Over the last several years, cybersecurity researchers have found NSO products used against a wide range of targets, including protesters in countries under authoritarian rule. The use of these tools to target high-profile politicians, however, is less understood. An independent research group working with WhatsApp, named CitizenLab, said at least 100 of the victims are journalists and dissidents, not criminals. WhatsApp has said it sent warning notifications to affected users earlier this week. “It is an open secret that many technologies branded for law enforcement investigations are used for state-on-state and political espionage,” said John Scott-Railton, a senior researcher with CitizenLab.

WhatsApp is suing NSO for violating the Computer Fraud and Abuse Act. It’s a novel application of the CFAA, which is typically used to punish people who breach a company’s own computers, not the devices of its customers. I hope it works! (Andy Greenberg / Wired)

Elsewhere, a day after the lawsuit was filed, Facebook apparently deleted the accounts of NSO employees for breaching the platform’s terms of service. (Dan Goodin / Ars Technica)

Local politicians in Washington continue to run ads on Facebook and Google, even after the companies banned them from doing so last year. The state has some of the strictest campaign finance laws in the country, but enforcement hasn’t been easy. (Makena Kelly / The Verge)

The ACLU sued the Justice Department and the FBI over their use of facial recognition technology. They’re arguing that the agencies have secretly implemented a nationwide surveillance technology that threatens Americans’ privacy and civil rights. (Drew Harwell / The Washington Post)

Half the country votes on machines made by ES&S. The company has had repeated controversies over losing votes and faulty machines, and it has maintained its dominance through a lack of regulation and oversight. Sound familiar? (Jessica Huseman / ProPublica)

Some people are suggesting that the US needs a hate speech law. The idea is that the First Amendment was designed for a pre-internet era and is no longer sufficient for this country. (On the other hand, the First Amendment is good.) (Richard Stengel / The Washington Post)

Aaron Sorkin, the screenwriter behind The Social Network and Steve Jobs, wrote an op-ed in The New York Times urging Mark Zuckerberg to rethink his stance on allowing politicians to lie in Facebook ads. This absolutely cursed take was overshadowed by people pointing out that Sorkin’s movies are wildly fictionalized, making his sanctimony more than a little rich. (Aaron Sorkin / The New York Times)

Americans’ trust in local news is being exploited by networks of impostor sites that promote ideological agendas. Some are backed by conservative think tanks. (Brendan Nyhan / The New York Times)

The developer behind the popular text- and code-editing software Notepad++ released a “Free Uyghur” edition to promote awareness for the ethnic minority’s persecution in China. The project was immediately swamped with Chinese spam. (Colin Lecher / The Verge)

Employees at the data visualization company Tableau held a rally in Seattle, asking leadership to sever ties with ICE and Customs and Border Protection. This is the latest in a series of employee protests over government contracts with ICE. (Monica Nickelsburg / GeekWire)

Industry

⭐ A dangerous fake cancer cure continues to flourish in Facebook groups. “Black salve” burns through human skin, but Facebook says the groups don’t violate its community guidelines. Katie Notopoulos reports at BuzzFeed News:

Even as Facebook has cracked down on anti-vaxxers and peddlers of snake oil cure-alls, a particularly grotesque form of fake cancer treatment has flourished in private groups on Facebook. Black salve, a caustic black paste that eats through flesh, is enthusiastically recommended in dedicated groups as a cure for skin and breast cancer — and for other types of cancer when ingested in pill form. There’s even a group dedicated to applying the paste to pets. A Facebook spokesperson told BuzzFeed News that these groups don’t violate its community guidelines. This summer, it launched an initiative to address “exaggerated or sensational health claims” and will downrank that content in the News Feed, similar to how it handles clickbait. But it’s not clear how it defines what a “sensational” health claim is. Citing user privacy, Facebook would not say whether or not it had downranked the black salve groups in the News Feed.

Dead Facebook users could outnumber living ones within 50 years, according to research from the Oxford Internet Institute. If user numbers continue to grow 13 percent every year, there will be 4.9 billion dead users by 2100. I look forward to being one of them.

Dr. Susan Desmond-Hellmann, the CEO of the Bill and Melinda Gates Foundation, is stepping down from Facebook’s board of directors. She said in a statement that she’s going to focus on her health and family.

A confidential document from Sidewalk Labs revealed the founding vision of the Google-affiliated urban development company. It included having the power to levy its own property taxes, and to track and predict people’s movements. Sounds like a fun tourist destination! (Tom Cardoso and Josh O’Kane / The Globe and Mail)

Sarah Emerson tracked an AmazonBasics battery back to its point of origin. The company has historically been fiercely secretive about its corporate footprint, masking its operations through a discreet network of third-party vendors. (Sarah Emerson / OneZero)

And finally...

I’m not typically one to call anything an “epic clapback” — but by Mark Zuckerberg’s typically low-key standards, I have to say that this qualifies. In response to Aaron Sorkin’s bloviating in the New York Times, Zuckerberg simply quoted Sorkin’s own screenplay for The American President:

“America isn’t easy. America is advanced citizenship. You gotta want it bad, ’cause it’s gonna put up a fight. It’s…” (Posted by Mark Zuckerberg on Thursday, October 31, 2019)

God I miss that version of Sorkin.

Talk to us

Send us tips, comments, questions, and alternative content moderation vendors: casey@theverge.com and zoe@theverge.com.