Photo: AP

In a new report from ProPublica, Facebook apologized for inconsistently policing hate speech on its platform. ProPublica reporters submitted 49 samples of posts from users who believed moderators had made the wrong call, either by removing legitimate expression or by allowing hate speech to remain online. In 22 of those cases, Facebook admitted its content reviewers erred. Acknowledging the mistakes, Vice President Justin Osofsky promised to double the size of the company's content review team to 20,000 in 2018.


When a dozen people flagged the Facebook group “Jewish Ritual Murder,” the site responded that the group did not violate its community standards. Similarly, when another user flagged a meme reading “the only good Muslim is a fucking dead one,” displayed over the body of a man who’d been shot in the head, they were told via an automated message: “We looked over the photo, and though it doesn’t go against one of our specific Community Standards, we understand that it may still be offensive to you and others.”

Facebook later reversed both decisions after ProPublica submitted them as part of its report.


“We’re sorry for the mistakes we have made — they do not reflect the community we want to help build,” Osofsky told ProPublica in a statement. “We must do better. Our policies allow content that may be controversial and at times even distasteful, but it does not cross the line into hate speech. This may include criticism of public figures, religions, professions, and political ideologies.”

A disturbing report from the Wall Street Journal on Wednesday profiling content moderators found that workers across Silicon Valley are given only a few minutes to review each flagged item. That is hardly enough time for moderators to apply a clear, consistent standard for hate speech, or to distinguish between critiquing a religion (which is protected) and attacking one (which is not).



Facebook, like Twitter, YouTube, et al., must confront two serious issues. The first is scale. With two billion users, the volume of flagged content to review is immense. At present, there’s no sustainable way for Facebook to scale its moderation efforts to match the amount of content users produce. Silicon Valley is turning to algorithms to help, but nothing indicates that machines will be a quick fix.

Second, Facebook has long valorized content neutrality and the First Amendment, essentially arguing that platforms should be laissez-faire about policing content unless absolutely necessary. This leads to vague, murky rules on hate speech, because they’re designed to trigger as little direct intervention from the platform as possible. How does minimal intervention work at this scale? It doesn’t, and Facebook knows it. The post-Charlottesville debate on platform accountability was the first reckoning for hazily defined rules on hate speech, and reports like ProPublica’s suggest that many more are to come.


[ProPublica]