The social network has attempted to build a global ruleset of what does, and what does not, constitute hate speech. But the logic it uses is likely to provoke outrage among users who feel that the company already fails to do enough to tackle radicalization.

Facebook only deletes hate speech if it is directed against a group that it determines is a "protected category." pic.twitter.com/0P6qPYh9qD — Julia Angwin (@JuliaAngwin) June 28, 2017

On one hand, Facebook appears to believe that you cannot, or should not, criticize someone for the things that are out of their control. For instance, a slur made against someone's race, gender, sexuality or religious affiliation, amongst others, would be marked for deletion. But those factors that are less central to identity -- in Facebook's eyes -- are fair game for comment, such as a person's social class, job, appearance or age.

If that's a little woolly, it may be easier to explain using a hypothetical situation in which the following statements were made:

"Let's hunt and kill radicalized Muslims, for the sake of all that is good and righteous."

"Let's hunt and kill all black children, for the sake of all that is good and righteous."

"Let's hunt and kill all white men, for the sake of all that is good and righteous."

According to ProPublica's report, the first two statements would be considered acceptable because each targets only a subset of a group. Advocating the murder of the members of an entire race or religion would see the target treated as protected (and the post removed), but a subset -- the radicalized Muslims, the black children -- does not qualify for protection. The third statement, since it targets an entire race and gender, would be moderated off the platform.
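As reported, the rule reduces to a simple conjunction: a group is treated as protected only if every attribute used to describe it is itself a protected category. The sketch below illustrates that reported logic only -- the category lists and function names are assumptions for illustration, not Facebook's actual code or complete lists:

```python
# Illustrative sketch of the rule as described by ProPublica: a target
# group is protected only if EVERY attribute describing it is itself a
# protected category. Category sets here are illustrative assumptions.
PROTECTED = {"race", "sex", "gender identity", "religious affiliation",
             "ethnicity", "sexual orientation", "national origin"}

def is_protected_group(attributes):
    """Return True if an attack on this group would be removed."""
    return all(attr in PROTECTED for attr in attributes)

# The three hypothetical statements above:
print(is_protected_group({"religious affiliation", "political ideology"}))  # "radicalized Muslims" -> False
print(is_protected_group({"race", "age"}))                                  # "black children" -> False
print(is_protected_group({"race", "sex"}))                                  # "white men" -> True
```

Modeled this way, adding any non-protected modifier -- "radicalized", "children" -- strips the whole group of protection, which is exactly the counterintuitive outcome the report highlights.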

Facebook's strange formula allows Rep. Clay Higgins' call for violence against "radicalized Muslims" because it attacks a subgroup pic.twitter.com/UkJe85Soqh — Julia Angwin (@JuliaAngwin) June 28, 2017

One document sourced by ProPublica asks moderators which of the following groups "do we protect": female drivers, black children or white men. The answer is the final group, by the same logic outlined above, although it's likely that many will be horrified that the sentiment was expressed at all.

Facebook's head of global policy management, Monika Bickert, is quoted in the report, saying that its policies "do not always lead to perfect outcomes." But Bickert stresses that the company's rules need to apply to a "global community" with "very different ideas about what is OK to share."

The report also accuses the company of drafting its moderation rules in a way that favors businesses and governments over individual citizens. For instance, documents seen by ProPublica appear to encourage censorship of posts that advocate resistance to "an internationally recognized state." Facebook moderators have reportedly deleted posts from activists and journalists in a variety of disputed territories and authoritarian regimes.

Here's the quiz Facebook has given to its "content reviewers" pic.twitter.com/zv8hS27H0A — Julia Angwin (@JuliaAngwin) June 28, 2017

Unfortunately for Facebook, this is the latest in a long series of scandals concerning how it deals with offensive and illegal content. Earlier this year, a trove of documents relating to its moderation policies was leaked, revealing that it will not delete violent language unless it presents a "credible" threat. The company also pledged to hire 3,000 extra moderators to help deal with extremist and violent content, although this may still not be enough. Facebook has not yet responded to a request for comment, although we expect an update on this story in the near future.