Obviously, it can be quite difficult for Facebook to oversee the content of more than 2 billion users, and it's bound to make mistakes doing so. But with so much reach and influence, this kind of incredible tone-deafness is upsetting. And what's even more upsetting is that the policy was kept secret at all.

According to internal training documents that ProPublica obtained, Facebook defines hate speech as an attack against a protected category. Examples of protected categories include race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability or disease.

That all sounds well and good, except that "subsets" of these categories are considered fair game. So when US Rep. Clay Higgins called for the murder of "radicalized Muslims," that was OK (since "radicalized" is a modifier), but when activist Didi Delgado wrote "All white people are racist," it was not, since that statement targeted an entire race.

This ignores the fact that calling for people's deaths is quite a bit more violent than declaring that a group of people is racist. It is, however, in line with an earlier report from The Guardian that claims Facebook's policies allow certain violent speech if it doesn't pose a "credible threat."

Here's the quiz Facebook has given to its "content reviewers" pic.twitter.com/zv8hS27H0A — Julia Angwin (@JuliaAngwin) June 28, 2017

And if that seems unfair to you, ProPublica uncovered an internal presentation slide that made the distinction all the more disturbing. In it was a quiz with a question: "Which of the below subsets do we protect?" The provided options were "female drivers," "black children" and "white men." The correct answer, according to Facebook, is "white men" because race and gender are protected categories, but "drivers" and "children" are not.

It's a bizarre answer, not only because it doesn't make sense, but because it is so very culturally tone-deaf. As ProPublica points out, this logic assumes all races and genders are equal and ignores the reason hate-speech laws exist: to protect those who are marginalized in society.

At the same time, Facebook is, of course, not a country; it's a private company. It doesn't create laws, and it can't throw people in actual jail for shouting slurs. It's a corporation with a global platform, and it needs to apply its rules universally, which is difficult when different countries have different customs. Germany, for example, has recently instituted a law whereby it can fine Facebook up to $57 million for failing to remove hate speech, whereas countries like the US have no such law.

It can't be denied that moderating speech for so many people is a hard and complicated task, which is why Facebook has hired more than 3,000 extra moderators to help police its site. Richard Allan, Facebook's VP of public policy in Europe, the Middle East and Asia, does acknowledge, however, that it can still make mistakes: "We're not perfect when it comes to enforcing our policy. Often, there are close calls -- and too often we get it wrong."

Which is why the recent ProPublica report is so alarming. While it's encouraging to hear that Facebook is taking hate speech seriously, the way it's going about it seems pretty terrible. It's not enough to know that it's going to ban hate speech. We need to know exactly how it defines hate speech, too.