Have you ever seen a post on Facebook that you were surprised wasn’t removed as hate speech? Have you flagged a message as offensive or abusive but the social media site deemed it perfectly legitimate?

Users on social media sites often express confusion about why offensive posts are not deleted. Paige Lavender, an editor at HuffPost, recently described her experience learning that a vulgar and threatening message she received on Facebook did not violate the platform’s standards.

Here is a selection of statements that combine examples from a Facebook training document with real-world comments found on social media. Most readers will find them offensive. But can you tell which ones would run afoul of Facebook’s rules on hate speech?

Hate speech is one of several types of content that Facebook reviews, in addition to threats and harassment. Facebook defines hate speech as:

An attack, such as a degrading generalization or slur, targeting a “protected category” of people, including one based on sex, race, ethnicity, religious affiliation, national origin, sexual orientation, gender identity, or serious disability or disease.

Facebook’s hate speech guidelines were published in June by ProPublica, an investigative news organization that is gathering users’ accounts of how the social network handles hate speech.

Danielle Citron, an information privacy expert and professor of law at the University of Maryland, helped The New York Times analyze six deeply insulting statements and determine whether they would be considered hate speech under Facebook’s rules.