As queer artists and activists who have challenged Facebook’s “real names” policy for three years, we’re alarmed by a new trend: Many LGBTQ people’s posts have been blocked recently for using words like “dyke,” “fag,” or “tranny” to describe ourselves and our communities.

Dottie Lux (@redhotsburlyq) is an event producer and the creator of Red Hots Burlesque, a queer burlesque and cabaret; she is also a co-owner of The Stud Bar, a San Francisco Legacy Business. Lil Miss Hot Mess (@lilmisshotmess) is a PhD student in media studies at NYU by day and a drag queen by night. Both are organizers with the #MyNameIs campaign.

While these words are still too often shouted as slurs, they’re also frequently “reclaimed” by queer and transgender people as a means of self-expression. However, Facebook’s algorithmic and human reviewers seem unable to accurately parse the context and intent of their usage.

Whether intentional or not, these moderation failures constitute a form of censorship. And just like Facebook’s dangerous and discriminatory real names policy, they demonstrate how the company’s own practices often amplify harassment and cause real harm to marginalized groups like LGBTQ people, communities of color, and domestic violence survivors—especially when reporting tools are used as a form of bullying to silence other users for their identities or political activities.

For example, we’ve received reports from several people whose posts about their LGBTQ activism were taken down. Ironically, one was attorney Brooke Oliver, who posted about a recent Supreme Court ruling related to her historic case that won Dykes on Bikes (a group of motorcycle-riding lesbians that traditionally leads gay pride parades) a trademark.

Two individuals wrote that they were reported for posting about the return of graphic novelist Alison Bechdel’s celebrated Dykes To Watch Out For comic strip. One happened to be Holly Hughes, who is no stranger to censorship: She’s a performance artist and member of the infamous NEA Four. A gay man posted that he was banned for seven days after sharing a vintage flyer for the 1970s lesbian magazine DYKE, which was recently featured in an exhibition at the Museum of the City of New York. A queer poet of color’s status update was removed for expressing excitement in finding poetry that featured the sex lives of “black and brown faggots.”

A young trans woman we heard from was banned for a day after referring to herself as a “tranny” alongside a selfie that proudly showed off her new hairstyle. After she regained access, she posted about the incident, only to be banned again for three more days. She also highlighted double standards in reporting, noting that in her experience men often use the term to harass her, but are rarely held accountable. Many others also shared stories of reporting genuinely homophobic, transphobic, racist, and sexist content, only to be told it didn’t violate Facebook’s “Community Standards.”

Additionally, former RuPaul’s Drag Race contestant Honey Mahogany was unable to purchase an ad featuring the hashtag #blackqueermagic for an event that features a cast of African-American performers. It turns out that Facebook prohibits ads with “language that refers to a person’s age, gender, name, race, physical condition, or sexual orientation” (though it is easy enough to target users based on identity regardless). While such policies may rightfully prevent discrimination in legally protected areas like employment or housing, they cast too wide a net and ultimately discriminate against communities in cases like this.

And these stories are just the tip of the iceberg. Facebook, of course, has recently seen many public controversies for (temporarily) removing content like Nick Ut’s famous photo of Kim Phúc fleeing a napalm attack and video of Philando Castile’s murder by police.

Interestingly, in a recent blog post on the difficulty of moderating hate speech, Facebook vice president Richard Allan offered “dyke” and “faggot” as challenging examples, noting that, “When someone uses an offensive term in a self-referential way, it can feel very different from when the same term is used to attack them.”

However, as with its real names policy, while Facebook’s intentions may be noble, its algorithms and human-review teams still make too many mistakes. The company is also increasingly under pressure from users, groups, and now governments to improve its procedures—Germany just passed legislation requiring social media companies to remove hate speech.

We’ve identified four interrelated problems.

First, Facebook’s leadership doesn’t seem to understand the nuances of diverse identities. As leaked documents recently published by ProPublica indicate, its policies aim to prevent harassment of users based on “protected categories” like race, gender, and sexual orientation; however, by making exceptions for subsets of protected groups, the company’s protocols paradoxically “protect white men from hate speech but not black children,” as ProPublica reported. Such a color-blind and non-intersectional approach fails to acknowledge the ways in which different groups are discriminated against differently. (It is also not too surprising that Facebook ultimately protects white men, given its employee demographics.)