In December, Twitter announced new and stricter rules banning bigoted content and hate groups from its platform. It also said it would begin enforcing its anti-hate and violence rules more stringently than it has in the past.

The company was responding to pressure from its users, who have begged for both clearer rules and stronger enforcement for years.

“Freedom of expression means little if voices are silenced because people are afraid to speak up,” the document reads. That’s a new line for a company that had long insisted that only good speech could fight bad speech, even in privately owned forums like its own service.

According to Twitter, the rules ban content that includes “a violent threat or multiple slurs, epithets, [and] racist or sexist tropes,” as well as material that “incites fear, or reduces someone to less than human.” They also prohibit groups that advocate violence against civilians.

Depending on how they’re interpreted, the new rules could give moderators broad latitude to suspend and ban users who encourage violence against civilians or propagandize for hate groups. The guidelines do not draw a distinction between user behavior on or off the site: If someone tweets only in coded language on Twitter, but calls for racial violence or genocide elsewhere on the web or in person, then they could still be banned from the service.

While logos or symbols affiliated with hate groups will not by themselves result in someone getting banned, they will carry a sensitive-media tag, meaning that they will not automatically display to the site’s users.

But “context matters when evaluating for abusive behavior,” warned Twitter, and the company included two big exceptions in the new policy. First, the ban on advocating violence against civilians does not apply to “military or government entities.” Second, Twitter may waive its own rules if “the behavior is newsworthy and in the legitimate public interest.”

It wasn’t hard to guess which famous Twitter user those loopholes most obviously protect.

The two highest-profile users to get kicked off the service since the rule change are Jayda Fransen and Paul Golding, the leaders of Britain First, an ultranationalist and virulently anti-Islam U.K. political party and “street-defense organization.”

In November, President Trump retweeted a few of Fransen’s fake anti-Muslim videos to his more than 43 million followers. British Prime Minister Theresa May condemned the president’s retweets, saying that Britain First spread “hateful narratives that peddle lies and stoke tensions.” Britain First has an estimated 1,000 followers in the United Kingdom.

These rules aren’t just an insurance policy for the company; they’ve already been used to shield the president from suspension. In September, when Trump warned in a tweet that “Little Rocket Man ... won’t be around much longer,” the company said that the threatening tweet didn’t violate its guidelines because it was “newsworthy.”

Now the company has added another policy to that shield. Wednesday’s decision makes clear that heads of state, including President Trump, now receive the same monopoly on violence on Twitter that they already enjoy out in the world.
