
This post has been updated.

Twitter began enforcing new rules today to suspend users affiliated with hate groups “on and off the platform,” a policy that has already led to the disabling of some alt-right accounts.

Under the policy, initially announced in November, Twitter also started penalizing users whose profiles include “hateful imagery and display names,” presumably including Nazi insignia, as well as those who use a “username, display name, or profile bio to engage in abusive behavior.”

For Twitter, the two new restrictions are attempts to combat rampant harassment and abuse on the site. Users affiliated with the alt-right or neo-Nazi movements in particular have seized on the company’s notoriously lax oversight to stoke racial tensions, peddle false news reports and attack their critics, including Democrats. Earlier this year, they organized a neo-Nazi rally in Charlottesville, Va., with the aid of the platform.

Ahead of the Dec. 18 enforcement deadline, some of Twitter’s right-leaning users spent the weekend fearing a full, messy “purge.” Some said they’d be shifting to Gab, an alt-right-friendly social media site, and encouraged their supporters to do the same.

And by Monday morning, some alt-right accounts had indeed gone offline. Those included the account of the white supremacist group American Renaissance as well as some users tied to the far-right group Britain First, among them an account that President Donald Trump once retweeted.

“Today, we are starting to enforce these policies across Twitter,” the company said in a blog post today. “In our efforts to be more aggressive here, we may make some mistakes and are working on a robust appeals process. We’ll evaluate and iterate on these changes in the coming days and weeks, and will keep you posted on progress along the way.”

To be sure, Twitter never explicitly mentioned alt-right or neo-Nazi groups in the rules it first previewed in November. Rather, its new policy more broadly sought to outlaw “specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people.”

Notably, though, Twitter has said it will monitor groups’ behavior off the platform as it decides which users have run afoul of its new guidelines.

“You also may not affiliate with organizations that — whether by their own statements or activity both on and off the platform — use or promote violence against civilians to further their causes,” the policy says.

For months, Twitter has felt pressure — from users in the U.S. and regulators around the world, particularly in Europe — to crack down on hate speech. Recently, the company has started stripping verification status — the infamous blue checkmarks — from users who violate its policies.
