NEO-NAZIS ARE no longer welcome on Twitter — or, at least, some are not.

That is the murky message the social media platform has so far sent in enforcing its new rules to reduce hateful and abusive content. Twitter announced the policy change in November and started suspending violators this month. Executives are right that their hands-off approach to harassment was not working, but the public outcry from both sides so far is a sign that the alternative will pose problems of its own.

Twitter's new policies are a welcome answer to charges that the company provided a public platform to white supremacists leading up to this summer's deadly rally in Charlottesville. The rules prohibit "accounts that affiliate with organizations that use or promote violence against civilians" both online and off, as well as "content that glorifies violence or the perpetrators of a violent act." Preexisting restrictions on "behavior that harasses, intimidates or uses fear to silence another person's voice" have been expanded to include hateful imagery.

This specificity marks a significant change from earlier this year, when the site's slapdash method of tackling complaints, as recent reporting by BuzzFeed has revealed, suggested that few inside the company understood its own policies. The status quo is a step forward. But the rules are still subjective, and that subjectivity opens the door to treading on free speech.

The top official of the far-right British group Britain First, from whose account President Trump retweeted anti-Muslim videos last month, for example, received a suspension almost immediately after Twitter began cracking down. So did many American white nationalists. But David Duke and Richard Spencer were spared. The lines, it appears, are blurry.

Twitter will work with experts around the world to identify extremist organizations, but amorphous movements such as the so-called alt-right purposely dance around the explicit incitement the site wants to root out. It may prove difficult to determine whether a group or an individual account or tweet glorifies or provokes violence, in either intention or effect. And that confusion could lead to the stifling of some speech that is unsavory but should remain permissible.

Twitter has promised a "robust" appeals process. It will also ask users who are not affiliated with violent groups to delete offending content before it permanently suspends them, with some exceptions. And Twitter will place hateful imagery that appears in tweets behind a warning wall, instead of removing it altogether. These are all thoughtful efforts on Twitter's part to walk the line on unfettered expression. They do not make that walk any less precarious.

Twitter deserves credit for recognizing its role as a publisher and not just a platform. But as the site steps toward maturity, its leadership also must recognize that Twitter's unusually powerful position in society comes with the responsibility of protecting and promoting the values that made its success possible in the first place.