Twitter’s updated T&Cs look clearer — yet it still can’t say no to Nazis

Twitter has taken a pair of shears to its user rules, shaving almost 2,000 words off its T&Cs — with the stated aim of making it clearer to users what is not acceptable behaviour on its platform.

It says the rules have shrunk from 2,500 words to just 600 — with each reworded rule now fitting within a pithy tweet length (280 characters or fewer).

Though each tweet-length rule is still followed by plenty of supplementary detail, in which Twitter explains the rationale behind it, provides examples of what not to do and sets out potential consequences. So the full rule-book still runs way over 2,500 words.

“Everyone who uses Twitter should be able to easily understand what is and is not allowed on the service,” writes Twitter’s Del Harvey, VP of trust and safety, in a blog post announcing the changes. “As part of our continued push towards more transparency across every aspect of Twitter, we’re working to make sure every rule has its own help page with more detailed information and relevant resources, with abuse and harassment, hateful conduct, suicide or self-harm, and copyright being next on our list to update. Our focus remains on keeping everyone safe and supporting a healthier public conversation on Twitter.”

The newly reworded rules can be found at: twitter.com/rules

We’ve listed the tweet-sized rules below, without any of their qualifying clutter:

Notably, the rules make no mention of fascist ideologies being unwelcome on Twitter’s platform, although a logical person might be forgiven for thinking such hateful stuff would naturally be prohibited, based on the core usage principles Twitter states here (such as a ban on threatening and/or promoting violence against groups of people, including on the basis of their race, ethnicity and so on).

But for Twitter, Nazism remains, uh, ‘complicated’.

The company recently told Vice it’s working with researchers to consider whether or not it should ban Nazis. Which suggests its new ‘pithier’ rules are missing a few qualifying asterisks.

Here, we fixed one:

You may not threaten violence against an individual or a group of people*. We also prohibit the glorification of violence**. *unless you’re a Nazi **white supremacists totally get a pass while we mull the commercial implications of actually banning racist hate

Another abuse vector that continues to look like a blind spot in Twitter’s rule-book is sex.

While the company does include both ‘gender’ and ‘gender identity’ among the many categories it stipulates users must not direct harassment at, or promote violence against, it does not offer the same shield based on a user’s sex. Which appears to have resulted in instances where Twitter has deemed tweets containing violent misogyny not to be in violation of its rules.

Last month a Twitter UK public policy rep told the parliamentary human rights committee, which had raised the issue of the violent sexist tweets, that it believed the inclusion of gender should be enough to protect against instances of violent misogyny, despite having demonstrably failed to do so in the selection of tweets the committee put to it.

We’ve asked Twitter about its continued decision not to prohibit harassment and threats of violence against users based on their sex, as well as its ongoing failure to ban Nazis and will update this report with any response. Update: A Twitter spokeswoman has now sent us this statement:

The Twitter Rules exist to help ensure everyone feels safe expressing their beliefs and we strive to enforce them with uniform consistency. Our Hateful Conduct Policy does not allow people to promote violence against or directly attack or threaten other people on the basis of certain protected categories such as sexual orientation, gender and gender identity among others. Meanwhile our Violent Extremism policy expressly states that a user may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people. This includes – but is not limited to – threatening or promoting terrorism.

In addition to editing down the wording of its rules, Twitter says it has thematically organized them under three new categories — safety, privacy, and authenticity — to make it easier for users to find what they’re looking for.

Though it’s not quite that at-a-glance clear on the rules page itself — which also includes a general preamble; a note on wider content boundaries; a section dealing with spam and security; and an addendum on content visibility restrictions that Twitter may apply in cases where it suspects an account of abuse and is investigating.

But, as ever, algorithmically driven platforms are anything but simple.

Hideously wordy T&Cs have of course been a tech staple for years, so it’s good to see Twitter paying greater attention to the acceptable-conduct signals it gives users — and at least trying to boil down a clearer essence of what isn’t acceptable behaviour, albeit tardily.

But, equally, the refreshed wording of what’s unacceptable makes it plainer that Twitter retains stubborn blind spots that allow its platform to be a conduit for targeted racial hatred.

Perhaps these blind spots are commercially motivated, in the case of far-right ideologies. Or perhaps Twitter’s leadership is still so drunk on its own philosophical Kool-Aid that it really has fuzzed the lines between fascism and, er, humanity.

If that’s the case, no pithily written rules will save Twitter from itself.

Don’t forget, this is a company that has been promising to get a handle on its abuse problem for years. Including — just last year — taking a grand stand about wanting to champion ‘conversational health’.

Yet it still can’t screw its courage to the sticking place and say no to Nazis.

Twitter’s multi-year struggle to respond to baked-in hate would be farcical at this point — if the human impacts of amplifying racial and ethnic hatred weren’t a tragedy for all concerned.

And had it found a moral compass when it was first being warned about the rising tide of amplified abuse, it’s entirely possible one of its most high-profile users might not be a geopolitical mega-bully known to retweet fascist propaganda.

Chew on that, Jack.