For its next move in the seemingly unending war against threats, abuse, and harassment on its service, Twitter is expanding its definition of prohibited behavior and testing a new feature that automatically identifies abusive tweets using signals such as an account's age.

"Our previous policy was unduly narrow."

The company has updated its policy on violent threats: where it previously prohibited only "direct, specific threats of violence against others," the new wording covers "threats of violence against others or promot[ing] violence against others." Twitter says this will give it more scope to stamp out abuse, with the company's director of product management, Shreyas Doshi, commenting: "Our previous policy was unduly narrow and limited our ability to act on certain kinds of threatening behavior."

Doshi adds that the company is also introducing extra tools to deal with violations, with a new option letting members of staff lock abusive accounts for extended periods of time. This is in addition to existing tools that force users to delete abusive tweets or verify their account with a phone number. "This [new] option gives us leverage in a variety of contexts, particularly where multiple users begin harassing a particular person or group of people," says Doshi.

Abusive tweets will be identified using criteria such as the age of the account

Twitter is also testing a new feature that will help it identify abusive tweets and "limit their reach" on the service. The tool will spot infringing messages based on a number of criteria, such as the age of the account (trolls often start new accounts specifically to abuse or threaten someone) and whether or not a tweet is similar to those previously identified as abusive by moderators.
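Twitter has not published how these signals are actually combined or what thresholds it uses, but the two criteria it names could be combined along these lines. Everything below — the seven-day account-age cutoff, the similarity threshold, and the fuzzy-matching approach — is an invented illustration, not Twitter's implementation:

```python
# Hypothetical sketch of criteria-based abuse flagging. Account age and
# similarity to previously flagged tweets are the two signals Twitter
# mentions; how they are weighted here is an assumption.
from difflib import SequenceMatcher

# Tweets previously identified as abusive by moderators (toy example).
FLAGGED_TWEETS = [
    "you should be scared, i know where you live",
]

def similarity_to_flagged(text: str) -> float:
    """Return the highest fuzzy-match ratio (0.0-1.0) against flagged tweets."""
    return max(
        (SequenceMatcher(None, text.lower(), f).ratio() for f in FLAGGED_TWEETS),
        default=0.0,
    )

def should_limit_reach(text: str, account_age_days: int,
                       threshold: float = 0.8) -> bool:
    """Flag a tweet when a brand-new account posts a message closely
    resembling abuse moderators have already seen. Both the 7-day
    cutoff and the 0.8 threshold are invented for illustration."""
    is_new_account = account_age_days < 7
    return is_new_account and similarity_to_flagged(text) >= threshold
```

A flagged tweet would then be suppressed in contexts like the target's mentions rather than deleted outright, consistent with "limiting reach" instead of removal.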

Twitter did not specify what "[limiting] the reach" of tweets entails, but as Doshi notes, the product will "not affect your ability to see content that you’ve explicitly sought out." This might mean that tweets identified by the new feature do not appear in a user's mentions, but can still be seen on the poster's timeline. Doshi adds that the feature does not identify tweets as abusive merely because they are "controversial or unpopular."

A graphic from Twitter shows how being locked out of an account might look to a user.

However, it’s clear that there’s no silver bullet for dealing with abuse. Despite Twitter's efforts, opportunities for bad behavior on the site seem to be hardwired. Yesterday, for example, the company announced a new setting allowing people to receive direct messages from anyone, even accounts they don't follow. Twitter touted the feature (turned off by default) as an opportunity for better communication, but many people pointed out that for some users — specifically women and people of color — it was just another invitation for abuse.