The team took the comment sections of three websites, CNN, Breitbart and IGN, looking at the contributions of 1.7 million users over 18 months as well as the up and down votes each post received. The researchers then dug in to work out what differentiates a banned user from those deemed to be worthwhile members of the community. It turns out, perhaps unsurprisingly, that trolls are pretty easy to spot.



For instance, trolls tend to write less coherently, and often with more profanity, than other users. They also concentrate their activity in a narrow group of threads and often generate more responses than less inflammatory commenters do. The team thinks this latter point is because they're adept at "luring others into fruitless, time-consuming discussions."



Naturally, while a little obnoxiousness is tolerated when a new user joins a community, that patience wears out over time, leading to an increased rate of post deletion and, eventually, banning. Familiarity also breeds contempt, and trolls were found to post significantly more frequently than other members of a site. For instance, one candidate for banning had written 264 missives on CNN, far in excess of the average of 22.

Loading all of these characteristics into a computer, the team was able to cook up an algorithm that they claim will identify trolls with a success rate of 74 percent. The researchers believe that a lot more work needs to be put in before comment services will be able to shoot down negative comments before they're read, but it is, at the very least, a promising start.
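To get a feel for how the signals above might be combined, here's a minimal hypothetical sketch. The feature names, weights and threshold are invented for illustration; the article doesn't detail the researchers' actual model, which would learn its weights from the banned-versus-not-banned labels rather than hard-coding them.

```python
def troll_score(posts_per_month, profanity_rate, readability, replies_per_post):
    """Combine per-user signals into a single score between 0 and 1.

    All inputs are hypothetical stand-ins for the signals named in the
    article: posting frequency, profanity, coherence, and the number of
    responses a user's posts draw.
    """
    # Weights are made up for illustration; a real classifier would fit
    # them to labeled data (banned vs. worthwhile users).
    return (
        0.3 * min(posts_per_month / 30.0, 1.0)     # posts far above the norm
        + 0.3 * profanity_rate                     # fraction of profane posts
        + 0.2 * (1.0 - readability)                # less coherent writing
        + 0.2 * min(replies_per_post / 10.0, 1.0)  # draws many responses
    )

def is_likely_troll(score, threshold=0.5):
    """Flag a user once their combined score crosses a cutoff."""
    return score >= threshold

# A heavy poster with profane, hard-to-read comments scores high...
heavy = troll_score(posts_per_month=40, profanity_rate=0.6,
                    readability=0.2, replies_per_post=8)

# ...while a typical community member scores low.
typical = troll_score(posts_per_month=4, profanity_rate=0.05,
                      readability=0.8, replies_per_post=1)
```

The 74 percent figure in the study suggests a model along these general lines can work, but also why human review still matters: roughly a quarter of its calls would be wrong.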