Anonymity, we often assume, is the breeding ground for bad behavior on the internet. Among the gatekeepers of comment sections and social media sites, the conventional wisdom is that anonymity empowers bullies to voice hateful opinions without consequence. When unmasked by real-name policies, the theory goes, these trolls will slink back to their caves, taking the vitriol from Twitter, Facebook and other social media with them.

Not true, says Lea Stahel, a sociology researcher at the University of Zurich.

Stahel and a team at the university’s Institute of Sociology wanted to know whether anonymity really encouraged the worst kind of behavior seen in online “firestorms.” These are moments when a public figure or group evokes the ire of commentators, who direct thousands or millions of negative messages at their subjects. The harassment of women in the video-gaming community, known as “Gamergate,” and the recent attack on the Ghostbusters actress Leslie Jones are just two examples.

In research published this June in the journal PLOS ONE, Stahel studied comments on online petitions published on a German social media platform between 2010 and 2013. The data included 532,197 comments on about 1,600 online petitions. Commentators could choose to be public or anonymous. Contrary to expectations, the commentators with the harshest words during mass public attacks were more likely to be name-identified than anonymous (fewer than a third of commentators kept their names private).

That suggests we may need to rethink our efforts to encourage or enforce civility online. “Our results also do not support claims that prohibiting online anonymity will make the online world a better world,” Stahel explained by email. “The main point is that prohibiting anonymity online will not settle this ‘problem’ of firestorms.”

Indeed, for some trolls, online aggression is rewarded within their social networks and often serves as a deliberate public signal: people are trying to enforce social norms against a perceived violation by a public figure or group. That means individuals are rewarded and seen as more credible within their group once they are identified, argues Jurgen Pfeffer, a computer science professor at Carnegie Mellon. “In such structures it is very likely that, if somebody says something aggressive, the majority of the group says ‘Yeah,’” he explained by email.

Pfeffer cautioned against generalizing the findings too broadly. Anonymity may lower the threshold for aggression in some cases, and encourage the use of bots, automated “users” that amplify trending topics (Twitter has admitted that about 8.5% of its users may be bots).

But banning anonymity, the authors conclude, is unlikely to stop aggressive firestorms, and could even exacerbate them by fostering group dynamics in which people are more likely to follow others who share their beliefs. Any solution will be a “tightrope walk between securing free expression of opinion and preventing hate speech.”

The image above was taken by Working Word and shared under a Creative Commons license on Flickr.