Twitter, police and law courts are waking up to the ugly abuse that women endure online, with systems, algorithms and trials closing in on perpetrators

Reporting abuse is becoming easier (Image: Guido Mieth/Getty)

“CAN’T wait to rape you.” That’s one of the anonymous messages sent earlier this year to Janelle Asselin, a comic-book editor and writer based in Los Angeles.

Asselin had written an article criticising the cover of the comic book Teen Titans #1. The response: a slew of tweets and comments from people questioning her credentials, calling her unprintable names and sending rape threats. “Most of the women I know with a solid online presence get them regularly. This is just a thing we are forced to deal with,” Asselin later wrote in a blog post.

Most of the women I know with a solid online presence get rape threats regularly


Asselin’s story is not unusual. Concern over such online harassment has escalated in the wake of Gamergate, a movement that purports to call for better ethics in game journalism but has become known for ugly attacks aimed at its detractors.

Now, it seems, the tide may finally be turning. Social-media companies, activist groups and even federal judges are rethinking how best to handle online abuse – and how to make internet users feel safe.

“The goal of these online mobs is to scare you enough to silence you,” says Allyson Kapin, founder of online-marketing firm Rad Campaign and the Women Who Tech network in Washington DC. “People really want to see social networks step in and begin to take a stand against online harassment.”

Twitter in particular has stepped up its game. In November it teamed up with Women, Action, and the Media, a gender-justice group that will study how the company handles abuse problems and suggest possible improvements. And last week, the firm announced changes that will make it easier to flag problematic messages and accounts.

Attention was drawn to the issue of social-media abuse this year by several high-profile cases: game developer Brianna Wu was forced to flee her home; feminist media critic Anita Sarkeesian was scared away from speaking at Utah State University over death threats; and actress Zelda Williams was driven to social-media silence after nasty messages about her father Robin Williams’s suicide.

But such harassment is not restricted to a few unlucky targets – a study by the Pew Research Center released earlier this year revealed that 40 per cent of internet users have been harassed online. More than half said that they did not know the perpetrator, and 66 per cent said the most recent abuse occurred on social media, rather than in emails, comment sections or online games.

Forty per cent of internet users have been harassed online. More than half did not know the perpetrator

Twitter’s upcoming changes make it easier and faster to report harassers. Before, users had to fill out a nine-part questionnaire to report an offensive tweet; many of those questions have now been removed or streamlined. In addition, the company is attempting to better tackle the sheer volume of complaints by prioritising tweets that are flagged by large groups of people.
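Prioritising by how many distinct people flag a message can be sketched in a few lines. This is a hypothetical illustration of the idea, not Twitter's actual system; the function name and data layout are assumptions:

```python
from collections import defaultdict

def prioritise_reports(reports):
    """Order reported tweets so that those flagged by the most
    distinct users are reviewed first.

    `reports` is an iterable of (tweet_id, reporter_id) pairs.
    Returns tweet ids sorted by descending count of unique reporters.
    """
    flaggers = defaultdict(set)
    for tweet_id, reporter_id in reports:
        # Using a set de-duplicates repeat reports from the same user
        flaggers[tweet_id].add(reporter_id)
    return sorted(flaggers, key=lambda t: len(flaggers[t]), reverse=True)
```

Counting unique reporters rather than raw reports stops a single determined complainant (or bot) from pushing one tweet to the top of the queue.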

Artificial intelligence could help, too. At the Massachusetts Institute of Technology, Karthik Dinakar has developed algorithms that can detect abusive speech in online comments. Such a program could feasibly be implemented in any of the major social networks. Dinakar suggests that software such as his could encourage users to rethink aggressive messages before they hit “send” by, for example, imposing a 30-second waiting period or by querying a potentially troublesome message (“Do you really want to say that to 600 followers?”).
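A toy version of such a nudge might look like the sketch below. It uses a simple keyword lexicon rather than the machine-learning models Dinakar describes, and the function name, word list and message wording are all assumptions for illustration:

```python
# Hypothetical, deliberately tiny lexicon; a real system would use a
# trained classifier rather than keyword matching.
ABUSIVE_TERMS = {"rape", "kill yourself"}

def check_message(text, follower_count):
    """Return a reflective prompt if the draft message looks abusive,
    or None if it seems fine to send immediately."""
    lowered = text.lower()
    if any(term in lowered for term in ABUSIVE_TERMS):
        return f"Do you really want to say that to {follower_count} followers?"
    return None
```

A real deployment could combine this check with the 30-second waiting period Dinakar suggests, giving the sender a moment to reconsider before the message goes out.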

“People say things online that they would never say face-to-face or in person,” says Dinakar. “We need better tools not just for monitoring, but also to tell people what good digital citizenry is all about.”

Social networks can only do so much, however: deleted accounts can be quickly replaced by new ones. And many police departments are unsure how to investigate internet threats or, worse, dismiss them as less serious than real-world threats. In her book Hate Crimes in Cyberspace, University of Maryland law professor Danielle Citron argues that online harassment is a civil-rights issue that faces the same stigmas that stymied sexual-harassment cases in the 1970s.

Better police training could be a big help. At the College of Policing in the UK, a new internet-friendly curriculum will teach officers to take these reports more seriously and advise victims on how to collect evidence. Others have suggested equipping law-enforcement agencies with the spyware tools necessary to search for and ensnare offenders.

Some of the legal questions might be answered in Elonis v. United States, a case in which abusive comments made on Facebook by a man about his estranged wife are being scrutinised by the US Supreme Court. Although the case does not directly address mob-like harassment, such as that directed against women by Gamergate, it could signify a shift in how US law defines free speech online.

While the court deliberates, private companies have an opportunity to be agents of change, says Mary Anne Franks, a law professor at the University of Miami in Florida. As private companies they are not bound by free-speech protections, and can instead set their own rules for what is permissible on their platforms.

“We have this tremendous potential at our disposal,” Franks says. “A company like Google or Twitter or Facebook could easily revolutionise how we engage in discourse.”

Leader: “Stamping out online abuse will take a concerted effort”

Five ways to protect yourself online

If you are being threatened online, there are things you can do to protect yourself.

Save everything: Even if you would rather just delete the threats, it’s important to save copies of everything. Those documents could help authorities to find and prosecute the offender.

Report the abuse: Most social-media sites offer users a way to report harassment. You can also file a complaint with the police or online at the Internet Crime Complaint Center if you are in the US.

Filter it: San Francisco engineer Randi Harper got fed up with negative reactions to a blog post she had written about her experiences with sexism and harassment. She built “Good Game Auto Blocker”, which automatically finds Twitter users associated with Gamergate mobs and adds them to a block list.

Hire a detective: If the situation gets really bad, firms such as Cyber Investigation Services in Tampa, Florida, specialise in exposing anonymous harassers through psychological profiles, internet forensics and decoy websites.

Tell their mum: Australian video-game journalist Alanah Pearce sometimes receives threats on her public Facebook page. When she realised that many of her trolls were just kids, she started tracking down their mothers’ profiles and sending screenshots of the concerning messages. One shocked mum forced her son to send Pearce a handwritten letter of apology.
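The auto-blocker idea in the box can be sketched as a simple rule: block any account that follows more than a threshold number of known mob ringleaders. This is a hypothetical reconstruction of the approach; the real tool's exact criteria, function name and threshold are assumptions:

```python
def build_block_list(follow_graph, ringleaders, threshold=2):
    """Return accounts that follow at least `threshold` known
    ringleader accounts.

    `follow_graph` maps each account name to the set of accounts
    it follows; `ringleaders` is a set of known mob organisers.
    """
    return {
        account
        for account, follows in follow_graph.items()
        # Set intersection counts how many ringleaders this account follows
        if len(follows & ringleaders) >= threshold
    }
```

Requiring two or more ringleaders, rather than one, reduces false positives: merely following a single prominent figure in a dispute is weak evidence of mob membership.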

This article appeared in print under the headline “Fight back against the hate”