A notorious internet troll—known for allegedly harassing feminists on Twitter and uploading YouTube clips edited to depict women saying degrading things—has forced a major social-media site to choose between free speech and punishing an alleged harasser. This latest controversy has heightened the online debate about whether tech giants are doing enough to protect victims of harassment.

Earlier this month, at least five women contacted Xavier Damman, the CEO of Storify, to complain that a user who goes by the handle “elevatorgate” was harassing female users via Damman’s popular social-media curation site. Storify allows users to compile public content from a wide spectrum of social-media networks, including Twitter, Facebook, and Flickr. The women told Damman that elevatorgate—who identifies himself as a “human rights activist” in his Storify bio—was republishing dozens, sometimes hundreds, of their tweets, triggering notification emails that flooded their inboxes.

Although elevatorgate’s use of Storify was limited to triggering a deluge of email alerts, the women who complained about him say he has a history of sending abusive and misogynistic messages on other social networks. Elevatorgate’s Twitter account is suspended, but his YouTube page includes a video of Rebecca Watson, a 32-year-old New Yorker who runs Skepchick, a site about feminism and atheism, edited to make it sound like she’s saying she “had sex with Richard Dawkins,” the famous evolutionary biologist and author. Another video on elevatorgate’s YouTube page has been edited to make it appear that a female writer says, “heck yeah, I want to hook up” and “would you like to come up to my room now and have sex?”

Storify’s terms of service specify that users can’t “publish, submit or transmit any content” that promotes “harassment.” But when several of the women raised the point that this behavior might constitute harassment, Damman tweeted back with a link that cited Voltaire’s vow to defend freedom of speech to the death. “[I] can’t do anything about [elevatorgate],” Damman said, other than “momentarily block” elevatorgate’s ability to send email notifications. Later, as the controversy heated up, Storify did turn off elevatorgate’s ability to send notifications. It also created a mechanism for users to flag inappropriate content. But elevatorgate’s Storify account still stands.

Damman says he still believes elevatorgate’s right to compile content is a free speech issue, and adds that Storify’s rules about harassment were written to cover people who are “writing original content that is upsetting people and mentioning those people by name.” That means that simply collecting tweets—not adding original commentary—doesn’t necessarily qualify as harassment, as far as Storify is concerned.

Storify isn’t the only tech company to cite the principle of free speech to defend its refusal to remove allegedly harassing content. But companies aren’t obliged to honor the First Amendment the same way the government is—they have the legal right to kick out or ban anyone they don’t want using their service.

“The idea that a social-media network should be entirely neutral is a myth,” says Jaclyn Friedman, the executive director for Women, Action and the Media, a nonprofit that advocates for gender equality in the media. “Neutral platforms are only neutral for straight white dudes. These companies need to make a decision: Do I want to be making money off of a platform where abusers and harassers feel more comfortable than the abused and harassed?”

Online harassment can have serious consequences. The International Journal of Cyber Criminology says aggressive online conduct can trigger PTSD and, in worst case scenarios, lead to violence and suicide—although not all harassment rises to that level. The federal Violence Against Women Act outlaws cyberbullying but leaves a lot of room for uncertainty; when harassment occurs, it can be difficult to get sites to remove the offending content in a timely manner, if at all. Unsurprisingly, it’s women, minorities, transgender people, and other marginalized groups who bear the brunt of the abuse.

Zerlina Maxwell, a political analyst and contributor to Ebony.com, says she received multiple death and rape threats via Twitter and Facebook after she argued on Fox News that teaching men not to rape is more likely to prevent assault than making sure women have guns. “Social networks don’t do enough, and I’d hate to see them respond appropriately [only] after someone is injured or harmed,” Maxwell says.

Recently, some social networks have taken action against harassing content. Earlier this month, Twitter issued an apology to female users that was prompted by rape threats against prominent UK-based historian Mary Beard, who in late July had tweeted about her desire to see Jane Austen on the £10 note. The harassment of Beard and her online supporters was so extreme that it triggered a call by some feminists to boycott Twitter outright in a campaign called TwitterSilence. (Other feminists argued that quitting Twitter was equivalent to letting harassers win.) In response to TwitterSilence, Tony Wang, Twitter’s UK general manager, said that “the abuse they’ve received is simply not acceptable. It’s not acceptable in the real world, and it’s not acceptable on Twitter.” (After this statement was issued, Beard received a bomb threat via Twitter.) Twitter has now made it easier for users to quickly report abuse and updated its terms of service to “clarify that we do not tolerate abusive behavior.” The company, however, did not provide more detail to Mother Jones about what happens once harassment is reported.

Facebook, which recently faced criticism for taking hours to remove graphic photos a man posted after allegedly murdering his wife, is experimenting with a different way to deal with harassment: Force harassers to use their real names, instead of letting them hide behind handles. After Women, Action and the Media criticized Facebook in May for failing to take down hate speech against women or remove photos depicting rape and domestic violence, the social network is now requiring sections that contain vulgar and offensive content to be clearly marked, and in some cases requiring the page’s administrator to post with his or her real name. “While it may be vulgar and offensive, distasteful content on its own does not violate our policies,” a Facebook spokeswoman tells Mother Jones. “[But] we try to react quickly to remove reported language or images that violate our terms and we try to make it very easy for people to report questionable content using links located throughout the site.”

“Since we brought this up with Facebook, I think we’ve seen improvements, and we’ve seen a lot of content come down,” says Friedman, WAM’s executive director. “But most of these social-media platforms still don’t place enough emphasis on solving this problem. They’re smart. If they wanted to, they could.” Maxwell, the political analyst who spoke on Fox News, says until Facebook responds faster to harassment, she’s taking screenshots and publicly outing men who send her death threats. “One guy sent me an apology after I did that, because his friends saw his Facebook picture and said, ‘Wait, you actually sent that to someone?'” she says.

As for Storify, Damman says the company is “more than willing” to help prevent and stop harassment on his site, but it’s a “very touchy topic,” and he wants to make sure the company takes the time to do the right thing. “What do you suggest?” he asked. “Seriously, do you have any suggestions?”

In response to questions from Mother Jones, a person claiming to “work with elevatorgate” provided access to a Google document in which elevatorgate addressed allegations that he has harassed women through Storify and other social networks—before revoking access to the document. “We’ve decided this story isn’t for us,” the intermediary emailed. “If you would like a villain for your piece, I would recommend finding somebody who is actually guilty of something. There are far worse people out there than a man who Storifies people’s tweets.”

In the Google doc briefly viewed by Mother Jones, elevatorgate wrote that he does not use his real name on social media because doing so could make him a target of harassment.

Update, 8/27/2013: Since our initial conversation with the person who provided, and then revoked, access to a Google doc with statements from elevatorgate, a version has been made public. You can see it here.