The Hack Harassment initiative — launched in partnership with our sister site Recode, our parent company Vox Media, and Lady Gaga's Born This Way Foundation — is an attempt to find solutions to internet harassment, starting with a series of hackathons through the first half of 2016. Held both online and offline, the sessions will involve members of the tech industry, the media, the nonprofit world, and academia. They're designed to raise awareness and find potential technological solutions to harassment, which will be presented at the Code Conference that starts May 31st. For a problem that's inspired a lot of talk and few real solutions, the initiative itself is so far still just talk, but its organizers promise that more concrete change is coming.

Huge numbers of people have met with extreme hostility online, with women, people of color, and other underrepresented groups particularly vulnerable. "It really spawned out of our diversity discussion," says Intel CEO Brian Krzanich of the initiative. "As you kind of take the next step in diversity, you now need to make it a safe and comfortable place for those people to work in — so harassment was a natural next step to go work on."

According to a 2014 survey by the Pew Research Center, 40 percent of adult internet users have personally experienced some form of harassment. While about half of the respondents who had been harassed reported less severe behaviors, like being called offensive names or purposely embarrassed, the other half had faced stalking, physical threats, sustained harassment, or sexual harassment. For young users between 18 and 24, the problems were especially pronounced: 70 percent had been harassed in some way, almost a quarter had received physical threats, and one in five had been sexually harassed. "Our young people are spending more time online than ever before, making it more important than ever to face this problem head on," says Cynthia Germanotta, president of the Born This Way Foundation and mother of Lady Gaga.

And the Pew study didn't address some specific, more extreme forms of abuse: non-consensually posting nude photos or personal information, hacking into accounts, and "swatting" hoaxes. A survey of 300 tech industry professionals commissioned by Vox and Intel found that 8 percent had some kind of experience with swatting calls, 15 percent had faced hacking attempts, and 13 percent had had personal information exposed online. Both surveys broadly found that people think online harassment has negative effects, but the numbers alone don't capture the extent of the harm this behavior causes.
By far the most common venues for harassment, according to Pew's survey, were social networking sites and apps: two-thirds of people who had been harassed pointed to social media, while smaller numbers cited website comment sections, online gaming, and email. But even when these platforms have consistent anti-harassment policies, they've struggled to enforce them. Twitter in particular has been singled out for its anemic response to attacks on users, especially because it's so easy for harassers to create new accounts as soon as they're banned.

For Recode, Hack Harassment is a natural follow-up to last year's Code Conference, which made Silicon Valley diversity a major topic of discussion. "When people start to talk about topics, that's when things begin to change," says Recode co-founder Kara Swisher. After the Gamergate controversy made international news in 2014, though, it can be hard to believe that raising awareness is still necessary. The past few years have seen companies, politicians, and activists denounce harassment and put forward tentative strategies. State and national legislators are attempting to punish some of the most clearly illegal practices: members of Congress have introduced multiple anti-swatting bills, most recently in November by Rep. Katherine Clark (D-MA), who has also pushed the Justice Department to investigate and prosecute online threats more aggressively. After heavy criticism, Twitter attempted to make it easier for users to report harassment. Even Reddit, a platform famous for hands-off moderation, began banning its most notoriously vicious boards last year. And an entire day of the upcoming SXSW Interactive festival is dedicated to anti-harassment talks.

But Swisher says that in the tech community, discussions often flare up around individual controversies and fade soon after, without meaningful change. "Everybody gives lip service to a lot of things and then nothing actually happens," she says. "And the kind of stuff that happens during online harassment really damages people." She suggests it's too easy for people to slip into an abstract debate that pits defenders of free speech against opponents of online harassment. "I think one of the important things is to show people exactly what is happening instead of talking about the bigger issues," she says. In one case, she recalls, Girls creator Lena Dunham described in an interview quitting Twitter because of misogynist abuse. When Swisher saw people criticize Dunham for being thin-skinned, she responded with copies of the tweets Dunham was describing, including threats of rape and other violence. "I said, 'Okay, is this okay?' And of course everyone was like — 'Oh my god, I had no idea.'" People without Dunham's privilege or visibility, meanwhile, may have their abuse outright ignored, or even be told they've brought it on themselves.

Twitter, YouTube, and other platforms are already taken to task so frequently, though, that concrete solutions seem more important than awareness-raising. It's not clear whether they'll be participating in the hackathons, or what kinds of solutions they might be willing to implement. But Hack Harassment is confident that there are technological answers to at least some of the problems. Its early suggestions involve blocking the IP addresses of known harassers and giving users more filtering tools, two options that survey participants judged effective. Filtering has, in fact, proven one of the best solutions on Twitter so far.
The crowdsourced tool Block Together lets users share lists of offenders or automatically stop seeing accounts that raise red flags, and Twitter later adopted shared block lists as an official feature. Some problems also have clear technological causes: game developer and anti-harassment activist Zoe Quinn recently complained that YouTube and Facebook automatically lined her posts with links to tirades like "Zoe Quinn, a vapid idiot." Even if there's no surefire way to block abuse, platforms could at least avoid accidentally promoting it. Swisher believes the problem isn't just that there aren't enough options right now, but that the ones that exist are either hidden or hard to use. "If I can't figure it out, the average teenager who's getting pilloried on Facebook or Twitter or whatever has no hope of being able to deal with this except to sign off," she says. "And that shouldn't be the only choice you have, to sign off."

Game studio Riot, creator of the hugely successful e-sport League of Legends, has made some of the most promising breakthroughs in fighting toxic community behavior. A dedicated "social systems" team finds and tests ways to make people act better online, sometimes getting results from changes as simple as making voice chat opt-in instead of opt-out. Riot's solutions don't apply everywhere, but the studio has shown that basic structural changes can pay off.
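The shared block lists described above are, at their core, a simple idea: pool the accounts that many users have flagged, then hide those accounts from each subscriber's feed. This minimal sketch illustrates the concept only; all names and data here are hypothetical, and it reflects no platform's actual API.

```python
# Hypothetical sketch of shared block-list filtering, the approach tools
# like Block Together popularized. Not any platform's real implementation.

def merge_block_lists(*lists):
    """Combine several users' shared block lists into one blocked set."""
    blocked = set()
    for lst in lists:
        blocked.update(lst)
    return blocked

def filter_timeline(posts, blocked):
    """Return only the posts whose authors are not in the blocked set."""
    return [post for post in posts if post["author"] not in blocked]

# A subscriber merges their own list with lists shared by others.
my_list = {"troll_account_1"}
shared_list = {"troll_account_2", "troll_account_3"}
blocked = merge_block_lists(my_list, shared_list)

timeline = [
    {"author": "friend_a", "text": "hello"},
    {"author": "troll_account_2", "text": "abuse"},
]
visible = filter_timeline(timeline, blocked)
# visible now contains only the post from friend_a
```

The appeal of this design is that one person's moderation work benefits every subscriber, which is roughly what Twitter formalized when it made shared block lists an official feature.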