A couple of hours after the Christchurch massacre, I was on the phone with Whitney Phillips, a Syracuse professor whose research focuses on online extremists and media manipulators. Toward the end of the call, our conversation took an unexpected turn.

Phillips said she was exhausted and distressed, and that she felt overwhelmed by the nature of her work. She described a “soul sucking” feeling stemming in part from an ethical conundrum tied to researching the ills of online extremism and amplification.

Paris Martineau covers platforms, online influence, and social media manipulation for WIRED.

In a connected, searchable world, it’s hard to share information about extremists and their tactics without also sharing their toxic views. Too often, actions intended to stem the spread of false and dangerous ideologies only make things worse.

Other researchers in the field describe similar experiences. Feelings of helplessness and symptoms associated with post-traumatic stress disorder—like anxiety, guilt, and anhedonia—are on the rise, they said, as warnings go unheeded and their hopes for constructive change are dashed time and time again.

“We are in a time where a lot of things feel futile,” says Alice Marwick, a media and technology researcher and professor at the University of North Carolina Chapel Hill. “We're up against a set of bad things that just keep getting worse.” Marwick co-authored Data & Society’s 2017 flagship report, Media Manipulation and Disinformation Online with researcher Rebecca Lewis.

In a way, their angst reflects that of the tech world at large. Many researchers in the field cut their teeth as techno-optimists, studying the positive aspects of the internet—like bringing people together to enhance creativity or further democratic protest, à la the Arab Spring—says Marwick. But it didn’t last.

The past decade has been an exercise in dystopian comeuppance to the utopian discourse of the ’90s and ’00s. Consider Gamergate, the Internet Research Agency, fake news, the internet-fueled rise of the so-called alt-right, Pizzagate, QAnon, Elsagate and the ongoing horrors of kids’ YouTube, Facebook’s role in fanning the flames of genocide, Cambridge Analytica, and so much more.

“In many ways, I think it [the malaise] is a bit about us being let down by something that many of us really truly believed in,” says Marwick. Even those who were more realistic about tech—and foresaw its misuse—are stunned by the extent of the problem, she says. “You have to come to terms with the fact that not only were you wrong, but even the bad consequences that many of us did foretell were nowhere near as bad as the actual consequences that either happened or are going to happen.”

Worst of all, there don’t appear to be any solutions. The spread of disinformation and rise of online extremism stem from a complex mix of many factors. And the most common suggestions seem to underestimate the scope of the problem, researchers said.

Some actions—like adding content moderators on platforms like Facebook, developing more advanced auto-filtering systems to take down problematic posts, or deploying fact-checking programs to flag and derank disinformation—rely too much on platforms’ ability to police themselves, some researchers say. “It's so easy to start fetishizing the technical aspects of these problems, but these are first and foremost social issues” that are too complex to be solved by tweaking an algorithm, says Lewis.

Other approaches, like media literacy programs, may be ineffective, and place too much responsibility on users. Both sets of tactics ignore messier, less quantifiable parts of the problem: the polarized digital economy, in which success is predicated on attracting the most eyeballs; the way rejecting “mainstream” truths has become a form of social identity; and the challenges of determining the impact of disinformation.

“It's not that one of our systems is broken; it's not even that all of our systems are broken,” says Phillips. “It's that all of our systems are working ... toward the spread of polluted information and the undermining of democratic participation.”

The internet is an unreliable narrator, and any attempt to interpret online actions with the same sincerity afforded to those in the real world is fraught. Some seemingly influential accounts, like @thebradfordfile—which has over 125,000 followers on Twitter, and has been cited by outlets like The Washington Post and Salon as an example of far-right thinking—are shams, and only appear to wield influence thanks to paid engagement schemes, tweet-boosting DM rooms, and other means of artificial amplification. The metrics by which we gauge an idea or individual’s worth online are easily manipulable. Likes, retweets, views, followers, comments, and the like can all be bought.