“It was great,” said Haberern in an interview with The Washington Post. “I was talking [trash], they were talking [trash],” he said, adding that such antics are typical and understood to be part of the culture.

Then, Haberern said, the tone of the conversation shifted dramatically. The other gamers started asking him whether he had ever testified in court or murdered anyone.

“They said they were from Maryland and that they were going to come and kill me,” he said.

By then it was 3 a.m., and Haberern decided to quit. One of the gamers in the party then sent him a message via Xbox Live. It contained his home address. Next his house phone rang, then his mother’s cellphone. A message appeared on his TV screen from one of the party members — it was asking why he didn’t answer.

“I felt almost unsafe in my own home, which is not a feeling I like to get from playing Xbox Live,” he said.

Haberern contacted Microsoft, which makes Xbox, via its website and reported what happened. Unsatisfied with that process, he then wrote a Reddit post, which would go viral, asking what recourse was available to him. The varied and ultimately unsatisfying answers centered on a common theme: There was no good solution.

Toxic behavior in competitive activities is not a new development, nor is it exclusive to video gaming, as social media users can attest. But its persistence amid a rapidly rising medium — both in terms of users and revenue — spotlights the question of why undesirable or, in some cases, criminal interactions have been so difficult for the video-game industry or law enforcement to eliminate. Now, with technological advances in online multiplayer games and video gaming’s increased prevalence worldwide, a growing percentage of the population is becoming unwittingly exposed to a slew of abusive acts that are only becoming more visible.

While game publishers, console makers, online voice-chat applications and even the FBI are aware of these issues and working to confront them, complications stemming from modern technology and gaming practices, freedom of speech concerns, and a lack of chargeable offenses on the legal side make toxic elements a challenge to extinguish.

As a result, and with more and more attention paid to the rapidly growing gaming and esports industry, news cycles are more frequently dotted with incidents like that of Anthony Gene Thomas, 41, of Broward County, Fla., who was arrested on Jan. 20 and faces 22 counts of child pornography, unlawful sex with a minor and other related charges after allegedly using the game Fortnite to solicit sexual encounters with underage players. Authorities in Florida say there may be up to 20 victims, according to local reports.

The Vox Media-owned site The Verge recently compiled multiple accounts of players who claimed to be harassed by others reenacting slavery-era behavior by targeting, rounding up and killing black characters in the massively popular and critically acclaimed game “Red Dead Redemption 2,” which takes place at the start of the 20th century. A November story by NPR also reported that hate groups were actively using video-game chats to recruit new, young members.

Gamers have also overheard real-world criminal activity conducted and captured on voice chats. In November, Daniel Enrique Fabian, 18, of New Port Richey, Fla., was arrested after a fellow gamer overheard Fabian allegedly raping a 15-year-old girl while playing Grand Theft Auto on PlayStation 4. Even though such incidents are not caused by the games themselves, some industry insiders say their status as a tool for bad actors engaging in toxic and criminal behavior online could significantly slow the growth of the video-game industry, much in the same way it did with social media platforms such as Twitter and Facebook.

Toxic origins

Though most toxic behavior online falls short of a felonious standard, gamers remain exposed to, and targeted by, all manner of verbal abuse. Such abuse has been particularly felt by women in the gaming space, even after a 2014 incident known as Gamergate — a widespread “Internet culture war” that featured brutal, orchestrated harassment campaigns against women — spotlighted the issue.

“They would tell me I’m fat and ugly and shouldn’t be on the Internet,” recalled Kristen “KittyPlays” Valnicek, 26, a top streamer and gamer, about how she was treated by other gamers online while growing up. Valnicek has close to 28 million views on her Twitch channel and won the Fortnite Korea Open last month with a teammate.

“The Call of Duty and Halo lobbies were absolutely disgusting,” she said, with people verbally abusing and threatening her.

Her parents would routinely notice her dejected look after playing, and things got so bad at one point that she went to her local police chief over threats of swatting, the practice of placing a bogus police call with the intent of catalyzing a SWAT team to respond to a person’s home. Such an incident in 2017 resulted in the death of Kansan Andrew Finch, 28, with the perpetrator, California-based Tyler Barriss, 26, ultimately pleading guilty to 51 federal counts in November 2018 and facing a sentence of 20 to 25 years in prison.

For gamers like Valnicek, fighting back against in-game abuse is tricky. Avoiding it altogether is virtually impossible if they want to enjoy a multiplayer game as it is meant to be played. Modern video gaming revolves heavily around multiplayer titles, such as Fortnite, League of Legends, Call of Duty and Overwatch, that rely on interpersonal communication to coordinate strategies, much like a real-world sports team. Unlike a game of pickup basketball, however, online games will match-make teams out of random players, identifiable only by pseudonyms, thereby giving strangers a direct channel to another player’s headset via the game’s voice chat. While such random interactions can be cordial and even lead to friendships, the smaller percentage of negative instances can be lasting and detrimental.

“It certainly looks like the effect on adolescents and children in general is quite negative,” said Joy Osofsky, head of pediatric mental health at Louisiana State University. “There’s a higher incidence of depression … those who are bullied can become bullies.”

She added that there have been reports suggesting a link between online bullying and increases in depression, and perhaps even suicide, noting that women experience higher rates of depression overall than men. There is also a tendency for such behavior to go unreported, particularly by younger gamers. The gaming audience also tends to skew young, with almost three-quarters of Americans ages 14 to 21 having either played or watched multiplayer online games or competitions in the previous year, according to a 2017 poll by The Washington Post and University of Massachusetts at Lowell.

Young people “don’t tend to report it, so it can go on and [they can] be victims for long periods of time,” Osofsky said, especially if parents don’t ask. “It’s still hard for young people, and older people, to talk about the fact ‘I have a problem and need help.’ ”

Anonymity also plays a major role as a motivator and enabling factor for toxic players, according to Osofsky and industry insiders.

“Why don’t people drive 100 miles an hour past a police car? Why do they brake? Why don’t they break out a line of coke in front of a cop? Because they know they’ll be caught,” said Michael Pachter, a research analyst at Wedbush Securities. “If you think you’ll be caught, you won’t be toxic.”

One immediate solution for gamers who don’t want to deal with a toxic player’s verbal harassment is to mute them, but that then compromises the team’s effectiveness in the game. This is seen as unsatisfactory even to those within the game-publishing industry. Fair Play Alliance, a “cross-industry initiative” of more than 90 gaming companies to share information in hopes of combating toxicity and improving gaming experiences, states on its website that “communication is often a fundamental part of gameplay or teamwork. ‘Just mute’ also puts the requirement to act on the person being harassed, after the damage has already been done.”

Players can also report toxic behavior using an in-game menu option. Both Xbox Live and PlayStation Network offer reporting options as well. Such reports can lead to abusive players seeing their accounts banned by certain games. In Korea, game publisher Blizzard recently banned more than 18,000 Overwatch accounts for toxic behavior. Those bans typically come with an end date, however, and any further recourse is far more opaque.

A mess proving hard to clean up

Game publishers have repeatedly tried to address the problem of toxicity but face a headwind of challenges common to Internet-based forums, especially those permitting anonymity.

Riot Games announced it would be studying and trying to reduce toxicity more than five years ago and began adding in-game tips to encourage positive interactions. That brought down verbal abuse by about 6 percent and offensive language by 11 percent, according to reporting by Scientific American, based on figures provided by Riot. Riot also implemented a system in which players were rewarded for sportsmanship and virtuous behavior, incentivizing kindness with in-game goods.

Blizzard also self-reported a success when Overwatch lead designer Jeff Kaplan posted statistics showing decreases in toxicity of more than 25 percent both in players being abusive and in matches containing abuse. This came after the company added features that encourage positive comments and allow gamers to create filters for whom they match with online.

Ubisoft, a publisher that has released Tom Clancy’s Rainbow Six and Assassin’s Creed games, began issuing instant bans for Rainbow Six Siege players in July for what the company regarded as offensive or abusive speech. Ubisoft halted the practice in December, returning to manual, instead of automated, enforcement. The company states in its terms of use that it “does not undertake to monitor or remove” content from its users.

Activision Blizzard (Call of Duty, Overwatch), Epic Games (Fortnite), Ubisoft (Rainbow Six), Xbox, Twitch and the Entertainment Software Association (ESA) all declined to comment for this article.

Though some of these tactics might offer hope, game publishers have struggled to decide on broader strategic issues, such as how to balance free speech with ensuring a safe environment, an issue shared by old-guard social networks like Facebook and Twitter.

Carlos Figueiredo, one of the founders of Fair Play Alliance who now works as the director of community trust and safety at Two Hat Security, believes identifying a mechanism to combat toxic elements would be a “rising tide that would benefit everyone” in the video-game industry. He added that he has been encouraged by the amount of collaboration he’s recently seen from software developers on the issue.

“The big change that has happened is folks have clued into the fact that community [in games] is so important, that [toxicity] has a cost to the community,” Figueiredo said. “It’s … crucial, to the business and to the health of the community and players.”

With the gaming industry booming — game sales topped out at a record $43.4 billion in revenue in 2018, according to an ESA/NPD Group press release, up 18 percent from 2017 — publishers would seem to have little incentive to put their houses in order. But industry observers see a potential brewing storm for the gaming industry. Asked about potential consequences, Pachter said, “same as Twitter, stalled growth.”

“People who have never been on [Twitter] hear it’s a nasty place and don’t want to expose themselves,” he said, in contrast to Facebook, which he said seemed like a safer community to would-be users.

The challenge in eliminating toxicity, Figueiredo believes, is that it doesn’t stem from the games themselves but rather the still-young culture of communicating over the Internet, where people are connected with others from different backgrounds, cultures, languages and interests. What’s accepted as common in one part of the world may be seen as unwelcome or negative in another.

“We haven’t been connected for that long, overall,” Figueiredo said, noting that a lack of social consequence was as much of a cause for the persistence of toxicity as anonymity. “We’re still figuring out a lot of things as we go.”

Pachter says combating toxicity will probably require a company to create a two-factor authentication system, one that requires gamers to provide multiple forms of identification, such as an email address and a cellphone number.

“I think it’s going to get worse before it gets better,” he said, expressing surprise at how little impact the 2017 fatal swatting of Andrew Finch had on gaming companies’ efforts to combat toxicity.

A lack of legal options

Short of actions taken by game publishers and gaming platforms, parties trying to wrestle with toxicity are left playing catch-up, with few ways to punish perpetrators. Ambiguities within the U.S. legal system have played a role in constraining the efforts of law enforcement during the era of online gaming.

When asked for comment, an FBI spokesman relayed that the unit that handles video-game-related threats “advised they are not in a position to discuss such matters yet.”

The spokesman said the FBI offers three ways to report online threats: via its Internet Crime Complaint Center (IC3) website, its public access line (1-800-CALL-FBI) and its local field offices.

Even when the FBI and law enforcement get involved, prosecuting cases emanating from online threats can be “a little tricky,” according to Barbara L. McQuade, professor from practice at the University of Michigan Law School and former U.S. attorney for the Eastern District of Michigan.

Citing the Supreme Court ruling in Elonis v. United States, she said federal prosecutors are limited in what charges can be brought against online harassment.

“It’s an evolving area,” McQuade said, which sometimes leaves prosecutors feeling as though they don’t have the legal “tools,” or precedents, to make a case.

McQuade said her office did charge someone for making a specific threat online under a statute that addresses making threats using any “instrument of interstate or foreign commerce.” One such instrument mentioned in this statute is a telegraph.

She said in most cases similar to Haberern’s situation, where a verbal threat is followed by phone calls and harassing messages, the FBI would visit the person alleged to have made a threat so as to assess its seriousness, in what she called a “discussion visit.”

“The more specific the harm, the more specific the target, the more likely it is to be charged,” she said.

McQuade said prosecutors, like the online platforms, must also make tough choices in this realm as they balance protecting the public with what she called a “cherished tradition in this country of protecting free speech.”

“Criminal charges are not the remedy in every case. Sometimes it’s mental-health assistance, sometimes it’s intervention, sometimes it’s a dissuasion interview,” she said. “But if law enforcement feels there is a true risk to public safety, or even one individual’s safety, that’s when they will typically intervene.”

With no clear methods to effectively monitor, halt or eliminate toxic behavior, many in the gaming community have simply tried to ignore it and continue playing anyway. Many of the titles cited most for toxic players remain the industry’s most popular.

After the death threats, Haberern said, he did not report anything to police, saying he prefers to avoid involving them in his life. But he questioned whether Microsoft, which responded to him after The Post’s interview, is creating a safe environment for kids.

As for Haberern, he was back to playing games the next day.

“I played right after that. But I definitely don’t accept invites from people,” he said.