Anyone who has ever played a game online with strangers knows how toxic it can get: there are players who seem to enjoy being jerks more than the game itself. Racist, sexist, and homophobic slurs are the norm on Xbox Live and in-game chat. But it turns out that those extreme cases, the players who just love to be mean, only account for a very small percentage of the negative behavior, according to Jeffrey Lin, the lead designer of social systems at League of Legends developer Riot Games. More than 90 percent of the vitriol comes from normal players who occasionally act out while playing, and it’s Lin’s job to figure out how to stop those incidents from happening.

"How do you get them, when they have a bad day, to not rage in the middle of a game? That was the real problem," he says.

Lin started out as a PhD student at the University of Washington before joining Riot, where he and his team have been using neuroscience and machine learning in an attempt to curb the rampant toxicity that tends to plague online communities. Because it's possible to gather so much data about the way players think and act, online games like League are also an ideal place for researching how these behaviors work. In fact, when Lin first started the job in 2012, there was so much information that it proved a bit overwhelming. "We were looking over the data and trying to see how we even start working on this problem," he says.

"The vast majority of players behave really positively."

The conventional wisdom at the time was that most abuse came from a few particularly toxic players, and that if you simply banned them from the game, the problem would be solved. But after looking at the data, Lin realized that wouldn't actually accomplish much. Those kinds of overly toxic players only account for about 1 percent of the community, and even if you ban them all, it still only reduces negative behavior like harassment by 5 percent. The problem was everybody else. "The vast majority of players behave really positively, on average," he explains. "But when they have a bad day at school or work and then go into the game, that's when they act in a negative way."

Fixing it involved multiple solutions. For one thing, the game has added an honor system, in which feedback from the community shows up on a player's profile, so that you can see if they're someone you'd want to play with. When you play with or against someone, you can award them medals saying that they're a good teammate, or just plain friendly, which proved to be very effective for the majority of players. "Reputation means a lot," says Lin. And it's especially important in a game like League, where the massive player base of more than 67 million people means that there's a good chance you might play someone once and then never encounter them again. The honor system gives players an incentive to be nice even to strangers.
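At its core, the honor system amounts to tallying post-game feedback on a profile that prospective teammates can see. Here is a deliberately minimal sketch of that idea, not Riot's implementation; the second medal name is invented for illustration:

```python
from collections import Counter

# Hypothetical sketch of honor medals accumulating on a player profile.
# Riot's real system is server-side and more nuanced; only "good teammate"
# and "friendly" medals are mentioned in the article.

def award_medal(profile: Counter, medal: str) -> Counter:
    """Record one piece of post-game feedback on a player's profile."""
    profile[medal] += 1
    return profile

# Feedback from two games accumulating on one profile:
profile = Counter()
for medal in ["good teammate", "friendly", "good teammate"]:
    award_medal(profile, medal)
```

The value of the tally is social rather than mechanical: because it is visible, it gives strangers a reason to trust (and emulate) a well-honored player.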

Players can also file reports when someone acts in an offensive or harmful way, and the team at Riot Games uses machine learning algorithms to find racist, homophobic, and otherwise abusive language in in-game chats. The systems are still learning, but they can already decipher multiple languages and parse very specific League-only jargon. More important than just finding the bad behavior, though, is figuring out how to reform it. Lin says that the 1 percent of seriously toxic players have no real interest in changing, but pretty much everyone else will actually stop their negative behavior if you act quickly. The trick is being open with them.
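As a very rough sketch of what automated chat screening involves, here is a naive keyword filter standing in for Riot's actual multilingual machine learning models; the flagged phrases are invented placeholders, not real training data:

```python
# Hypothetical, highly simplified stand-in for ML-based chat screening.
# A real classifier learns from reported chat logs across many languages;
# this sketch just checks messages against a small list of flagged phrases.

FLAGGED_PHRASES = {"uninstall the game", "report this idiot"}  # invented examples

def should_review(message: str) -> bool:
    """Return True if the message should be queued for moderator review."""
    text = message.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)
```

A real system would also need to handle misspellings, slang, and the League-specific jargon the article mentions, which is exactly where machine learning beats a fixed word list.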

"Reputation means a lot."

Players are issued warnings for their behavior based on reports from both other players and the algorithms. For players who were regularly reported for bad behavior, Lin's team found that about 50 percent of them didn't offend again if a Riot moderator explained what they actually did wrong, and that number jumped to 70 percent if the explanation included evidence like chat logs. As for the remaining 30 percent, while they might offend again later on, it takes only three penalties to get them on the straight and narrow. At that point the reform rate jumps to 90 percent.
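The escalation Lin describes can be pictured as a simple tiered ladder. This is an illustrative guess at the structure, not Riot's code; the article says penalties range from email warnings to lengthy bans, and the intermediate tier names here are assumptions:

```python
# Illustrative sketch of escalating penalties for repeat offenders.
# The article confirms only the endpoints (email warnings, lengthy bans);
# the middle tiers are hypothetical.

PENALTY_TIERS = ["email warning", "chat restriction", "temporary ban", "lengthy ban"]

def next_penalty(prior_offenses: int) -> str:
    """Escalate with each confirmed offense, capping at the top tier."""
    tier = min(prior_offenses, len(PENALTY_TIERS) - 1)
    return PENALTY_TIERS[tier]
```

The point of the ladder is the data Lin cites: most players reform after the first, evidence-backed warning, so the harsher tiers are reserved for the small group that reoffends.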

The developer also introduced a "tribunal" system in which random members of the League community are chosen to judge whether a consistently reported player should be penalized, putting some of the power in the hands of the people. A recent update lets those judges take into account both negative and positive activity, providing a more balanced perspective. Penalties range from email warnings to lengthy bans, and a history of good behavior can make judges more lenient to new offenders. And while the online abuse can still be bad, it’s now much less frequent. According to Lin's data, after three years of work, today only about 2 percent of League games feature any form of negative behavior.
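One way to picture the tribunal's decision rule is a majority vote with a leniency bonus for a clean record. This is purely a sketch of the idea described above, assuming simple vote counting; the threshold values are invented:

```python
# Hypothetical sketch of a tribunal-style verdict. Riot's real system
# weighs both negative and positive activity; here "good_history" simply
# raises the bar a penalizing majority must clear.

def tribunal_verdict(votes_punish: int, votes_pardon: int, good_history: bool) -> str:
    """Decide by vote margin, requiring a larger margin for players
    with a history of good behavior (leniency)."""
    margin = votes_punish - votes_pardon
    threshold = 3 if good_history else 0  # invented leniency threshold
    return "penalize" if margin > threshold else "pardon"
```

Under this toy rule, a narrow penalizing majority convicts a player with no history of good behavior but spares one with a clean record, mirroring the leniency the updated tribunal allows.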

"We're not alone in this space."

Of course, the problem of online abuse and harassment extends far beyond League. Similar problems can be found in other games, social networks, or pretty much anywhere else where people gather on the internet. So even though all of this work has made League a better game overall, Lin believes that it could also help out the online world as a whole. "We realized that if we really want to improve these online interactions, we have to go beyond League of Legends," he says. "We're not alone in this space."

To that end, Riot has been meeting with other game developers, internet companies, and even schools to share data and best practices. Lin believes that in five years every game developer working on a multiplayer title will have a team similar to his at Riot. And after a few years, who knows, maybe that teenager from Idaho yelling racial slurs in Call of Duty matches will become the exception rather than the rule.

"Wouldn't it be cool if all of the other studios joined in and did these kinds of things?" Lin asks. "Then when we have our next games, we'll already have positive communities."