Some of Riot’s experiments are causing the game to evolve. For example, one product is a restricted chat mode that limits the number of messages abusive players can type per match. It’s a temporary punishment that has led to a noticeable improvement in player behavior afterward: on average, individuals who went through a period of restricted chat saw 20 percent fewer abuse reports filed by other players. The restricted chat approach also proved 4 percent more effective at improving player behavior than the usual punishment method of temporarily banning toxic players. Even the smallest improvements in player behavior can make a huge difference in an online game that attracts 67 million players every month.

Lin and the Riot team cite plenty of statistics to back the idea that their experiments and disciplinary measures are working. But it’s often tough for individual players, myself included, to tell whether we are encountering fewer toxic players on the whole. Still, I continue to see players frequently reporting toxic behavior and encouraging others to do the same, which suggests they believe in the system.

Riot is not alone in collecting data about human behavior on such a massive scale. Tech companies such as Amazon, Google and Facebook also commonly test thousands or millions of customers’ reactions to changes in the popular online services each company provides.

“If those processes could at least be opened to academic researchers — or at least to observation — research in human behavior would advance very rapidly and change the character of how research could be done,” says Brian Nosek, a social psychologist at the University of Virginia. “You could imagine with this sort of iterative process that science would just come out, boom, boom, boom.”

Nosek helped pioneer the big data approach to social psychology. As a graduate student at Yale University in 1998, he helped set up a virtual laboratory through a website called Project Implicit during the early days of the modern Internet. The Project Implicit website hosted a series of fun online tests and questions that helped Nosek and his colleagues survey people’s biases about race, gender or sex. “In the first three days, I collected more data than what I could collect in an ordinary lab throughout my entire career,” Nosek says. “Since then, 16 million study sessions have been completed on the site. That has totally changed my area of research, because I could get so much data about particular psychological effects.”

The Project Implicit website currently gets about 20,000 participants per week, yet that number likely pales in comparison to the number of people using Amazon or Facebook services every second. Nosek and other academic researchers remain frustrated by their lack of access to such private data troves. “I don’t have the impression that many are particularly open,” Nosek says.

Riot could be an exception. Last year, the company launched six research collaborations with universities, including a project with the University of York in the UK that looked at how the names of League of Legends gamers reflected real-life characteristics. A collaboration with MIT aims to measure teamwork among five strangers on the same League of Legends team and develop a “collective intelligence” test that can predict performance on certain tasks.

“They’re the only game company being so public about their research,” says Madigan, the games psychology expert. “They almost use it as a marketing tool to say, ‘Hey, we’re trying to make our community better and your experience with the community better.’”

That openness may have helped Riot avoid the controversy that could be sparked by the idea of running experiments on unwitting gamers. “Most of their research is about how do we get people to not be assholes,” Madigan says. “Who’s going to object to that aside from hardcore trolls?”

By comparison, Facebook was not so lucky when it collaborated with Cornell University researchers on an “emotional contagion” study that was published in the journal PNAS in 2014. Many Facebook users were outraged that the experiment had tweaked Facebook news feeds to reduce the visibility of either positive or negative emotional content posted by friends, a change that led to cries that the network was manipulating the emotions of its unwitting users. People questioned the lack of informed consent and whether such an experiment should have cleared review by an institutional review board (IRB), the independent ethics committee for university research.

In fact, Cornell University’s IRB had taken a look, but concluded that the study did not require full review. Why? Because it was Facebook’s team alone that carried out the experimental manipulation of news feeds and collected the results before handing them over to Cornell’s main researcher. The ethical regulations for academic research don’t apply to “human subjects research” conducted by companies alone, writes Michelle Meyer, director of bioethics policy in the Union Graduate College-Icahn School of Medicine at Mount Sinai Bioethics Program.