Introduction

When I was 11 years old, I begged my dad to buy me Fallout 3. Now, Fallout 3 is a highly immersive video game with a complex storyline that grapples with difficult moral questions. It also happens to be a game where you can shoot people and watch their heads explode. My dad, knowing this, prudently declined to purchase the game for his 11-year-old. So instead I went over to my friend’s house to play Call of Duty, a game where you can shoot people and watch their heads explode. Many modern games feature similar levels or themes of violence. Even Fortnite, the Gen Z pop culture phenomenon that reached 250 million players in 2019, involves killing other combatants with various armaments.

With the pervasive level of violence in many video games, it is no wonder that they are the perennial scapegoat of politicians and concerned parents everywhere. Since their inception, violent video games have been blamed for everything from mass shootings to moral corruption. Is this true? When people play violent video games, are they more inclined to commit violent crimes or participate in antisocial behavior? As it turns out, there is a large body of psychological literature exploring this very question. Let’s take a look.

Part 1: Sound and Fury

Generally speaking, research studies on violent video games fall into one of two camps. There’s the experimental short-term design, and then there’s the longitudinal survey.

Experimental designs basically have one group of people play violent video games, and one group play nonviolent video games. Once the two groups finish playing games, they complete a wide range of tasks or self-reports that are supposed to measure aggression. These tasks and reports vary between studies, but two common ones are the Hot Sauce Paradigm and the Taylor competitive reaction time test (TCRTT).

In the Hot Sauce Paradigm, you get participants to prepare hot sauce for other participants to consume. Presumably, participants feeling more aggressive will prepare larger quantities of hot sauce for other participants. They’ll also choose spicier sauces for the other participants. Similarly, the TCRTT has participants blast noise at other participants. Again, participants feeling more aggressive will presumably play louder, longer noises for the other participants. Pretty clever, right?

While there is some controversy over whether these measures are accurate proxies for aggression (the TCRTT has some substantial validity issues), they are simple and intuitive. And no matter how you choose to measure aggression, you get the same results. In the lab, playing violent video games leads to an increase in aggressive behavior, thoughts, and emotions. At least, this is the finding from an APA meta-analysis of 31 different studies from 2009 to 2013 (a meta-analysis is essentially a fancy statistical analysis that combines a bunch of different studies together).

I can already hear the furious protests by gamers reading this. As something of a gamer myself, my first reaction was to discount the relevance of these lab studies. After all, even if playing violent video games in a lab makes you squirt more hot sauce in a cup, or have more angry thoughts, what relevance does this have to real life? I could create a study where I just hit someone on the head with a mallet. I’m sure their aggressive thoughts and behavior would increase! Gee, I’m sure they’d make me drink a lot of hot sauce! What I really care about are long-term effects of violent video games. It’s no good to conclude that violent video games momentarily increase aggression for 15 seconds.

That brings us to the second type of research design in the literature: The longitudinal study. In a longitudinal design, you measure violent video game play along with some measure of aggression at one point in time, and then after some months or years have passed, collect the measures again. Then you see whether the amount of violent video games somebody plays at Time 1 predicts the amount of aggression at Time 2.

The disadvantage of this kind of study as opposed to an experimental study is that it’s more difficult to account for reverse causality. For example, what if people that play more violent video games are already more physically aggressive to start with, and are drawn to those games? The causality could be backwards.

Well, in their 2018 meta-analysis of 24 different studies, Prescott et al. account for that potential issue, along with several others. First, they only examine studies that have this longitudinal design, which bypasses the criticism that lab studies only explore short-term effects. Secondly, they only include studies that measure actual physical aggression (through self-report or teacher and parent reports) which bypasses the criticism about the irrelevance of the Hot Sauce Paradigm or the TCRTT. Finally, they only include studies that control for initial physical aggression by measuring aggression at Time 1. This accounts for the potential of reverse causality (more aggressive people could be drawn to more violent video games). And what do they find?

In adolescents (8.9-19.3 yrs), violent video games are positively correlated with physical aggression. This means that you can predict that someone who plays more violent video games will exhibit more physical aggression. This is a robust finding across all the studies they examine and is consistent with the APA meta-analysis. So, it seems the politicians were correct. Video games are murder simulators that teach kids to kill. Society would be better off without video games, and people should pursue more productive activities like reading their Bibles or becoming accountants. Right?

Part 2: Practical Matters

First of all, I’ve provided a very sanitized summary of the literature. Like any research topic there is disagreement between media violence researchers, but for some reason the tone of discourse here is downright nasty. Research Team A will publish a paper, which is answered by a written rebuttal by Team B, and then Team A will accuse Team B of being industry shills, and Team B will retort that Team A is blinded by ideology, and furthermore, their methodology sucks, and Team A will answer that Team B is refusing to face the facts and-

Ugh. To avoid wading into the muck, I chose meta-analyses written by researchers that weren’t at each other’s throats. And honestly at the end of the day, it seems most of the researchers studying violent video games arrive at similar statistics, but they just interpret them differently. How can that be? To answer that, I have to explain the difference between statistical significance and practical significance.

Imagine you’re trying to figure out whether having a tutor can boost your SAT score. So the treatment group gets a tutor, and the control group gets nothing. At the end of the study, you find that the difference is statistically significant. That means the tutored group scored higher on average, and the difference probably isn’t due to random chance. Okay, great! But wait a minute. What if that group’s score only goes up by half a point? That seems pretty irrelevant, right? In this case, we conclude there is no practical significance, even if there’s statistical significance. That is, statistical significance asks whether there’s any difference at all, and practical significance asks whether the difference matters.
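To make this distinction concrete, here’s a back-of-the-envelope sketch in Python. All the numbers are invented for illustration (a half-point gain, an SD of 100, a million students per group); the point is just that a big enough sample can make a trivial effect statistically significant.

```python
import math

# Hypothetical tutoring study: the tutored group scores 0.5 points
# higher on a test with standard deviation 100.
mean_diff = 0.5        # average score gain from tutoring
sd = 100.0             # standard deviation of scores
n_per_group = 1_000_000

# z-statistic for a two-sample comparison of means
z = mean_diff / (sd * math.sqrt(2 / n_per_group))

# two-sided p-value from the normal distribution
p = math.erfc(z / math.sqrt(2))

# Cohen's d: the effect size in standard-deviation units
d = mean_diff / sd

print(f"p = {p:.4f}, d = {d:.3f}")  # p = 0.0004, d = 0.005
```

With a sample that large, p comes out well below 0.05 (statistically significant), yet d = 0.005 is practically meaningless.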

So when I say that media violence researchers agree on the statistics, I mean that even the skeptic side finds a statistically significant relationship between violent video games and aggression. This finding is robust across the majority of the research. What the researchers seem to disagree on is the practical significance of this. In other words, yes, playing violent video games seems to have a causal relationship with aggression, but how strong is this relationship? If a kid playing violent video games is 0.05% more likely to hit a peer, does it matter? What if that kid is 40% more likely? Then it would definitely matter, right?

To measure practical significance, researchers use what is called the effect size (it literally measures how big the effect is). Two ways to measure effect size are Pearson’s correlation (also called the r value) and Cohen’s d. What’s important for us to know is that they can be converted from one to the other.
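For the curious, the standard two-group conversion is d = 2r / √(1 − r²). As a quick sketch (the r value here is my own illustrative number, not taken from either paper):

```python
import math

def r_to_d(r):
    """Convert Pearson's r to Cohen's d using the standard
    two-group formula: d = 2r / sqrt(1 - r^2)."""
    return 2 * r / math.sqrt(1 - r ** 2)

# For small effects, d is roughly twice r:
print(round(r_to_d(0.08), 2))  # 0.16
```

Notice that for small effects the conversion is nearly just "double the r value," which is a handy mental shortcut.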

This is good, because the APA meta-analysis reports Cohen’s d and the r value while the Prescott meta-analysis only reports the r value (more precisely, the standardized beta coefficient, which can be very roughly approximated as the r value). To my poor old brain, it’s a lot easier to interpret Cohen’s d than the r value, so we’re just going to convert everything to Cohen’s d.

After we do this, let’s look at the effect sizes in the two meta-analyses we’ve covered thus far:

The APA meta-analysis reported that for a composite measure of aggression, they got d = 0.31. For physical aggression alone, they got d = 0.37.

Meanwhile, the Prescott meta-analysis (which only looked at physical aggression), got d values ranging from 0.16 to 0.22 depending on how stringent their controls were (the more stringent the controls, the smaller the number).

Let’s combine these numbers and say that the magnitude of the effect of violent video games on aggression varies from d = 0.16 to d = 0.37.

Because it’s hard to really picture what these numbers mean, I am going to steal nearly verbatim a technique from my favorite blogger, Scott Alexander. What Scott does is translate these effect sizes to domains we’re more familiar with.

For instance, let’s pretend that eating spinach will increase your height. Let’s also assume height has a standard deviation of 3 inches.

Now, if spinach has an effect size of d = 0.16, like the lowest estimate of effect size from the Prescott paper, eating it would make you grow 0.48 inches (because 0.16 * 3 = 0.48).

And if spinach has an effect size of d = 0.37, like the estimate from the APA paper, eating it would make you grow 1.11 inches (0.37 * 3 = 1.11).

Basically, if one group plays violent video games and one group does not, the difference in aggression between the groups is analogous to 0.48-1.11 inches in height. To give you an even better feel for these effect sizes, let’s try translating them to body weight.

Pretend you have a magic weight loss pill. Let’s also assume the standard deviation of weights is 26 lb. An effect size of d = 0.16 is like losing 4.16 lb. An effect size of d = 0.37 is like losing 9.62 lb. So again, the difference in aggression between a violent video game group and a nonviolent video game group is analogous to 4.16-9.62 lb.
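If you want to play with these translations yourself, the arithmetic is just d times the standard deviation. A minimal sketch, using the same assumed SDs as the examples above (3 inches for height, 26 lb for weight, which is what the pound figures work out to):

```python
def translate_effect(d, sd):
    """Translate an effect size d into raw units: difference = d * SD."""
    return d * sd

# Height, SD assumed to be 3 inches
print(f"{translate_effect(0.16, 3):.2f} in")   # 0.48 in
print(f"{translate_effect(0.37, 3):.2f} in")   # 1.11 in

# Weight, SD assumed to be 26 lb
print(f"{translate_effect(0.16, 26):.2f} lb")  # 4.16 lb
print(f"{translate_effect(0.37, 26):.2f} lb")  # 9.62 lb
```

The same one-liner works for any domain where you know (or are willing to assume) the standard deviation.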

Keep in mind, while the APA estimate of 0.37 includes lab studies with proxies for aggression like the Hot Sauce Paradigm, the Prescott estimate of 0.16 is derived exclusively from longitudinal studies measuring real physical aggression. Thus, I feel it’s reasonable to conclude that 0.16 is closer to the true effect size of violent video games on physical aggression over a long period of time.

So, let’s ask again, are these effect sizes big enough to matter? Well, because I am extremely biased in favor of video games, I find the lower Prescott estimate pretty trivial, but I do concede an effect size of 0.37 like the APA estimate is big enough to matter. Still, the average effect size in social psychology is around d = 0.62, making 0.16 and 0.37 seem paltry in comparison. In any case, the fact that researchers still dispute whether these effect sizes are practically significant should put the whole conversation in perspective. When politicians or activists argue that video games are murder simulators, it seems a tad hyperbolic, don’t you think?

Part 3: Murder Simulation

In fact, the politician/activist case gets even weaker. Thus far, all we’ve discussed is whether there’s a connection between violent video games and aggression. What about the link with violent crime? To be honest, there doesn’t seem to be all that much research on this topic. The APA meta-analysis I looked at earlier concluded that there wasn’t enough evidence in the research they looked at to draw any conclusions. However, my Google-fu did turn up two fairly recent papers.

The first study was by Markey et al. in 2015. They ran 4 longitudinal analyses using crime data from the FBI as their dependent variable along with sales of video games, release dates of popular violent video games, and Internet keyword searches for violent video game guides as independent variables. As they show in their paper, video game sales have risen over the last 36 years at the same time that homicide and other violent crimes have gone down.

Now, this alone is not proof of much, because it’s possible that any negative effect of video game sales is dwarfed by other factors that have brought crime rates down. The authors are aware of this. Using really complicated modeling techniques, they control for this possibility. After extensive number crunching, they find almost no statistically significant relationships between violent video games and violent crime rates. Notice the “almost” in that sentence. They do find a few statistically significant relationships, and what they find is that violent video game sales are related to decreases in homicides and assaults.

Wait, what? That’s right, folks. They find that when sales or searches go up for violent video games, violent crimes go down. Their effect sizes for this phenomenon are also larger in magnitude than any we’ve looked at previously, with a d hovering around -0.63. Now, one critique of this study I have is that one of their independent variables is video game sales in general, not violent video games specifically. However, their other two variables compensate for this.

And anyway, there is actually another 2016 longitudinal study by Cunningham et al. which does use violent video game sales as their independent variable, and they find the same result. Using even more complex modeling techniques than the previous paper, they find a statistically significant decrease in crime when video game sales are higher (this effect size is very small). The authors here posit that violent video games either act as an outlet for aggressive people, or simply take up their time so they commit fewer crimes.

Ultimately, we’ll have to wait for more research and meta-analyses to come in. But for now, these two studies are solid early indicators that violent video games don’t have much to do with violent crime.

Throughout this post, we’ve explored the existing literature on violent video games, aggression, and violent crime. I think it’s safe to say that there is a causal effect of violent video games on aggression, although whether this effect is big enough to matter remains up for debate. I also think it’s safe to conclude that whatever extreme rhetoric politicians or parents or activists throw around on this topic, the research paints a more nuanced picture. In any case, you should take everything I’ve concluded with a massive grain of salt because I am such a massive video game fanboy. Read the papers I linked, and see if you come to a different conclusion.



