The United States has belatedly awakened to the knowledge that it is, in effect, under armed attack. More than 30,000 people are purposely shot to death each year—more than 300,000 since the World Trade Center was destroyed in 2001. Rates of firearm-related violent crime have increased 26% since 2008.1 Physicians have joined others in demanding a strong response to this crisis. We look to scientific research to provide the evidence on which that response should be based. Such evidence should include a thorough exploration of risk and protective factors and, most importantly, controlled studies showing which interventions work to reduce firearm violence and why.

At a time when guidance is urgently needed, Fleegler and colleagues2 have examined the relationship between firearm laws and firearm-related deaths in the United States. Their state-level ecological study (a design in which the unit of analysis is a population in aggregate, not the individuals in it) correlated the presence or absence of 28 laws arguably related to firearm violence with firearm-related mortality rates. Their main finding is that having more laws on the books is associated with having lower rates of firearm-related homicide and suicide. This would be an important finding—if it were robust and if its meaning were clear.

Ecological studies of association are inherently weak, however; correlation does not imply causation. This fundamental limitation is beyond the power of the authors to redress. And there are additional concerns. The study's list and scoring system for firearm laws were based on information from the Law Center to Prevent Gun Violence (formerly the Legal Community against Violence) and the Brady Center to Prevent Gun Violence, both advocacy organizations. The scorecard has never been validated for research purposes, as the authors acknowledge. It does not account for variations between states in the specifics of their laws and includes no measure of whether or how effectively the states enforce them. The model is additive, making no provision for interactions between laws. The laws are evaluated altogether or in subsets, never individually.

The results also raise concerns. There was no change when the analysis incorporated weights for expected differences in the effect of individual laws. This is difficult to explain, unless the weights were poorly chosen—or the laws have no effect, making the weights irrelevant. Suicide accounted for 94% of the observed decrease in firearm-related mortality (6.25 of 6.64 deaths per 100,000). Intuitively, however, these laws should have their greatest effect on criminal violence; they were almost certainly enacted for that purpose.

When Fleegler et al2 accounted for the prevalence of firearm ownership, the association between firearm laws and firearm fatalities essentially disappeared. Perhaps these laws decrease mortality by decreasing firearm ownership, in which case firearm ownership mediates the association. But perhaps, and more plausibly, these laws are more readily enacted in states where the prevalence of firearm ownership is low—there will be less opposition to them—and firearm ownership confounds the association.
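The distinction drawn here—ownership as mediator versus ownership as confounder—can be made concrete with a toy simulation. The sketch below is purely illustrative and uses invented numbers, not the study's data: it assumes a world in which firearm ownership drives both law enactment (inversely) and mortality (directly), while the laws themselves have no effect. In that world, a crude state-level correlation makes the laws look protective, and adjusting for ownership makes the association vanish, exactly as in the confounding scenario described above.

```python
# Illustrative simulation of confounding in an ecological analysis.
# All parameters (50 states, ownership range, slopes, noise levels) are
# invented for illustration; laws have NO direct effect on mortality here.
import random

random.seed(0)

laws, own, mort = [], [], []
for _ in range(50):
    ownership = random.uniform(0.1, 0.6)              # prevalence of ownership
    n_laws = 28 * (0.7 - ownership) + random.gauss(0, 2)   # low ownership -> more laws
    mortality = 20 * ownership + random.gauss(0, 1)   # deaths per 100,000
    laws.append(max(0.0, n_laws))
    own.append(ownership)
    mort.append(mortality)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def residuals(ys, xs):
    """Residuals of ys after simple linear regression on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return [y - (my + b * (x - mx)) for x, y in zip(xs, ys)]

# Crude association: more laws, lower mortality (strongly negative).
print("crude laws vs mortality:", round(corr(laws, mort), 2))

# Adjusted for ownership (partial correlation via residuals): near zero,
# because ownership, not the laws, generated the mortality differences.
print("adjusted:", round(corr(residuals(laws, own), residuals(mort, own)), 2))
```

The same disappearing association would also be seen if ownership were a true mediator, which is why the ecological design cannot distinguish the two explanations on its own.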

Finally, the study could not evaluate the effects of any flow of firearms from states with fewer firearm laws to states with more, which might diminish or erase effects of the laws themselves. Reduction in movement of firearms from states with “loose” laws to those with “tight” laws helped to explain why the background check requirements of the Brady Handgun Violence Prevention Act had no discernible effect on firearm homicide.3

In the end, Fleegler et al2 provide no firm guidance. Do the laws work, or not? If so, which ones? Should policymakers enact the entire package? Some part? Which part? Frustrated policymakers sometimes ask to hear from 1-armed scientists, to avoid “on the one hand . . . on the other hand” summations of the evidence that end with competing recommendations. Here, there can be no recommendation at all; it is as if the scientists have both hands tied behind their backs.

In fact, that is precisely what has happened—not just to these investigators, who did well with the data available to them, but to firearm violence researchers generally. The disappearance of the Centers for Disease Control and Prevention (CDC) research program in this field in the 1990s has been well documented.4,5 A complementary program at the National Institute of Justice survived longer, thanks to the tenacity of its program officer, but ended after she retired in 2008.

Today, with almost no funding for firearm violence research, there are almost no researchers. Counting all academic disciplines together, no more than a dozen active, experienced investigators in the United States have focused their careers primarily on firearm violence. Only 2 are physicians. Only 1 has evaluated the effectiveness of an assault weapons ban.6

Why did this happen? In the early 1990s, scientists were producing evidence that might have been used to reform the nation's firearm policies. To those whose interests were threatened by such reforms, it made perfect sense to choke off the production of the evidence. This effort was led by Congressman Jay Dickey, self-described “point person for the NRA.”7 It succeeded. When rates of firearm violence were at historic highs and appeared to be increasing, the government abandoned its commitment to understanding the problem and devising evidence-based solutions.

This is not how the United States usually responds to a public health emergency. In the 1960s, the nation recognized a fast-growing crisis related to motor vehicle traffic fatalities. We created an agency, led by internist-epidemiologist William Haddon, MD, to launch an aggressive research effort and recommend and implement evidence-based interventions. The motor vehicle industry waged what the Supreme Court called the “regulatory equivalent of war” against airbags, one of the most important of those interventions.8 On airbags and other matters, the industry lost; the public's health and safety won. The effects of these contrasting approaches are clear (Figure).

Figure. Motor vehicle traffic fatality and firearm-related mortality rates in the United States, 1950-2011. Deaths due to legal intervention are excluded. All data for 1950 through 1980 may be found at the Vital Statistics of the United States website, http://www.cdc.gov/nchs/products/vsus.htm. Data for 1981 through 2010 are available at the CDC WISQARS website (Web-based Injury Statistics Query and Reporting System), http://www.cdc.gov/injury/wisqars/index.html. Firearm mortality data for 2011 may be found at Hoyert DL, Xu J, “Deaths: Preliminary Data for 2011,” Natl Vital Stat Rep, 2012;61:1-65. Motor vehicle mortality data for 2011 are from the National Center for Statistics and Analysis, National Highway Traffic Safety Administration, “2011 Motor Vehicle Crashes: Overview—Traffic Safety Facts Research Note,” Washington, DC: 2012; DOT HS 811 701.

Now, President Obama has directed the CDC to resume firearm violence research. To my knowledge, however, no CDC researcher has done more than occasional work in this field in 15 years. New funding will need approval by Congress, and the House of Representatives may be unsympathetic. The National Institute of Justice issued a solicitation for firearm violence research proposals in February 2013, but it will be a small beginning; the institute plans to fund no more than 3 projects.

To prevent firearm violence, our research effort must be substantial and sustained. Physician engagement in developing that effort is particularly important.9 Some projects must have direct relevance to policy-based and other potential interventions. Others need to deepen our basic understanding of the problem. Better data, and data systems, are needed. Interventions must be evaluated, and those evaluations must help guide further efforts. Until we revitalize firearm violence research, studies using available data will often be the best we have. They are not good enough.

Article Information

Correspondence: Dr Wintemute, School of Medicine, University of California, Davis, 2315 Stockton Blvd, Sacramento, CA 95817 (gjwintemute@ucdavis.edu).

Published Online: March 6, 2013. doi:10.1001/jamainternmed.2013.1292

Conflict of Interest Disclosures: None reported.