This research builds on three decades of effort to produce national estimates of the amount and rate of force used by law enforcement officers in the United States. Prior efforts to produce national estimates have suffered from poor and inconsistent measurements of force, small and unrepresentative samples, low survey and/or item response rates, and disparate reporting of rates of force. The present study employs data from a nationally representative survey of state and local law enforcement agencies that has a high survey response rate as well as a relatively high rate of reporting uses of force. Using data on arrests for violent offenses and the number of sworn officers to impute missing data on uses of force, we estimate a total of 337,590 use of physical force incidents among State and local law enforcement agencies during 2012 with a 95 percent confidence interval of +/- 10,470 incidents or +/- 3.1 percent. This article reports the extent to which the number and rate of force incidents vary by the type and size of law enforcement agencies. Our findings demonstrate the willingness of a large proportion of law enforcement agencies to voluntarily report the amount of force used by their officers and the relative strengths and weaknesses of the Law Enforcement Management and Administrative Statistics (LEMAS) program to produce nationally representative information about police behavior.

Introduction

The authority to use physical force is one of the most distinguishing and controversial aspects of American policing. While use of force has been a topic of both public and scholarly interest for many years, this interest intensified in the wake of the 2014 deaths of Eric Garner and Michael Brown, and several subsequent controversial fatal police actions. In addition to public protests and a kind of ‘crisis of confidence’ in the police, these events also put the spotlight on a long-standing problem: the lack of national data about police use of force. When individuals ranging from members of the public to members of Congress asked, “How often does this happen?” the disappointing answer they received was, “We don’t know.” Later that same year, President Obama created a Task Force on 21st Century Policing whose members were charged with examining such issues as how to strengthen public trust and police legitimacy. One of the many recommendations of the President’s Task Force was that police department use of force policies should require the collection and reporting of data on all officer-involved shootings to the Federal government. Separately, the Federal Bureau of Investigation (FBI) has since taken some initial steps toward collecting data on fatal uses of force from State and local agencies.

There is an extensive body of research about the amount of force used and the characteristics of the police, the residents, the incidents and the community that are associated with more or less force. For detailed reviews of this literature, see [1–3]. However, most of this research is based on either a single or a small number of jurisdictions or parts of jurisdictions. Another difficulty is that the data sources that are employed to measure force vary widely; studies use individual police reports, surveys of law enforcement agencies, systematic observations of police public contacts, interviews with residents or suspects, surveys of the general population and compilations of media accounts [2]. Our understanding of the amount of force used by American police is limited further because of the lack of a consistent definition or measurement of force from jurisdiction to jurisdiction, from study to study, and over time. For instance, some studies are limited to police shootings [4], deaths resulting from police shootings [5] or incidents where various types of weapons are used [6]. At the other extreme are studies that count shouting, threats of arrest, and other types of language as a use of force [7] and some studies of force using systematic field observations include hundreds of use of force incidents, none of which involve the actual use of a weapon by a police officer [8].

Use of force measurement is also complicated by variation in the units of analysis. Studies can report the number of incidents where one or more types of force are used, the number of types of force used in particular incidents, the number of officers that use force in a particular incident, or some combination of these three. Understanding the amount of force is complicated further by diverse computations of the rate of force. Numerous studies have computed rates of force by dividing the number of force incidents by the size of the resident population [9], the number of sworn officers [10], arrests [11], police-public contacts [7], potentially violent encounters [12], or calls for service [13].
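To make the denominator problem concrete, the following sketch (with entirely hypothetical figures) computes a "rate of force" from one fixed incident count using each of the denominators named above:

```python
# Illustration (hypothetical figures): the same count of force incidents
# produces very different "rates of force" depending on the denominator,
# mirroring the denominators used across the studies cited above.
force_incidents = 500

denominators = {
    "resident population": 250_000,
    "sworn officers": 400,
    "arrests": 12_000,
    "police-public contacts": 60_000,
    "calls for service": 150_000,
}

for name, n in denominators.items():
    rate = force_incidents / n * 100  # rate expressed as a percentage
    print(f"{rate:8.3f} percent of {name}")
```

With these invented inputs, the identical 500 incidents appear as anything from a fraction of a percent of residents to well over 100 percent of sworn officers, which is why rates from different studies are not directly comparable.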

Efforts to understand the impact of these diverse methods and measures are hindered by the fact that no study uses more than one source of data, one measure of force, or one rate of force; it is therefore difficult to calibrate across studies the impact of different data sources, measures, or rate computations. Under these conditions, it is not surprising that there is a great deal of variety in the available reports with regard to what constitutes force and exactly how much force is used by American police agencies. Among 36 studies recently reviewed [2], the smallest average rate of force reported (0.1 percent of calls for service [14]) is three hundred times smaller than the highest reported rate of force (30.0 percent of suspect encounters [15]). The wide range of reported rates of force suggests that there are wildly different understandings of what does and does not constitute “force” and that there is a substantial amount of imprecision in how force is measured and rates of force are computed.

National estimates of police use of force

For more than 30 years, criminologists have regularly complained about the absence of comprehensive, accurate, and timely national-level data on police use of lethal force [1,16,17], with one going so far as to lament that journalists did a better job reporting such events than criminologists or the Federal government [18]. The national controversy over the number of Black residents killed by the police finds little agreement between protestors [19] and high-level law enforcement officials [20] except for the need for accurate and up-to-date national data on the number of homicides by the police. However, even if current efforts using open sources [21,22] or planned efforts for surveying law enforcement agencies [23] were to be successful, those efforts alone would tell us virtually nothing about the nature and extent of the far larger (and largely unknown) amount of force used by police officers that does not result in death or even serious injury. As the widely publicized incident in Baltimore involving Freddie Gray demonstrated [24], the difference between incidents of lethal and non-lethal force can easily reflect the behavior of medical transport services and the proximity of high-quality trauma centers and not necessarily the behavior of residents or sworn police officers [25,26].

At the present time, there are two sources of data that have been used to produce national estimates of the amount and rate of force used by the police in the United States. The first source is the Police Public Contact Survey (PPCS), conducted every three years between 2002 and 2011 by the Bureau of Justice Statistics (BJS) as a supplement to the National Crime Victimization Survey (NCVS), a nationally representative sample of households [7,27–29]. The second type of data on uses of force comes from government-funded but privately implemented sample surveys of law enforcement organizations [9,10,14,30].
While surveys of law enforcement organizations can capture all types of force used regardless of whether the force used results in injury or death, by the nature of its design, the PPCS cannot capture force incidents that result in death. Both of these approaches—the surveys of residents and surveys of law enforcement agencies—have methodological strengths and weaknesses; however, because of design limitations and implementation difficulties, neither of these approaches has yet produced reliable national estimates of the amount of force, the rate of force, or the correlates of force.

Police public contact survey

For each of the four waves of the PPCS, the intended sample was all English-speaking persons over 15 years of age who responded to the NCVS. Designed and funded by BJS, the NCVS is implemented on a continuing basis by the U.S. Bureau of the Census. After responding to questions about their crime victimizations during the past 6 months as part of the NCVS, a sub-sample of individuals is asked to complete a supplemental interview about their face-to-face contacts with the police during the past 12 months. Among those individuals reporting face-to-face contact with the police, the PPCS asks, among other things, if the police used force, what type of force, and whether the respondent was arrested.

While the design of the PPCS program is to measure how often the public has contact with the police, the actual implementation of this survey varied from the design in several ways. For instance, in 2011, the NCVS survey was completed by 62,280 (88.0 percent) of the intended nationally representative sample of 70,773 individuals (the NCVS is a household survey; the figures used here rely on BJS counts of individuals), but the 2011 national estimates of force were produced based on responses from 41,408 individuals; this is 66.5 percent of the NCVS respondents and 58.5 percent of the originally selected nationally representative sample (Table 1). BJS reports that 18 percent of the intended NCVS sample were excluded because individuals did not speak English, refused to complete the survey, were non-interviews, or were included in the NCVS only by proxy—another person in their family reported their victimization experiences. In addition, for methodological purposes, the 2011 design of the PPCS called for 15 percent of the available sample to use the 2008 survey instrument [7].
In all four waves of the PPCS, BJS publications used the relationship between the use of force and the age, race and sex of the respondents in the survey to estimate the amount of force experienced by the intended respondents who were not surveyed and then used the sampling probabilities of the entire survey to produce a national estimate for the amount and rate of force. Thus, in 2011, the responses from 58.5 percent of the nationally representative NCVS sample were used to produce national level estimates of force.
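The estimation step described here can be sketched as a simple inverse-probability weighting calculation; the respondent records and selection probabilities below are invented purely for illustration and do not reproduce BJS's actual weighting procedure:

```python
# Hypothetical sketch of design-weighted estimation: each respondent is
# weighted by the inverse of their selection probability, so sampled
# individuals stand in for similar people who were not surveyed.
respondents = [
    # (experienced_force: 1 or 0, selection_probability)
    (0, 0.0002),
    (0, 0.0002),
    (1, 0.0001),
    (0, 0.0003),
    (0, 0.0001),
]

# Estimated population represented by the sample: sum of 1 / p_i.
estimated_population = sum(1 / p for _, p in respondents)

# Estimated number of persons experiencing force: sum of y_i / p_i.
estimated_force = sum(y / p for y, p in respondents)

print(f"persons represented:        {estimated_population:,.0f}")
print(f"persons experiencing force: {estimated_force:,.0f}")
```

The sketch shows why the quality of a national estimate depends on how well the surveyed 58.5 percent resemble the intended sample: the weights project each response onto everyone it is assumed to represent.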

Table 1. Police-public contact survey (PPCS) intended and actual interviews, 2011. https://doi.org/10.1371/journal.pone.0192932.t001

In another set of exclusions to the PPCS sample, a number of other individuals whose residences were eligible to be included in the NCVS during the time period covered by the PPCS but who were in hospitals, mental institutions, halfway houses, jails or prisons at the time the PPCS survey was conducted were not included in the NCVS or PPCS samples. The actual size of all these exclusions is unknown and is not incorporated into PPCS estimates of force (however, it has been estimated that including recently jailed inmates would increase the PPCS estimates of force by 17 percent [2]). Thus, the PPCS’s sample may not well represent the population of all residents who come into contact with the police, have force used against them or are arrested, including individuals living in circumstances which make them more likely to have such interactions with the police than individuals of a similar age, race or sex.

In addition to the survey response rates, a second limitation of the PPCS is the many and varied ways that it has measured police-public contacts and uses of force over time. For instance, the four BJS published reports from the PPCS have each reported force differently. Four types of force were reported for the 2002 PPCS: pushed, kicked, pointed a gun, and other. For the 2005 PPCS, the use of a chemical agent was reported. For the 2008 PPCS, the use of electrical weapons and shouting/cursing were added. Lastly, for the 2011 PPCS, handcuffing was added as a use of force. For none of the PPCS waves is the discharge of a firearm reported.

Beyond using a variety of force types over the years, the PPCS also changed the format of the force question. In the first three waves, the PPCS first determined if the respondent had a face-to-face contact with a police officer in the past year.
If they did, the respondents were then asked a series of questions about the nature of the contact, including whether the police used force against them. If the respondent said “yes” to the initial use of force question, they were then asked what type of force was used. In 2011, the PPCS survey was changed in two major ways. First, each respondent was asked the same general question about experiencing force that was used in prior surveys, as well as nine specific questions about particular types of force. The second change involved asking questions about force only of respondents who were 1) stopped by the police on the street or 2) stopped while driving a car.

Based on these methods, the PPCS reported that force was used or threatened against less than one third of one percent of residents in 2002 and 2005. In 2008, the rate dropped to about one-fourth of one percent (Table 2). Based on these rates, the estimated number of uses of force was 664,280 for 2002, 707,522 for 2005 and 574,070 for 2008. The reports from the first three waves of the PPCS also distinguish between three types of behavior: 1) physical force, 2) verbal threats, and 3) shouting and cursing. Physical force, described as any physical contact including pushing, hitting, kicking, and weapon use, constituted about 55 percent of these three behaviors. Thus, the rates of physical force for those years are about half of what is reported for all types of force—less than one sixth of one percent of U.S. residents—which is among the lowest rates of force reported in existing studies of police use of force [2].

Table 2. Police-public contact survey (PPCS) contacts, arrests, use of force, 2002–2011. https://doi.org/10.1371/journal.pone.0192932.t002

The BJS reports for the 2002, 2005 and 2008 PPCS provide the rate and number of uses of force separately for drivers in traffic stops. For all drivers, the estimated number of incidents of any type of force in those years was 188,822, 142,919, and 160,000, and the rates of any type of force for drivers were less than one tenth of one percent. The 2011 PPCS produced an estimate of 1,610,565 incidents, a count ten times larger than the average number of incidents from the prior three waves of the PPCS. This difference is probably the result of new screening questions about contacts with the police and about uses of force. Drivers in a traffic stop are the only group for which a measure of force is reported in both the 2008 and the 2011 reports, and the reported rate of force per driver increased almost 10 times, from 0.08 percent in 2008 [29] to 0.76 percent in 2011 [7].

There are two additional problems with using the PPCS to measure force. First, the triennial BJS reports [7,27–29] only use the most recent incident where a contact with the police involved force; additional contacts where a use of force is reported are not counted. The second problem not addressed by the 2011 revisions is the inconsistency in the PPCS and other national estimates of arrests, traffic stops and motor vehicle accidents. In the 2002, 2005 and 2008 PPCS surveys, the estimated number of drivers arrested ranges from 427,803 to 459,238 (see Table 2). In the 2011 survey, the PPCS estimated that there were only 264,042 drivers arrested [7]. In addition, during 2002 – the only year that estimates of arrests for all residents are reported in the PPCS – the estimated number of arrests for all residents was 1.3 million, which is about 10 percent of the FBI national estimate of 12 million arrests during 2011 [31].
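The scale of the 2008-to-2011 change for drivers can be checked directly from the reported per-driver rates; the number of stopped drivers used below is hypothetical, included only to show how a change in rate propagates into an estimated incident count:

```python
# Reported per-driver rates of force from the 2008 and 2011 PPCS reports.
rate_2008 = 0.08 / 100  # 0.08 percent [29]
rate_2011 = 0.76 / 100  # 0.76 percent [7]

# Ratio of the two rates: the "almost 10 times" increase noted above.
ratio = rate_2011 / rate_2008
print(f"rate increased {ratio:.1f}x")

# A rate only becomes a count when multiplied by a denominator, so the
# same rate change scales any estimated number of incidents with it.
stopped_drivers = 26_000_000  # hypothetical denominator for illustration
print(f"implied incidents at 2008 rate: {rate_2008 * stopped_drivers:,.0f}")
print(f"implied incidents at 2011 rate: {rate_2011 * stopped_drivers:,.0f}")
```

Because the rate and the estimated count move together, the roughly tenfold jump in the per-driver rate and the roughly tenfold jump in estimated incidents are two views of the same measurement change, not independent findings.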
Using the PPCS to estimate uses of force is problematic due to sampling issues, question revisions, and the inconsistent definition and measurement of force and arrest used across the various versions of the survey. The exclusion of non-English speakers, jailed offenders, and other persons not covered in the NCVS or covered only through proxy interviews skews the representativeness of the PPCS sample in ways not addressed by the statistical weights. The measurement of police-public contacts and subsequent arrests varies substantially across waves of the PPCS, further limiting use of the four waves of the PPCS to produce rates of force per police contact or per arrest. Lastly, just as victimization surveys cannot measure homicide, the PPCS cannot measure police use of lethal force.

Measuring force with administrative surveys

An alternative approach to measuring police use of force is to survey law enforcement agencies. There are four independent research efforts that have attempted to capture the existing information in law enforcement records to estimate the amount and rate of force using surveys of law enforcement agencies in the U.S. [9,10,14,30]. While all of these studies surveyed State and local general purpose law enforcement agencies, they varied greatly in the size and nature of their sample, the rate at which agencies responded to the survey and the rate at which responding agencies responded to questions about the amount of force (Table 3). These four surveys also varied in how force and rates of force were defined and measured.

Table 3. National surveys of law enforcement agencies about police use of force. https://doi.org/10.1371/journal.pone.0192932.t003

In 1992, the Police Foundation surveyed U.S. state and local law enforcement agencies to collect information about their use of force policies and practices [10,32]. A sample of 1,697 state and local police agencies and sheriff’s offices was asked if they mandated reporting for 18 different types of police behavior ranging from firearm discharges to handcuffing. Among the 1,111 responding agencies, all state agencies and about 95 percent of sheriffs and local police agencies mandated reporting the number of individuals shot and shot at. Mandatory reporting for incidents that involved the use of other weapons ranged from 93.8 percent for firearm discharges to 70.2 percent for chemical agents; the rate of mandated reporting varied from 66.6 percent to 19.2 percent for incidents where the police made physical contact without using any weapons [10].

Pate and Fridell reported the rate of force per 1,000 sworn officers for each of the 18 types of force, separately for sheriffs, county police agencies, city police agencies and State agencies. For instance, based on the 557 responding municipal police departments, Pate and Fridell reported there were 4.1 firearm discharges for every 1,000 sworn officers; among the 409 city agencies that reported the use of weaponless tactics, there were 272 uses of weaponless tactics for every 1,000 sworn officers. Thus, they report rates per force type per officer for 18 different but overlapping samples of agencies. In addition, a force incident could involve more than one type of force. Given these limitations, Pate and Fridell did not report rates of force for all types of agencies, nor did they summarize the amount or rate of force across all 18 types of police behavior.
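Pate and Fridell's per-officer rate construction, and why the 18 type-specific rates cannot simply be summed into one overall rate, can be sketched as follows; the incident and officer totals below are hypothetical, chosen only so the resulting rates match the 4.1 and 272 per 1,000 figures quoted above:

```python
# Sketch of a per-officer rate: incidents of one force type divided by
# the sworn officers of the agencies reporting *that* force type.
def rate_per_1000_officers(incidents, sworn_officers):
    return incidents / sworn_officers * 1000

# force type -> (incidents, sworn officers among the agencies reporting
# that force type). Each type has its own responding subset, so each
# rate has a different denominator. Figures are hypothetical.
reports = {
    "firearm discharge": (41, 10_000),
    "weaponless tactics": (1_360, 5_000),
}

for force_type, (incidents, officers) in reports.items():
    rate = rate_per_1000_officers(incidents, officers)
    print(f"{force_type}: {rate:.1f} per 1,000 officers")
```

Because each of the 18 rates is computed over a different (and overlapping) set of agencies, adding them would mix denominators, which is one reason no overall rate was reported.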
In 1997, 571 agencies responded to a survey sent by the Police Executive Research Forum (PERF) [9] to a total of 832 sheriff’s departments and municipal police agencies in U.S. jurisdictions over 50,000 in population. Among the 571 responding agencies, 265 provided information on the number of times their officers used physical force, a chemical agent or any other weapon to control a suspect during 1996. For the 265 agencies with data on force, the data on all three types of force were combined, with a resulting median rate of force equal to 76 incidents per 100,000 residents. It was also reported that across agencies the rate of force ranged from 0.24 to 868 incidents per 100,000 residents [9]. In a multivariate model, it was reported that there were lower rates of force in Northeastern states and in jurisdictions with lower violent crime rates, and no differences in the rates of force for agencies that were accredited or agencies that had employee unions [9].

In 2001, the International Association of Chiefs of Police (IACP) presented the final report of a project that measured five types of police uses of force–physical, chemical, electronic, impact and firearm–over a seven-year period from 1994 through 2000 [14]. This project relied on the voluntary submission of reports to the IACP from U.S. law enforcement agencies. Over the life of the project, there were a total of 564 annual submissions including 177,000 incidents of force, of which more than 80 percent involved the use of weaponless tactics. There were 112 agencies participating in 1995 and 228 in 2000. In the other five years, fewer than 100 agencies contributed data on incidents of force. Based on data from 1999 (the last year for which complete data were available), the IACP reported that police used force at a rate of 3.61 times per 10,000 calls for service to police agencies.
In 2005, researchers associated with PERF [30] developed a survey that asked questions about each agency’s use of 10 types of police behavior during 2003, 2004 and 2005, and sent it to a nationally representative sample of 950 law enforcement agencies. A total of 516 agencies responded to the 2005 survey. In 2009, another survey was sent to the 516 agencies asking about the same 10 items; 327 of the 516 agencies provided information about their uses of force in 2006, 2007 and 2008. The data were used to report the average rate of force per agency for each of the ten types of force for 2005 through 2008. For instance, the lowest rate of force was for the number of civilians shot and killed per law enforcement agency, which was 0.05 in 2005, 2007 and 2008. In 2006, the rate for this type of force was 0.06. On the other hand, the most frequently reported type of force–empty hand tactics–ranged from 18.56 per agency in 2005 to less than 12.00 for the other three years. Similar to [9,10], the responses were weighted to conform to the characteristics of their original sample, but they did not report an estimate for the amount or rate of all uses of force for the United States.

These four agency surveys are fairly consistent in the types of police behavior that they include in their measure of force (S1 Table); however, two of them provide a more detailed listing [10,30]. Virtually all of the types of police behavior in all four studies involve the use of physical force; the one exception is pointing or unholstering a firearm [10,30]. There is less agreement among these four studies on whether force is measured by force type, by incident or by officer report. In Pate & Fridell’s study [10] the unit of measurement is the type of force, not the force incident. Taylor, et al. [30] explicitly addressed this issue by requesting that the agencies specify whether their counts of force are based on incidents or on separate reports from all the officers on the scene.
They found that most agencies could report force counts by incident or by officer reports; 35 percent of the agencies could only report incidents and 13 percent could only report officer counts. The remaining 52 percent could produce both, and so they reported separate analyses using incident counts and officer counts [30]. Alpert and MacDonald [9] and the IACP [14] did not specify which units of measurement were used, but they combined counts across types of force, which suggests they had incident-level data.
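The incident-versus-officer-count distinction can be made concrete with a small sketch over hypothetical records:

```python
# Hypothetical use-of-force records: one row per officer report, where
# several officers may file reports about the same incident.
officer_reports = [
    {"incident_id": "A-1", "officer_id": 101},
    {"incident_id": "A-1", "officer_id": 102},  # same incident, 2nd officer
    {"incident_id": "A-2", "officer_id": 101},
    {"incident_id": "A-3", "officer_id": 103},
]

# Officer count: every report counts once.
officer_count = len(officer_reports)

# Incident count: deduplicate on the incident identifier.
incident_count = len({r["incident_id"] for r in officer_reports})

print(f"officer-report count: {officer_count}")
print(f"incident count:       {incident_count}")
```

An agency whose records look like these rows would report four uses of force under officer counting but three under incident counting, which is why combining counts across agencies that use different units inflates or deflates any pooled total.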