PREVENTING CRIME:

WHAT WORKS, WHAT DOESN'T,

WHAT'S PROMISING1

A REPORT TO THE UNITED STATES CONGRESS

Prepared for the National Institute of Justice

by

Lawrence W. Sherman

Denise Gottfredson

Doris MacKenzie

John Eck

Peter Reuter

Shawn Bushway

in collaboration with members of the Graduate Program

Department of Criminology and Criminal Justice

University of Maryland

Scientific Advisers

Ronald V. Clarke

Dean and Professor

School of Criminal Justice

Rutgers University

Philip Cook

Professor of Public Policy

Duke University

David Farrington

Professor of Psychological Criminology

Cambridge University

Karol Kumpfer

Associate Professor of Health Education

University of Utah

Joan Petersilia

Professor of Criminology, Law and Society

University of California, Irvine

Michael Tonry

Sonosky Professor of Law

University of Minnesota

Roger Weissberg

Professor of Psychology

University of Illinois at Chicago

Charles Wellford

Professor of Criminology and Criminal Justice

University of Maryland, College Park

Partial List of Collaborating Graduate Students

Todd Armstrong, M.A.

Katherine Culotta

Laurie Alphonse, M.A.

Cynthia Lum, M.A.

Jennifer Borus

Jeffrey Bouffard, M.A.

Lynn Exum, M.A.

Veronica Puryear

John Ridgely

Stacy Skobran, M.A.

Shannon Womer

Richard Lewis, M.A.

Christine Depies

Shawn J. Anderies

Mohammed Bin Kashem, M.A.

Julie Kiernan

Aimee C. Kim

Daniel R. Lee, M.A.

Patti A. Mattson

Jennifer R. Smith

David A. Soule

Stephanie L. Weiner

NOTES

1This report was supported by National Institute of Justice Grant Number 96MUMU0019 to the University of Maryland at College Park. Points of view or opinions stated herein are those of the authors and do not necessarily represent the official views of the United States Department of Justice.

Table of Contents

Overview

1. Introduction: The Congressional Mandate to Evaluate

Lawrence W. Sherman

2. Thinking About Crime Prevention

Lawrence W. Sherman

3. Communities and Crime Prevention

Lawrence W. Sherman

4. Family-Based Crime Prevention

Lawrence W. Sherman

5. School-Based Crime Prevention

Denise Gottfredson

6. Labor Markets and Crime Risk Factors

Shawn Bushway and Peter Reuter

7. Preventing Crime at Places

John Eck

8. Policing for Crime Prevention

Lawrence W. Sherman

9. Criminal Justice and Crime Prevention

Doris L. MacKenzie

10. Conclusions: The Effectiveness of Local Crime Prevention Funding

Lawrence W. Sherman

Appendix: Methodology for this Report

Lawrence W. Sherman and Denise Gottfredson

PREVENTING CRIME: AN OVERVIEW

by Lawrence W. Sherman

Mandate. In 1996 Congress required the Attorney General to provide a "comprehensive evaluation of the effectiveness" of over $3 billion annually in Department of Justice grants to assist State and local law enforcement and communities in preventing crime. Congress required that the research for the evaluation be "independent in nature," and "employ rigorous and scientifically recognized standards and methodologies." It also called for the evaluation to give special emphasis to "factors that relate to juvenile crime and the effect of these programs on youth violence," including "risk factors in the community, schools, and family environments that contribute to juvenile violence." The Assistant Attorney General for the Office of Justice Programs asked the National Institute of Justice to commission an independent review of the relevant scientific literature, which comprises more than 500 program impact evaluations.

Primary Conclusion. This Report finds that some prevention programs work, some do not, some are promising, and some have not been tested adequately. Given the evidence of promising and effective programs, the Report finds that the effectiveness of Department of Justice funding depends heavily on whether it is directed to the urban neighborhoods where youth violence is highly concentrated. Substantial reductions in national rates of serious crime can only be achieved by prevention in areas of concentrated poverty, where the majority of all homicides in the nation occur, and where homicide rates are 20 times the national average.

Primary Recommendation. Because the specific methods for preventing crime in areas of concentrated poverty are not well-developed and tested, the Congress can make most effective use of DOJ local assistance funding by providing better guidance about what works. A much larger part of the national crime prevention portfolio must be invested in rigorous testing of innovative programs, in order to identify the active ingredients of locally successful programs that can be recommended for adoption in similar high-crime urban settings nation-wide.

SECONDARY CONCLUSIONS. The Report also reaches several secondary conclusions:

o Institutional Settings. Most crime prevention results from informal and formal practices and programs located in seven institutional settings. These institutions appear to be "interdependent" at the local level, in that events in one of these institutions can affect events in others that in turn can affect the local crime rate. These are the seven institutions identified in Chapter Two:

* Communities

* Families

* Schools

* Labor Markets

* Places (specific premises)

* Police

* Criminal Justice

o Effective Crime Prevention in High-Violence Neighborhoods May Require Interventions in Many Local Institutions Simultaneously. The interdependency of these local institutions suggests a great need for rigorous testing of programs that simultaneously invest in communities, families, schools, labor markets, place security, police and criminal justice. Operation Weed and Seed provides the best current example of that approach, but receives a tiny fraction of DOJ funding.

o Crime Prevention Defined. Crime prevention is defined not by intentions or methods, but by results. There is scientific evidence, for example, that both schools and prisons can help prevent crime. Crime prevention programs are neither "hard" nor "soft" by definition; the central question is whether any program or institutional practice results in fewer criminal events than would otherwise occur. Chapter Two presents this analysis.

o The Effectiveness of Federal Funding Programs. The likely impact of federal funding on crime and its risk factors, especially youth violence, can only be assessed using scientifically recognized standards in the context of what is known about each of the seven institutions. Chapter One presents the scientific basis for this conclusion. Each of the chapters on the seven institutional settings concludes with an analysis of the implications of the scientific findings for the likely effectiveness of the Department of Justice Programs.

o What Works in Each Institution. The available evidence does support some conclusions about what works, what doesn't, and what's promising in each of the seven institutional settings for crime prevention. These conclusions are reported at the end of each of Chapters 3-9. In order to reach these conclusions, however, the Report uses a relatively low threshold for the strength of scientific evidence. This threshold is far lower than ideal for informing Congressional decisions about billions of dollars in annual appropriations, and reflects the limitations of the available evidence.

o Stronger Evaluations. The number and strength of available evaluations is insufficient for providing adequate guidance to the national effort to reduce serious crime. This knowledge gap can only be filled by Congressional restructuring of the DOJ programs to provide adequate scientific controls for careful testing of program effectiveness. DOJ officials currently lack the authority and funding for strong evaluations of efforts to reduce serious violence.

o Statutory Evaluation Plan. In order to provide the Department of Justice with the necessary scientific tools for program evaluations, the statutory plan for evaluating crime prevention requires substantial revision. Scientifically recognized standards for program evaluations require strong controls over the allocation of program funding, in close coordination with the collection of relevant data on the content and outcomes of the programs. The current statutory plan does not permit the necessary level of either scientific controls on program operations or coordination with data collection. Funds available for data collection have also been grossly inadequate in relation to scientific standards for measurement of program impact.

Chapter Ten presents a statutory plan for accomplishing the Congressional mandate to evaluate with these elements:

1. Earmark ten percent of all DOJ funding of local assistance for crime prevention (as defined in this Report) for operational program funds to be controlled by a central evaluation office within OJP.

2. Authorize the central evaluation office to distribute the ten percent "evaluated program" funds on the sole criterion of producing rigorous scientific impact evaluations, the results of which can be generalized to other locations nationwide. Allocating these funds for field testing purposes simply adds to the total funding for which any local jurisdiction is eligible. Thus the "evaluated program" funding becomes an additional incentive to cooperate with the scientific evaluation plan on a totally voluntary basis.

3. Set aside an additional ten percent of all DOJ funding of local assistance for crime prevention to support the conduct of scientific evaluations by the central evaluation office. This recommendation makes clear the true expense of using rigorous scientific methods to evaluate program impact. Victimization interviews, offender self-reports of offending, systematic observation of high crime locations, observations of citizen-police interaction, and other methods can all cost as much as or more than the program being evaluated.
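The arithmetic of the two set-asides is straightforward. As an illustrative sketch only, using the Report's FY 1996 figures of $1.4 billion (COPS) and $1.8 billion (OJP) as the local assistance base:

```python
# Sketch of the two proposed ten-percent set-asides, using the FY 1996
# local assistance figures cited in this Report as the base.
cops_funding = 1.4e9  # Office of Community Oriented Policing Services
ojp_funding = 1.8e9   # Office of Justice Programs local assistance

local_assistance_base = cops_funding + ojp_funding  # $3.2 billion

# Earmark: ten percent as "evaluated program" operational funds.
evaluated_program_funds = 0.10 * local_assistance_base
# Set-aside: an additional ten percent to pay for the evaluations themselves.
evaluation_support_funds = 0.10 * local_assistance_base

print(f"Evaluated program funds: ${evaluated_program_funds / 1e6:.0f} million")
print(f"Evaluation support:      ${evaluation_support_funds / 1e6:.0f} million")
```

Under these assumptions, each set-aside would amount to roughly $320 million per year.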

DEPARTMENT OF JUSTICE FUNDING FOR LOCAL CRIME PREVENTION

Chapter One describes the basic structure and mechanisms for Department of Justice FY 1996 funding of State and local governments and communities for assistance in crime prevention. The two major categories are $1.4 billion in funding of local police by the Office of Community Oriented Policing Services (COPS), and $1.8 billion in local crime prevention assistance funding of a wide range of institutions by the Office of Justice Programs (OJP).1 This review examines both the relatively small funding for discretionary grants by DOJ, many of which are determined by Congressional "earmarks" to particular grantees and programs, and formula grants, which are distributed to State or local governments based on statutory criteria such as population size or violent crimes.

The principal OJP offices administering both types of grants, and the FY 1996 programs they administer, are:

o Bureau of Justice Assistance: the $503 million Local Law Enforcement Block Grants, the $475 million Byrne Formula Grants, and the $32 million Byrne Discretionary Grants.

o Office of Juvenile Justice and Delinquency Prevention: the $70 million Juvenile Justice Formula Grants and the $69 million Competitive Grants.

o Violence Against Women Grants Office: the $130 million STOP Violence Against Women Formula Grants and $28 million in Discretionary Grants To Encourage Arrests.

o Corrections Program Office: $405 million in Formula Grants for prison construction and a $27 million grants program for substance abuse treatment of prison inmates.

o Drug Courts Program Office: $15 million (from LLEBG) for local drug courts.

o Executive Office of Weed and Seed: the $28 million (from Byrne) Federal component of the Weed and Seed Program in selected high-crime inner-city areas.
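As a rough consistency check, the itemized grants can be tallied against the $1.8 billion OJP figure cited above. The tally below is an illustration only; the Drug Courts ($15 million, from LLEBG) and Weed and Seed ($28 million, from Byrne) amounts are funded out of grants already listed, so they are omitted to avoid double counting:

```python
# Illustrative tally (millions of FY 1996 dollars) of the OJP grant
# programs itemized in the text. Drug Courts and Weed and Seed are
# carved out of LLEBG and Byrne funds respectively, so they are not
# counted a second time here.
ojp_grants_millions = {
    "Local Law Enforcement Block Grants": 503,
    "Byrne Formula Grants": 475,
    "Byrne Discretionary Grants": 32,
    "Juvenile Justice Formula Grants": 70,
    "Juvenile Justice Competitive Grants": 69,
    "STOP Violence Against Women Formula Grants": 130,
    "Grants To Encourage Arrests": 28,
    "Prison Construction Formula Grants": 405,
    "Substance Abuse Treatment Grants": 27,
}

total_millions = sum(ojp_grants_millions.values())
print(f"Itemized OJP crime prevention grants: ${total_millions} million")
```

The itemized programs sum to about $1.74 billion, close to the $1.8 billion figure in the text; the remainder covers smaller programs not itemized here.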

SCIENTIFIC STANDARDS FOR PROGRAM EVALUATIONS

The Omnibus Crime Control and Safe Streets Act of 1968 defines an "evaluation" as "the administration and conduct of studies and analyses to determine the impact and value of a project or program in accomplishing the statutory objectives of this chapter."2 By this definition, an evaluation cannot be only a description of the implementation process, or "monitoring" or "auditing" the expenditure of the funds. Such studies can be very useful for many purposes, including learning how to implement programs. But they cannot show whether a program has succeeded in causing less crime, and if so by what magnitude. Nor can the results be easily generalized.

The scientific standards for inferring causation have been clearly established and have been used in other Reports to the Congress to evaluate the strength of evidence included in each program evaluation. With some variations in each setting, the authors of the present Report use an adapted version of the scoring system employed in the 1995 National Structured Evaluation by the Center for Substance Abuse Prevention. The system is used to rate available evaluations on a "scientific methods score" of 1 through 5. The scores generally reflect the level of confidence we can place in the evaluation's conclusions about cause and effect. Chapter Two describes the specific procedures followed in the application of this 1-5 rating system, as well as its limitations.

Deciding What Works

The scientific methods scores reflect only the strength of evidence about program effects on crime, and not the strength of the effects themselves. Due to the general weakness of the available evidence, the Report does not employ a standard method of rating programs according to the magnitude of their effect size. It focuses on the prior question of whether there is reasonable certainty that a program has any beneficial effect at all in preventing crime. The limitations of the available evidence for making this classification are discussed in Chapter Two. We note these limitations as we respond to the mandate for this Report and classify major local crime prevention practices in each institutional setting as follows:

What Works. These are programs that we are reasonably certain prevent crime or reduce risk factors for crime in the kinds of social contexts in which they have been evaluated, and for which the findings should be generalizable to similar settings in other places and times. Programs coded as "working" by this definition must have at least two level 3 evaluations with statistical significance tests and the preponderance of all available evidence showing effectiveness.

What Doesn't Work. These are programs that we are reasonably certain fail to prevent crime or reduce risk factors for crime, using the identical scientific criteria used for deciding what works.

What's Promising. These are programs for which the level of certainty from available evidence is too low to support generalizable conclusions, but for which there is some empirical basis for predicting that further research could support such conclusions. Programs are coded as "promising" if they were found effective in at least one level 3 evaluation and the preponderance of the remaining evidence supports that conclusion.

What's Unknown. Any program not classified in one of the three above categories is defined as having unknown effects.
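Taken together, the four categories amount to a decision procedure over the scientific methods scores. As an illustration only (the data structure, and the reading of "preponderance" as a simple majority of findings, are assumptions for the sketch, not specifications from the Report), the logic can be expressed as:

```python
# Hypothetical sketch of the Report's classification rules.
# Each evaluation is a (methods_score, direction) pair: methods_score
# is the 1-5 scientific methods score; direction is +1 for a
# statistically significant beneficial effect, -1 for a significant
# harmful or ineffective finding, 0 for a null or ambiguous result.

def classify(evaluations):
    """Return 'works', "doesn't work", 'promising', or 'unknown'."""
    level3_effective = sum(1 for s, d in evaluations if s >= 3 and d > 0)
    level3_ineffective = sum(1 for s, d in evaluations if s >= 3 and d < 0)
    # "Preponderance of evidence" read here as a simple majority of
    # all available findings pointing the same way (an assumption).
    balance = sum(d for _, d in evaluations)

    if level3_effective >= 2 and balance > 0:
        return "works"
    if level3_ineffective >= 2 and balance < 0:
        return "doesn't work"
    if level3_effective >= 1 and balance > 0:
        return "promising"
    return "unknown"
```

For example, a program with two supportive level 3 evaluations and one weaker null finding would be classified as working; a program with a single supportive level 3 evaluation would be promising; a program tested only by level 2 studies would remain unknown.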

EFFECTIVENESS OF LOCAL CRIME PREVENTION PRACTICES

The scientific evidence reviewed focuses on the local crime prevention practices that are supported by both federal and local, public and private resources. Conclusions about the scientifically tested effectiveness of these practices are organized by the seven local institutional settings in which these practices operate.

Chapter 3: Community-Based Crime Prevention reviews evaluations of such practices as community organizing and mobilization against crime, gang violence prevention, community-based mentoring, and after-school recreation programs.

Chapter 4: Family-Based Crime Prevention reviews evaluations of such practices as home visitation of families with infants, preschool education programs involving parents, parent training for managing troublesome children, and programs for preventing family violence, including battered women's shelters and criminal justice programs.

Chapter 5: School-Based Prevention reviews evaluations of such practices as DARE, peer-group counseling, gang resistance education, anti-bullying campaigns, law-related education, and programs to improve school discipline and improve social problem-solving skills.

Chapter 6: Labor Markets and Crime Risk Factors reviews evaluations of the crime prevention effects of training and placement programs for unemployed people, including Job Corps, vocational training for prison inmates, diversion from court to employment placements, and transportation of inner-city residents to suburban jobs.

Chapter 7: Preventing Crime At Places reviews the available evidence on the effectiveness of practices to block opportunities for crime at specific locations like stores, apartment buildings and parking lots, including such measures as cameras, lighting, guards and alarms.

Chapter 8: Policing For Crime Prevention reviews evaluations of such police practices as directed patrol in crime hot spots, rapid response time, foot patrol, neighborhood watch, drug raids, and domestic violence crackdowns.

Chapter 9: Criminal Justice and Crime Prevention reviews the evidence on such practices as prisoner rehabilitation, mandatory drug treatment for convicts, boot camps, shock incarceration, intensively supervised parole and probation, home confinement and electronic monitoring.

EFFECTIVENESS OF DEPARTMENT OF JUSTICE FUNDING PROGRAMS

DOJ funding supports a wide range of practices in all seven institutional settings, although much more so in some than in others. Congress has invested DOJ funding most heavily in police and prisons, with very little support for the other institutions. The empirical and theoretical evidence shows that other settings for crime prevention are also important, especially in the small number of urban neighborhoods with high rates of youth violence. Thus the statutory allocation of investments in the crime prevention "portfolio" is lop-sided, and may be missing out on some major dividends.

The effectiveness of existing DOJ funding mechanisms is assessed at the end of each chapter on local crime prevention practices. The following list of major funding programs provides an index to the Chapters in which specific practices funded by each of them are discussed:

Community Policing: Chapters 8 and 10.

Local Law Enforcement Block Grant Program: Chapters 3, 7, 8 and 10.

Byrne Memorial Formula & Discretionary Grants Program: Chapters 3, 4, 5, 6, 8, 10.

Juvenile Justice Formula and Competitive Programs: Chapters 3, 4, 5, 8, 9 and 10.

Operation Weed and Seed: Chapters 3, 4, 8 and 10.

STOP Violence Against Women Grants: Chapters 3, 8, and 10.

Grants to Encourage Arrest Policies: Chapters 3, 8 and 10.

Violent Offender Prison Construction: Chapters 9 and 10.

Drug Courts Competitive Grants: Chapters 9 and 10.

CONCLUSION

The great strength of federal funding of local crime prevention is the innovative strategies it can prompt in cities like New York, Boston, and Kansas City (MO), where substantial reductions have recently occurred in homicide and youth violence. The current limitation of that funding, however, is that it does not allow the nation to learn why some innovations work, exactly what was done, and how they can be successfully adapted in other cities. In short, the current statutory plan does not allow DOJ to provide effective guidance to the nation about what works to prevent crime.

Yet despite the current limitations, DOJ has clearly demonstrated the contribution it can make by increasing such knowledge. The Department has already provided far better guidance to State and local governments on the effectiveness of all local crime prevention efforts than was available even a decade ago. Based on the record to date, only DOJ agencies, and not the State and local governments, have the available resources and expertise to produce the kind of generalizable conclusions Congress asked for in this report. The statutory plan this report recommends would enhance that role, and allow DOJ to accomplish the longstanding Congressional mandate to find generally effective programs to combat serious youth violence. By focusing that effort in the concentrated poverty areas where most serious crime occurs, the Congress may enable DOJ to reverse the epidemic of violent crime that has plagued the nation for three decades.

NOTES

1Total FY 1996 funding for the Office of Justice Programs was $2.7 billion, including $228 Million in collections for the Office for Victims of Crime.

242 U.S.C. Section 3791 (10)

Chapter One

INTRODUCTION: THE CONGRESSIONAL MANDATE TO EVALUATE

by Lawrence W. Sherman

For over three decades, the federal government has provided assistance for local crime prevention. Most of that assistance has been used to fund operational services, such as extra police patrols. A small part of that assistance has been used to evaluate operational services, to learn what works--and what doesn't--to prevent crime. Most of the operational funding to prevent crime, both federal and local, remains unevaluated by scientific methods (Blumstein et al., 1978; Reiss and Roth, 1993).

The Congress has repeatedly stated its commitment to evaluating crime prevention programs. In the early years of local assistance under the Omnibus Crime Control and Safe Streets Act of 1968, it was "probably the most evaluation-conscious of all the social programs initiated in the 1960s and 1970s" (Feeley and Sarat, 1980: 130). In 1972, the Congress amended the Act to require evaluations of the "demonstrable results" of local assistance grants. In 1988, the Congress generally limited federal assistance under the Anti-Drug Abuse Act Byrne Grants to programs or projects of "proven effectiveness" or a "record of success" as determined by evaluations.1 But then as now, the Congressional mandate to evaluate remains unfulfilled, for reasons of funding structure and levels inherent in local assistance legislation for three decades.2

This report responds to the latest in the long line of Congressional initiatives to insure that its local assistance funding is effective at preventing crime. It is a state-of-the-science report on what is known--and what is not--about the effectiveness of local crime prevention programs and practices. What is known helps to address the Congressional request for a scientific assessment of local programs funded by federal assistance. What is not known helps to address the underlying issue of the Congressional mandate to evaluate crime prevention, the statutory reasons why that mandate remains unfulfilled, and the scientific basis for a statutory plan to fulfil the mandate.

The report finds substantial advances in achieving the Congressional mandate in recent years. The scientific strength of the best evaluations has improved. The Department of Justice is making far greater use of evaluation results in planning and designing programs. Within the scope of severely constraining statutory limitations, the level of resources the Department of Justice has given to evaluation has increased. The 1994 Crime Act already contains piecemeal but useful precedents for a more comprehensive statutory plan to fulfil the mandate. By asking for this report, the Congress has opened the door for a major step forward in using the science of program evaluation better to prevent crime. That step is a clearer definition of what "effectiveness" means, and a clearer plan for using impact evaluations to measure effectiveness.

THE MANDATE FOR THIS REPORT

In the 104th United States Congress, the Senate approved a major new approach to local assistance program evaluation. The Senate bill would have required the Attorney General to "reserve not less than two percent, but not more than three percent of the funds appropriated" for several local assistance programs to "conduct a comprehensive evaluation of the effectiveness of those programs." This would have been the first statutory plan to adopt the principle of setting aside a certain percentage of DOJ's operational funds exclusively for program evaluation--a principle often endorsed by the same operational leaders whose funds would be affected,3 and one which has been adopted for other federal agencies.

The House version of the Justice Department's Appropriations bill did not include the evaluation set-aside plan, so a Conference Committee of the two chambers reached an agreement on this point. Rather than funding evaluations of the three specific programs named in the Senate version, the Conference Committee called for a comprehensive evaluation of the effectiveness of all Justice Department funding of local assistance for crime prevention. The Committee also required that the review be completed within nine months after the enactment of the legislation.

On April 27, 1996, the 104th United States Congress enacted the Conference Report (See Exhibit 1) requiring the Attorney General to provide an independent, comprehensive and scientific evaluation of the "diverse group of programs funded by the Department of Justice to assist State and local law enforcement and communities in preventing crime."4 The evaluation was required to focus on the effectiveness of these programs, defined in three ways:

o preventing crime, with special emphasis on youth violence

o reducing risk factors for juvenile violence, including those found in

-community environments

-schools

-families

o increasing protective factors against crime and delinquency

The legislation specifically required that the evaluation employ "rigorous and scientifically recognized standards and methodologies." In order to accomplish this task, the Assistant Attorney General for the Office of Justice Programs directed the National Institute of Justice (NIJ), in coordination with the Bureau of Justice Assistance (BJA), the Office of Juvenile Justice and Delinquency Prevention (OJJDP), and the Executive Office of Weed and Seed, to issue a competitive solicitation for proposals. On June 26, 1996, the National Institute of Justice released a solicitation that began the process of building the framework for this report to achieve the mandate of the 1996 legislation.

Exhibit 1

FRAMEWORK FOR THIS REPORT

This chapter presents the broad rationale for the framework used in this report. It begins with the scientific issues in the choice of the framework, and clarifies what the report is not. It sets the stage for the review with a brief introduction to the scope and structure of federal funding of local crime prevention programs. It then returns to the basic challenge of fulfilling the mandate to evaluate as an integral part of responding to the Congressional request for this report. The detailed plan for the rest of the report is then presented in Chapter Two.

Scientific Issues in The Choice of Framework

The 1996 legislation featured four key factors guiding the choice of methods for accomplishing the evaluation mandate: its breadth, its timing, its scientific standards, and its independence. The Justice Department programs in question cover a broad and complex array of activities. The short time period for producing the report ruled out any new evaluations of crime prevention effectiveness. Thus the requirement to employ scientific methods clearly implied a synthesis of already completed scientific studies.

The reliance on existing rather than new evaluations is clearly reflected in the NIJ solicitation, which called for "an evaluation review of the effectiveness of broad crime prevention strategies and types of programmatic activity ... [including] family, school, and community-based strategies and approaches, as well as law-enforcement strategies." The solicitation defined more specifically how the evaluation was to be conducted:

It is expected that this evaluation will not conduct new studies or engage in any detailed analysis of existing data. Rather, the evaluation review and report should draw upon existing research and evaluation studies and comprehensive syntheses of this work to produce a critical assessment of the state of knowledge, including its generalizability and its potential for replication....Also, the review must explicitly examine the research in light of the outcome measures specified in the Act as described above.

The Assistant Attorney General decided to award a grant to an independent research group to accomplish this mandate. The legislation required that the review's content be "independent in nature," even if provided "directly" (by federal employees) or by independent contractors or grantees. An anonymous panel appointed by NIJ evaluated the proposals submitted in response to the solicitation. On the basis of the peer-review panel's report, the Director of the National Institute of Justice selected the University of Maryland's Department of Criminology and Criminal Justice in early August, 1996 to conduct the Congressionally mandated evaluation due on January 27, 1997.

Once the University of Maryland was selected as the independent contractor, the strategic choices for accomplishing the mandate shifted to the team of six senior scientists who wrote this report. All decisions about the project were left in the hands of the Maryland criminologists, who bear sole responsibility for the work. That responsibility includes the technical choices we made about how to employ "rigorous and scientifically recognized standards and methodologies" most effectively in the limited time available to complete the report. The principal decision was to define the scope of the report as follows:

a critical assessment, based on a growing body of science, of the effectiveness of a wide range of crime prevention strategies, operated at the local level, with and without the support of federal funds.

This report is thus a review of scientific evaluations of categories of local programs and practices that are supported by broad categories of federal funds--often by several different "programs" of funding. Using systematic procedures described in Chapter Two and the appendix, the report attempts to sort the science of local crime prevention programs and practices supported by DOJ. It focuses primarily on the direct evaluation of local program operations, and uses those findings selectively to support indirect and theoretical assessments of some national funding streams based on findings about their specific parts.

Direct Evaluations of Local Program Operations. What rigorous science can evaluate most reliably is the effect of a specific program operated at a local level. This report identifies over 500 studies that attempt to do just that, with varying levels of scientific rigor. In a few areas, the science is rigorous enough, the studies are numerous enough, and the findings are consistent enough for us to draw some reasonably certain and generalizable conclusions about what works, what doesn't, and what is promising at the local level of operation. Such conclusions are not yet possible for most local crime prevention strategies. That fact requires the report to address the starting point for the legislation mandating this report: the need for far greater investment in program evaluation. But the growing OJP support for program evaluation in recent years helps to provide the raw material for the core of this report.

Indirect Evaluations of National Funding. In an effort to be as responsive to the Congress as possible, this report makes selective use of another approach to the scientific method. That approach uses evaluations of local programs to make indirect evaluations of federal funding streams. Those streams vary widely in their diversity, ranging from such relatively uniform programs as the hiring of the Crime Act's 100,000 police to the very diverse Local Law Enforcement Block Grants program. The extent to which it is scientifically appropriate to generalize upwards from local program evaluations to national funding streams varies as well. In general, the more homogeneous the federal funding stream, the more appropriate it is to evaluate the effectiveness of that funding based on local evaluations.

Theoretical Assessments of Unevaluated Programs. Where no rigorously scientific impact data are available on funding streams expending substantial tax dollars, the report employs theoretical analyses to provide limited assessments of the programs. A prime example is the numerous efforts that OJP is currently making to prevent crime in the concentrated urban ghetto poverty areas producing the majority of serious youth violence in America. These programs attempt to be comprehensive in addressing the crime risk factors in those areas, which allows a comparison of the program content to the available theory and data on risk factors. The need for scientific impact assessments of these programs, however, is critical, and the theoretical assessment should be seen merely as a stopgap approach required by the current lack of measured effects.

Comprehensiveness

This report attempts to be as comprehensive as the available science allows. It is not, however, an annotated list of DOJ local assistance programs with a summary of scientific evidence relating to each one. Such an encyclopedic approach would have several limitations. It would fail to identify important issues cutting across programs. It would fail to give greater attention to the more important crime risk factors identified in the literature. Most important, it would have nothing to say about a great proportion of the specific program components of DOJ local assistance programs, given the lack of available impact evaluations.

While the report attempts some form of scientific commentary for the major DOJ prevention funding streams, it omits direct commentary on many of the smaller, more diverse funding categories. We attempt not to omit, however, any published program impact evaluation meeting minimal standards of scientific rigor that helps show indirectly the effectiveness of the DOJ programs. Where such omissions have occurred, we anticipate that they can be corrected in a systematic effort to keep the present findings up to date in future years.

What This Report Is Not

The Congressional mandate did not require that this report include an audit of the use of Department of Justice (DOJ) funds, an evaluation of the leadership of DOJ's Office of Justice Programs (OJP) or Community Oriented Police Services (COPS) office, or a process or descriptive evaluation of specific programs at the local level supported with DOJ funds. None of these tasks falls within the required assessment of the scientific evidence of the effectiveness of local assistance funds administered by DOJ in preventing crime and risk factors.

Not an Audit of DOJ. Congress did not require the Attorney General to provide a detailed accounting of how DOJ local assistance funds are being spent. That kind of analysis requires auditing rather than scientific methodologies; the legislation clearly indicated the use of science. Knowing exactly how much money is being spent on Drug Courts, for example, does not alter the conclusions that can be reached by using scientific methods to examine the available studies of the effectiveness of drug courts. The report's concern with the expenditure of DOJ funds was limited to four questions that informed a scientific assessment:

1) Does DOJ funding support this kind of crime prevention program or practice?

2) If not, does the scientific evidence suggest Congress should consider funding it?

3) Are current funds allocated in relation to scientifically established crime risk factors?

4) Have the funds been allocated in a way that permits scientific impact evaluation?

Not an Evaluation of DOJ Leadership. The term "evaluation" is often understood to mean something like a report card, reflecting on the personal effectiveness of officials directing programs. There is even a substantial scientific literature in the field of industrial psychology for personnel or performance "evaluation" systems. The legislation clearly does not call for a performance evaluation, but for an evaluation of program effectiveness. The Congressional mandate to focus on the science of the programs does not require assessments, positive or negative, about the performance of DOJ leadership. In order to standardize the focus on the evidence, the report does not even employ interviews with DOJ leadership, and relies solely on analysis of legislation, written documents and publications about the programs they administer.

Not A Descriptive or Process Evaluation of DOJ Programs. The Congressional mandate clearly focuses on what scientists call "impact" evaluations, rather than "descriptive" or "process" evaluations. The distinction between the two kinds of evaluation is critical, but often misunderstood. Descriptive or process evaluations describe the nature of a program activity, usually in some detail. An impact evaluation uses scientific methods to test the theory that a program causes a given result or effect. Only an impact evaluation, therefore, can be used to assess the "effectiveness" of a program. Descriptive evaluations can provide useful data for interpreting impact results based on variations in the implementation of programs and interpretations of their effects. But they do not provide a sufficient response to the Congressional mandate.

Not A Technical "Meta-Analysis." Scientists are making increasing use of a statistical methodology called "meta-analysis," in which findings from many studies are analyzed together quantitatively. This method is important because it can produce different conclusions than a summary of findings from individual studies, largely by increasing the sample size available for analysis. There are no currently published statistical meta-analyses comparing the effectiveness of the full array of crime prevention strategies, from Head Start to prisons. There are several meta-analyses on specific crime prevention strategies included in the evidence used for this report. The Congressional requirements for rapid production of this report, however, ruled out a formal meta-analysis of the evaluation results across all crime prevention programs.

Evaluating Funding Mechanisms Versus Prevention Programs

The legislation did not define DOJ crime prevention "programs" as the large general funding streams. The focus on effectiveness clearly directs the report to specific crime prevention strategies. A substantial scientific literature is available on the crime prevention effectiveness of the specific strategies. We could find no existing impact evaluation, however, of such general funding streams as the Byrne Memorial State and Local Law Enforcement Assistance Program. This fact raises several key issues: the definition of "programs," the science of varying treatments, and the barriers such variations raise to direct evaluation of internally diverse national funding streams.

Defining "Programs." A major source of confusion in policy analysis of federal crime prevention is the meaning of the word "program." The meanings vary on several dimensions. One dimension is the level of government: if the federal Byrne Program funds a neighborhood watch program in Baltimore, which one is the DOJ "program" this report should evaluate for the Congress: Byrne or Baltimore's neighborhood watch? Or should the evaluation focus fall in between those two levels of analysis, addressing what is known generally about neighborhood watch programs? This report takes the latter approach.

The meanings of the term "program" also vary with respect to the required degree of internal uniformity. Neighborhood watch "programs," for example, are fairly uniform in their content, despite some variations. A national community policing "program," in contrast, embraces a far wider range of activities and philosophies, ranging from aggressive zero tolerance enforcement campaigns "fixing broken windows" (Kelling and Coles, 1996) to outreach programs building partnerships between police and all segments of the community (Skogan, 1990).

Science and Varying Treatments. The tools of the scientific method are only as useful as the precision of the questions they answer. Medical science, for example, evaluates the effectiveness of specific treatments; it is rarely able to establish the controls needed to evaluate broad categories of funding embracing multiple or varying treatments, such as "hospitals" or even "antibiotics." Variations in treatment place major limitations on the capacity of science to reach valid conclusions about cause and effect. The scientific study of aspirin, for example, assumes that all aspirin has identical chemical components; violating that assumption in any given study clearly weakens the science of aspirin effectiveness. The same is true of crime prevention programs. The more a single program varies in its content, the less power science has to draw any conclusions about "the" program's content (Cohen, 1977; Weisburd, 1993).

Compare a study of the effects of a sample of 5,000 men taking aspirin to a study of the same sample taking different pills selected arbitrarily from an entire pharmacy of choices. Any changes in health would be more clearly understood with the aspirin study than with the pharmacy evaluation. Even if the whole pharmacy of pills were taken only on doctor's orders, based on a professional assessment of the most appropriate pills for each patient, wrapping all of the different pills' effects into the same evaluation of effectiveness would prevent an assessment of what effect each medicine had. Science is far more effective at evaluating one kind of pill at a time than at drawing conclusions about different pills based upon a pharmacy evaluation.

Direct Evaluations of National Funding Programs. Any attempt to evaluate directly an internally diverse national funding program is comparable to a pharmacy evaluation. Even if the right preventive treatments are matched to the right crime risks, a national before-and-after evaluation of a funding stream would lack vital elements of the scientific method. The lack of a control group makes it impossible to eliminate alternative theories about why national-level crime rates changed, if at all, with the introduction of a widely diverse national program like the Local Law Enforcement Block Grant. Federal funding of local crime prevention, for example, increased by over five hundred percent from 1994 to 1996, and violent crime has fallen steadily during that period. But violent crime started falling in 1992, for reasons that no criminologist can isolate scientifically. Isolating still further the effects of the increased funding in 1994 is not possible with rigorous scientific methods. Thus we could not have evaluated most national DOJ funding programs directly, even if we had been allowed several years or decades.

Implications of This Approach

The choice to start with the available science on local programs rather than the DOJ funding mechanism programs has important implications. One limitation is the report's unavoidable bias towards well-researched programs. One advantage is that the report becomes a reference source for different legislative approaches to federal funding. The approach also becomes a demonstration of how unevenly evaluation science can proceed, and the need for clear distinctions between science and policy analysis.

Bias Towards Well-Researched Programs. The report clearly emphasizes strategies that have received substantial research attention, regardless of their merits in receiving that attention. To the extent that the rigorous science has been focused on less promising crime prevention strategies, both the report and public policymaking are at a disadvantage. The alternative might have been to rely more on theoretical science and less on empirical results. The obvious danger in that course, however, is a risk of losing the objectivity required for reliable assessments. On balance, then, the decision to focus on the strongest scientific evidence seems to be the most useful and least problematic approach available.

A Reference for Diverse Approaches to Federal Funding. Letting science guide the report around local programs may help the findings to have more lasting value. Organizing the evidence around theories and data will provide a reference for many different possible approaches to federal funding of local programs. While the structure of federal funding changes almost annually, the results of program evaluations accumulate steadily over long time periods. While the NIJ solicitation asked for special emphasis to be placed on evaluations completed in the last five years, many of the most important evaluation results are older than that. Omitting those earlier studies from the analysis would have substantially and inappropriately altered the conclusions reached. Similarly, Congressional deliberations on crime prevention policy can benefit from a reference source organized around the basic institutional settings for local crime prevention: communities, families, schools, labor markets, specific places, police, and criminal justice.

The Uncertainty of Science. Guiding the report with available findings offers a more realistic picture of what evaluation science is able to achieve. As the U.S. Supreme Court recently concluded, hypotheses about cause and effect cannot be "proven" conclusively like a jury verdict; they can merely be falsified using a wide array of methods that are more or less likely to be accurate.5 A Nobel Laureate observes that "Scientists know that questions are not settled; rather, they are given provisional answers..."6 Science is a constant state of double jeopardy, with repeated trials often reaching contradictory results. Fulfilling the mandate to evaluate will always result in an uneven growth of evaluation results, not permanent guidance. This report directly confronts the problems of mixed results from methods of varying scientific rigor, and attempts to develop decision rules for applying the findings to both research and program policy. These rules may have value not just for this report. They may also help advance the Congressional mandate to evaluate beyond the nonscientific concept of "proven" effectiveness to the scientific concept of "likely" effectiveness.

This problem of accurately predicting the effects of a program wherever it may be implemented is an important limitation on using evaluations in policy analysis. Generalizing results from an evaluation in one city to the effects of a program in another city is a very uncertain enterprise. We still lack good theories and research to predict when findings can be accurately generalized. Just as the Justice Department may fund different kinds of community policing programs, the same program may be very different in different places. The nature of a "drug court" may vary enormously from one judge to the next, community policing home visits may vary from friendly to intrusive, and gang prevention programs may have different effects in different kinds of neighborhoods or ethnic groups. This uncertainty is best acknowledged, and addressed through ongoing evaluation even of programs with enough evidence to be judged "likely" to "work."

Science Versus Policy Analysis. The focus on scientific results should help the reader distinguish between the report's science and its policy analysis. The distinction is crucial. Even though scientific evaluation results are a key part of rational policy analysis, those results cannot automatically select the best policy. This is due not just to the scientific limitations of generalizing results from one setting to the next. Another reason is that evaluations often omit key data on cost-benefit ratios; the fact that a program is "effective" may be irrelevant if the financial or social costs are too high. This report attempts, where possible, to distinguish summaries of science from their application to policy issues using judgment and other sources of information outside the evaluation results. We expect that there will be less consensus about the policy analysis than about the scientific findings. But we also determined after extensive deliberation that recommendations based on policy analysis were a useful addition to the purely scientific summaries that form the core of the report.

The framework adopted for this report is not the only possible way to have responded to the Congressional request. There are legitimate differences of opinion about how best to use scientific methods for this kind of analysis. Some analysts have argued for a more "flexible" approach to program evaluation, with more emphasis on expert insight and less emphasis on whether a program "works" (Pawson and Tilley, 1994). Others call for less reliance on evaluation results that have less rigorous measurement of program context and other data needed to assess the generalizability of results (Ekblom and Pease, 1995). Our own preference would have been to raise the cutoff point for defining "scientific" methods much higher than we actually did (see Chapter Two). On balance, however, this approach provides an acceptable compromise between the Congressional needs for information and the scientific strength of available evidence.

There are also multiple goals for the $4 billion annual funding described in this report, which may be valuable for other reasons besides its scientifically measurable effectiveness in preventing crime. The focus on crime prevention excludes the very important goals of justice, fairness and equality under the law. That limitation is not inherent in the science of program evaluation; it is merely a function of the boundaries of the specific mandate for this report.

LOCAL CRIME PREVENTION AND THE DEPARTMENT OF JUSTICE

The policy context for this report is the current structure of local crime prevention assistance programs funded by the U.S. Department of Justice. This section provides a brief introduction to those programs. It begins with a summary of the appropriated budgets for local crime prevention in fiscal year 1996, the year the Congress requested this report. It then describes the administrative structure of the Justice Department offices administering those funds. It concludes with a brief discussion of the types of funding mechanisms Congress has created for distributing the funding, and briefly details the focus and mechanisms of the largest of the funding programs.7

Budget

Local crime prevention offices now receive more DOJ funding than at any time in American history, a larger budget than the FBI, the DEA, or the INS. Among all DOJ components, only the Federal Bureau of Prisons consumes a larger share of the budget. At $4 billion per year, the combined annual budget of the $1.4 billion administered by the Director of the COPS (Community-Oriented Policing Services) Office and the $2.6 billion administered by the Assistant Attorney General for OJP (the Office of Justice Programs) is more than five times the amount the Congress allocated in the peak years of the old Law Enforcement Assistance Administration.

Not all of these funds can be classified as having crime prevention purposes. The largest of these programs, the 1994 Crime Act's Title I Community Policing grants, does not even specify the prevention of youth violence as a legislative purpose of the funding, even though many observers would expect youth violence prevention to result from the program. The definition of crime prevention as an intention or a result is a major issue addressed in Chapter Two, which explains this report's rationale in using a definition focused on results. This definition thus clearly includes the 100,000 police. But even that broad definition does not include the $300 million State Criminal Alien Assistance Program, reimbursing states for housing 38,000 illegal aliens incarcerated for felony offenses, or the $31 million Public Safety Officers Benefits program for families of police slain in the line of duty. Nor does it include infrastructure programs for courts and computerization of criminal justice records, general programs of statistics, research and evaluation, services to victims of crime, the Police Corps, or general administrative costs. As Figure 1-1 shows, the major crime prevention funding programs within DOJ added up to about 85% of the $4 billion total appropriations for the two local assistance offices (OJP and COPS), or about $3.4 billion. The historical context of these appropriations levels is indicated in Figure 1-2, which shows the three-decade trends in total DOJ funding of its local crime prevention assistance offices (including services other than crime prevention).

The Department of Justice funding of local programs which may result in crime prevention is authorized under several different Acts of Congress. The Juvenile Justice and Delinquency Prevention Act is the oldest, having continued in force after the end of the Law Enforcement Assistance Administration. The Anti-Drug Abuse Act of 1988 authorized the Byrne Grants program to the states, followed by the 1994 Crime Act, which took the local prevention funding to its current historic heights. The five principal titles of the 1994 Act include Public Safety and Policing (Title I), Prisons (Title II), Crime Prevention (Title III), Violence Against Women (Title IV), and Drug Courts (Title V). While this report treats all five titles as falling within a results-based scientific definition of crime prevention, it is worth noting that the Congress has never appropriated any funds specifically labeled as "crime prevention" under Title III. Both the 1988 Anti-Drug Abuse Act and the 1996 Omnibus Appropriations Act, however, appropriated funds allowing grants to be made in a "purpose area" labeled crime prevention.

Figure 1-1

Major DOJ Crime Prevention Funding Programs

OFFICE & BUREAU / FUNDING PROGRAMS / FY 1996 FUNDING

Community-Oriented Policing Services
    100,000 Local Police -- $1.4 Billion

Office of Justice Programs

Bureau of Justice Assistance
    Local Law Enforcement Block Grant Formula Program -- $488 Million
    Byrne Memorial State and Local Law Enforcement Assistance Formula Program -- $475 Million
    Byrne Discretionary Grants Program -- $32 Million
        (Boys and Girls Clubs Earmark -- $4 Million)
        (Nat'l. Crime Prevention Council Earmark -- $3 Million)
        (DARE Drug Abuse Prevention Earmark -- $2 Million)

Office of Juvenile Justice and Delinquency Prevention
    Juvenile Justice Formula Grant Program -- $70 Million
    Competitive Grants Programs -- $69 Million

Executive Office of Weed and Seed
    Operation Weed and Seed -- $28 Million

Violence Against Women Grants Office
    STOP (Services, Training, Officers, and Prosecution) Violence Against Women Formula Grant Program -- $130 Million
    Rural Domestic Violence Enforcement -- $7 Million
    Encourage Arrest Program -- $28 Million

Corrections Program Office
    Residential Substance Abuse Treatment -- $27 Million
    Violent Offender Truth in Sentencing Prison Construction Formula Grants -- $405 Million

Drug Courts Program Office
    Drug Courts Competitive Grants -- $15 Million

Total Major Funding -- $3.2 Billion

Administrative Structure

The administration of these various programs under various Acts is organized into two separate offices. One of these--the Office of Community-Oriented Policing Services--has a single large program and a single presidential appointee. The other--the Office of Justice Programs--has numerous programs ranging widely in size, managed by an Assistant Attorney General, two Deputy Assistant Attorneys General, and five Presidentially appointed directors or administrators of the following units: the Bureau of Justice Assistance (BJA), the Bureau of Justice Statistics (BJS), the National Institute of Justice (NIJ), the Office of Juvenile Justice and Delinquency Prevention (OJJDP), and the Office for Victims of Crime (OVC). In addition, several other OJP offices manage funding under separate Titles of the 1994 Crime Act: the Corrections Programs Office, the Office for Drug Courts, and the Violence Against Women Grants Office. The OJP Executive Office of Weed and Seed is supported by transfers of BJA Byrne Discretionary Grant appropriations under the 1988 Anti-Drug Abuse Act. Figure 1-1 summarizes the administrative and programmatic structure of the agencies administering the major local crime prevention programs. NIJ and BJS do not administer major local assistance grants for crime prevention purposes, although BJS does assist states in their implementation of the data systems requirements for compliance with the Brady Act. The Office for Victims of Crime is funded by fines collected by federal courts, and provides funding mostly for repairing the harm caused by crime; a few areas of potential crime prevention effects from OVC funding, such as its support for battered women's shelters, are noted in Chapter Four.

Funding Mechanisms: Formula, Discretionary, Earmarks, Competitive

The crucial point in understanding DOJ local crime prevention funding programs is the statutory plan for allocating the funding. The "funding mechanisms" of this plan vary across the different authorization Acts, and use different criteria even within each funding mechanism depending on the specific Act. Two basic types of funding mechanisms are "formula" or "block" grants versus "discretionary" grants. Many observers and grant recipients incorrectly assume these labels mean that local units are entitled to their funding under formula grants, while DOJ executives decide how to administer the discretionary grants. That assumption is incorrect. There are substantial legislative requirements constraining DOJ's allocation of "discretionary" funds, and there are also various legislative requirements that grantees must satisfy in order to become eligible to receive their "formula" funding.

The so-called Discretionary programs are constrained by Congress in three ways: earmarks, eligibility criteria, and competition. Earmarks are legislative directions in the Appropriations laws (as distinct from Authorization Acts) on how to spend certain portions of funds appropriated within a larger funding program, such as the $11 million earmark for Boys and Girls Clubs within the 1996 appropriation for the BJA Local Law Enforcement Block Grant Program and the $4.35 million earmark for the same organization under the Byrne Discretionary grants. Earmarks are both "hard" and "soft." Hard earmarks are written into legislation, usually with specific amounts to be spent and the specific recipient of the funding identified. Soft earmarks are based upon committee hearings and conference reports, such as the legislation for the present report, with or without specified amounts.

Eligibility criteria programs are only "discretionary" in the sense that DOJ officials must decide whether the applicants are eligible to receive the funds for which they apply. The applicants do not receive the funds unless they apply and can demonstrate their eligibility in the application. Congress often requires, for example, that states pass certain state laws as a condition of eligibility for receiving federal funds under certain grant programs. The most famous example is perhaps the limitation of maximum state speed limits to 55 miles per hour that was for two decades an eligibility requirement for receiving federal highway construction funding. Similarly, the 1994 Crime Act makes state passage of "Truth-in-Sentencing" legislation an eligibility requirement for prison construction grants. Once DOJ has proof of program eligibility, however, the determination of how much funding the applicant receives must follow the statutory allocation plan. All those receiving funds do so on the basis of a "formula" that may be based on population, crime rates, prison overcrowding rates or other factors. In addition, certain minimum amounts are often reserved for jurisdictions of certain size irrespective of the formula, such as the requirement that half of all funding for the 100,000 police be allocated to applicants from cities of over 150,000 people. In that particular case, the allocation is made at least in part on a first-come, first-served basis.8 Thus a more accurate label for such funding mechanisms might be "discretionary eligibility formula grants."

Only ten percent of the total OJP appropriation is for competitive grants, the truly discretionary programs in which applicants must compete on the merits of issues other than simple eligibility for funding. DOJ officials usually establish different criteria appropriate for each program. Examples of criteria for these grants include innovative approaches, interagency collaboration, comprehensive targeting of crime risk factors, and potential impact of the program on the community. Examples of competitive local assistance programs include Drug Courts, Operation Weed and Seed, JUMP mentoring grants and Encourage Arrest Grants.

Formula grant programs, in contrast to discretionary programs, have no so-called "eligibility" requirements, such as the passage of state laws. The allocation of funding is independent of such tests. Formula programs can, however, require that certain paperwork be satisfactorily completed. BJA Byrne grants, for example, require that an annual plan specify how the formula-determined allocation will be spent, and that evaluations of all grants made with formula allocations be forwarded to BJA. Failure to satisfy these requirements presumably has the same effect as in "discretionary eligibility" programs, which is to block the award of the funds.

These funding mechanisms offer relatively little discretion to DOJ in its choice of program areas or sites, but offer substantial discretion to the state and local grant recipients. That policy choice is central to a continuing Congressional debate. Its relevance to this report is to show the centrality of the local programs chosen by the grant recipients in determining the effectiveness of this funding. It is the local decisions on which prevention programs to adopt, and not the Congressionally mandated actions by DOJ in allocating that funding, which largely determine the effectiveness of these broad funding streams in preventing crime.

Major Funding Stream Programs

This section briefly describes the major DOJ funding stream programs listed in Figure 1-1.

COPS. This program reimburses local police agencies for up to 75% of the salary and benefits of an additional police officer for three years, up to a maximum of $75,000 per officer. It is a discretionary-eligibility-formula grant program in which funding is allocated on the basis of eligible applicant population size, with a minimum allocation requirement that 50 percent of the funds go to police departments serving cities of over 150,000 people. In addition to this "Universal Hiring Program" to which the Congress has restricted appropriations in 1997, the earlier years of the program offered various competitive grant programs for domestic violence, youth firearms, anti-gang initiatives, and other special purposes.
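The reimbursement cap described above is simple arithmetic. The following sketch illustrates it; the function name and inputs are hypothetical conveniences for illustration, not part of the program's actual administrative rules:

```python
def cops_federal_share(three_year_salary_and_benefits: float) -> float:
    """Illustrative arithmetic only: the federal share covers up to 75%
    of an officer's three-year salary and benefits, capped at $75,000
    per officer, whichever is lower."""
    return min(0.75 * three_year_salary_and_benefits, 75_000.0)
```

For an officer costing $120,000 over three years, 75 percent would be $90,000, so the $75,000 cap binds; for an officer costing $80,000, the federal share is the uncapped $60,000.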

Byrne (BJA). The 1988 Anti-Drug Abuse Act established both formula and discretionary grant programs in memory of New York City Police Officer Edward Byrne, who was murdered while monitoring a crack house. The formula program awards funds to states developing plans for allocating grants, originally under 21 and now under 26 purpose areas: 1) drug demand reduction programs involving police, 2) multijurisdictional task forces against drugs, 3) domestic drug factory targeting, 4) community crime prevention, 5) anti-fencing programs, 6) white-collar and organized crime enforcement, 7) law enforcement effectiveness techniques, 8) career criminal prosecution, 9) financial investigations, 10) court effectiveness, 11) correctional effectiveness, 12) prison industries, 13) offender drug treatment, 14) victim-witness assistance, 15) drug control technology, 16) innovative enforcement, 17) public housing drug markets, 18) domestic violence, 19) evaluations of drug control programs, 20) alternatives to incarceration, 21) urban enforcement of street drug sales, 22) DWI prosecution, 23) juvenile violence prosecution, 24) gang prevention and enforcement, 25) DNA analysis, 26) death penalty litigation. While each state is eligible to receive a minimum of 0.25 percent of total appropriations, the balance is allocated on the basis of state population as a proportion of the entire U.S. All Byrne funds must be matched by a 25% commitment of non-federal funds.
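The Byrne formula allocation described above (a 0.25 percent minimum per state, with the balance divided by population share) can be sketched as follows. The function and variable names are hypothetical, and the sketch simplifies by ignoring territories and the 25 percent match:

```python
def byrne_formula_allocations(total_appropriation: float,
                              state_populations: dict) -> dict:
    """Illustrative sketch of the allocation rule: each state gets a
    0.25% base share of total appropriations, and the remaining
    balance is distributed in proportion to state population."""
    base = 0.0025 * total_appropriation
    balance = total_appropriation - base * len(state_populations)
    national_pop = sum(state_populations.values())
    return {state: base + balance * pop / national_pop
            for state, pop in state_populations.items()}
```

Under this rule, the base shares guarantee small states a floor, while the population-proportional balance dominates the allocation for large states.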

The BJA Byrne Discretionary Grants program is heavily earmarked for initiatives such as those indicated in Figure 1-1 (e.g., Boys and Girls Clubs, DARE), as well as for long-standing programs familiar to Congress, such as Weed and Seed (see below). Over 5 percent of Byrne discretionary funds ($3.1 million) went to program evaluation purposes in FY 1996, with another $3.5 million allocated to program evaluation by the States from their formula grants.

Local Law Enforcement Block Grants (BJA). This is a formula grant program that awards funds to applying local governments based on their share of their state's total Part I violent offenses (homicide, rape, robbery, aggravated assault) over the previous three years. The eight purpose areas for local expenditure of the grants are 1) police hiring, 2) police overtime, 3) police equipment and technology, 4) school security measures, 5) drug courts, 6) violent offender prosecution, 7) multijurisdictional task forces, and 8) community crime prevention programs involving police-community collaboration.
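The crime-based allocation rule above can likewise be sketched as a proportional division; the jurisdiction names and offense counts below are hypothetical illustrations, not actual UCR data.

```python
def llebg_allocations(total_funds, violent_offenses_3yr):
    """Sketch of the Local Law Enforcement Block Grant formula: each
    applying local government's award is proportional to its share of
    the state's total Part I violent offenses over the previous three
    years."""
    state_total = sum(violent_offenses_3yr.values())
    return {
        jurisdiction: total_funds * offenses / state_total
        for jurisdiction, offenses in violent_offenses_3yr.items()
    }

# Hypothetical: $10 million divided among three jurisdictions by their
# three-year Part I violent offense counts.
awards = llebg_allocations(
    10_000_000, {"City X": 6_000, "City Y": 3_000, "County Z": 1_000}
)
```

Unlike the population-based formulas, this rule directs funds toward the jurisdictions with the heaviest recent violent crime loads.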

STOP Violence Against Women Block Grants (VAWGO). This is a formula grant program allocating funding to states and territories based upon population. Within each state, at least 25 percent of the grant funds must be allocated to each of three categories: law enforcement, prosecution, and victim services. A wide range of programs fall within each of these categories, including both domestic and stranger violence against women.

Encourage Arrest Grants (VAWGO). This is a competitive program for which eligibility is determined by the passage of certain state laws concerning the arrest of suspects about whom there is probable cause to believe they have committed an act of domestic violence or a related offense. These grants are intended to encourage communities to adopt innovative, coordinated practices that foster collaboration among law enforcement officers, prosecutors, judges, and victim advocates to improve the response to domestic violence.

Operation Weed and Seed (EOWS). This is a competitive program funded by a transfer of BJA discretionary Byrne funding to the OJP Executive Office of Weed and Seed. The program consists of long-term funding to a varying number of selected cities to help them create a comprehensive program of reducing crime in small, high-crime areas. The DOJ funding operates as seed money leveraging additional federal, state, local and private resources.

Juvenile Justice Formula Grants (OJJDP). This program provides annual funding to eligible states to deinstitutionalize status offenders, separate juveniles and adults in secure correctional facilities, jails and lockups, and to reduce the number of juveniles in secure facilities.

Prison Construction Grants (Corrections Office). This program provides funds to states to build more prison cells or to construct less expensive space for nonviolent offenders, to free space in secure facilities for more violent offenders.

Residential Correctional Drug Abuse Treatment (Corrections Office). This program funds state prison delivery of substance abuse treatment to inmates.

THE STATUTORY PLAN FOR PROGRAM IMPACT EVALUATION

In theory, one of the most effective federal crime prevention programs is the evaluation of local programs. The Attorney General's Task Force on Violent Crime called it the central role of the federal government in fighting crime, the one function that could not be financed or performed as efficiently at the local level.9 With less than one percent of local criminal justice budgets supported by the federal government (not counting the COPS program), federal funds are arguably most useful as a stimulus to innovation that makes the use of local tax dollars more effective (Dunworth et al., 1997). The three-decade-old Congressional mandate to evaluate is consistent with that premise. Its implication is that a central purpose of federal funding of operations is to provide strong evaluations.

The Congressional mandate for this report therefore includes an evaluation of the effectiveness of DOJ-funded program evaluation itself. The central question is whether those evaluations have "worked" as a federal strategy for assisting local crime prevention. The report answers that question in a different fashion from the method used to evaluate the direct local assistance funding. Rather than directly evaluating the impact of program evaluations on crime, the report indirectly examines the antecedent question of whether those evaluations have succeeded in producing published and publicly accessible scientific findings about what works to prevent crime. After presenting the scientific framework for the review in Chapter Two, the report presents the evidence for both program and evaluation effectiveness in Chapters Three through Nine. Chapter Ten then summarizes the limited evidence on local program effects, and returns to the underlying issue of how to accomplish the Congressional mandate to evaluate.

This report concludes that the current statutory plan for accomplishing that mandate is inadequate, for scientific reasons not addressed by current legislation. That inadequacy substantially limits the capacity to judge the effectiveness of the federal effort to reduce serious crime and youth violence. Part of the statutory problem is simply inadequate funding. While Figure 1-2 shows the steep rise in total federal support for local crime prevention operations, Figure 1-3 shows a rough indication of the declining proportionate support for research and evaluation: the percentage of total OJP appropriations allocated to the National Institute of Justice.

Figure 1-3 actually overstates the amount of DOJ funding allocated to program evaluations. Program evaluations are also funded by OJJDP and BJA,10 and actual NIJ expenditure in FY 1996 was $99 million rather than $30 million (due to inter-agency transfers).11 But Figure 1-3 reflects the total NIJ budget for all research, technical assistance, and dissemination purposes, as well as for program evaluation; only 27 percent ($8 million) of NIJ's FY 1996 appropriation was allocated to evaluation. The proportionate allocation of the NIJ budget to evaluation has not changed substantially over the past three decades. Thus while Figure 1-3 overstates the absolute dollars DOJ has been appropriated for evaluation, it is still an accurate portrayal of the absence of statutory attention to keeping evaluation funding commensurate with operational funding.

Evaluation funding alone, however, cannot increase the strength of scientific evidence about the effects of federally funded local programs on crime. Chapter Ten documents the need for adequate scientific controls on the expenditures of program funds in ways that allow careful impact evaluation. A statutory plan earmarking a portion of operational funds for strong scientific program evaluation is the only apparent means for increasing the effectiveness of federal funding with better program evaluations. The basis for this conclusion is central to scientific thinking about crime prevention, as the next chapter shows.

NOTES

1. 42 U.S.C. 3782 Sec. 801 (b) (1), (19), (20).

2. U.S. Attorney General's Task Force on Violent Crime, 1981, p. 73.

3. In 1988, for example, more than 30 big city police chiefs asked Congress to earmark ten percent of the Anti-Drug Abuse Act funds for research and evaluation. While Titles I and II of the 1994 Crime Act authorize DOJ to spend up to 3 percent of funds for assorted purposes including evaluation, there has never been a requirement to spend a percentage of operational funds exclusively on program impact evaluations demonstrating crime prevention effectiveness.

4. 104th Congress, First Session, House of Representatives, Report 104-378.

5. Daubert v. Merrell Dow Pharmaceuticals, Inc., 113 S. Ct. 2786, 125 L. Ed. 2d 469 (1993), in which the Court adopts the scientific framework offered by Karl Popper, Conjectures and Refutations: The Growth of Scientific Knowledge, 5th Ed., 1989.

6. David Baltimore, "Philosophical Differences," The New Yorker, January 27, 1997, p. 8.

7. This section is based largely upon a January 17, 1997 NIJ background memorandum from Jane Wiseman to Christy Visher prepared at the University of Maryland's request.

8. U.S. Department of Justice, Office of Community Oriented Policing Services, COPS Facts: "Cops More '96." Update September 18, 1996.

9. Attorney General's Task Force on Violent Crime, Report, 1981; James Q. Wilson, "What, if Anything, Can the Federal Government Do About Crime?" Presentation in the Lecture Series on Perspectives on Crime and Justice, sponsored by the National Institute of Justice with support from the Edna McConnell Clark Foundation, December 1996.

10. Total BJA expenditures on program evaluation in FY 1996 were $6.6 million.

11. Actual NIJ expenditures on all purposes included transfers authorized by the Assistant Attorney General for the Office of Justice Programs from Crime Act appropriations of $15.6 million in FY 1995 and $51.9 million in FY 1996.

REFERENCES

Blumstein, Alfred, Jacqueline Cohen, and Daniel Nagin (eds.)

1978 Deterrence and Incapacitation: Estimating the Effects of Criminal Sanctions on Crime Rates. Washington, DC: National Academy of Sciences.

Cohen, Jacob

1977 Statistical Power Analysis for the Behavioral Sciences. New York: Academic Press.

Ekblom, Paul and Ken Pease

1995 Evaluating Crime Prevention. In Michael Tonry and David Farrington, eds., Building a Safer Society: Strategic Approaches to Crime Prevention. Crime and Justice, Vol. 19. Chicago: University of Chicago Press.

Feeley, Malcolm and Austin Sarat

1980 The Policy Dilemma: Federal Crime Policy and the Law Enforcement Assistance Administration. Minneapolis: University of Minnesota Press.

Kelling, George and Catherine Coles

1996 Fixing Broken Windows. New York: Free Press.

Pawson, Ray and Nick Tilley

1994 What Works in Evaluation Research. British Journal of Criminology 34: 291-306.

Reiss, Albert J., Jr. and Jeffrey Roth (eds.)

1993 Understanding and Preventing Violence. Washington, DC: National Academy of Sciences.

Skogan, Wesley

1990 Disorder and Decline. New York: Free Press.

Weisburd, David with Anthony Petrosino and Gail Mason

1993 Design Sensitivity in Criminal Justice Experiments: Reassessing the Relationship Between Sample Size and Statistical Power. In Michael Tonry and Norval Morris, eds., Crime and Justice, Vol. 17. Chicago: University of Chicago Press.

Chapter Two

THINKING ABOUT CRIME PREVENTION

Lawrence W. Sherman

How effective at preventing crime are local programs with funding from the US Department of Justice? That question can only be answered in the context of a comprehensive scientific assessment of crime prevention in America. That assessment shows that most crime prevention results from the web of institutional settings of human development and daily life. These institutions include communities, families, schools, labor markets and places, as well as the legal institutions of policing and criminal justice. The vast majority of resources for sustaining those institutions comes from private initiative and local tax dollars. The resources contributed to these efforts by the federal government are almost negligible in comparison. The potential impact on local crime prevention of federally supported research and program development, however, is enormous.

The logical starting point for assessing the current and potential impact of federal programs is the scientific evidence for the effectiveness of crime prevention practices in each institutional setting. This requires, in turn, great attention to the enormous variation in the strength of scientific evidence on each specific practice or program. In general, far too little is known about the impact of crime prevention practices, regardless of how they are funded. But thanks largely to evaluations sponsored by the National Institute of Justice (NIJ), the Office of Juvenile Justice and Delinquency Prevention (OJJDP) and other federal agencies, the body of scientific evidence has grown much stronger in the past two decades. Most important, it has shown a steadily increasing capacity to provide very strong scientific evidence, even while most program evaluations remain so weak as to be scientifically useless.

The growing scientific evidence that federal support has produced allows us to assess some programs more intensively than others. Some of the evidence is strong enough to identify some effective and ineffective practices or programs in most institutional settings. Some evidence is more limited, but clearly points to some promising initiatives that merit further research and development. Reviewing this evidence in each of the seven institutional settings provides the strongest possible scientific basis for responding to the Congressional mandate. By separating the question of effectiveness from the question of funding, we map out the entire territory of crime prevention knowledge (including the many uncharted areas). That, in turn, provides a basis for locating both current and future Justice Department programs on that map.

Chapters Three through Nine of the report each examine the evidence in one institutional setting at a time. Each chapter draws scientific conclusions about program effectiveness, then uses those findings to suggest policy recommendations for both current programs and further research. Chapter Ten then assembles the major findings into the Congressionally-mandated assessment of the effectiveness of DOJ crime prevention programs. It concludes the report with the implications of the assessment for the federal role in generating just such evidence, and suggests a statutory plan for improving scientific knowledge about effective crime prevention methods.

This chapter provides the four cornerstones on which the report is based. One is the crucial difference between the political and scientific definitions of crime prevention. Making this distinction at the outset is essential for meeting the Congressional mandate for a scientific assessment. It also helps us clarify other key concepts in thinking about crime prevention.

A second cornerstone is the web of institutional settings in which crime prevention effects are created every day all over the nation, mostly without any taxpayer involvement at all. From childhood moral education to employee criminal history checks, there is a tight social fabric holding most people back from committing crimes most of the time. Yet there are many holes and thin spots in that social fabric that crime prevention programs might, and sometimes do, address.

The third cornerstone is the logical basis for separating scientific wheat from chaff, or strong scientific evidence from weak or useless data. Not all crime prevention evaluations are created equal, but we must be clear about the rules of evidence.

The fourth and final cornerstone is the history and current status of the federal role in guiding and funding local crime prevention. The distinction between those functions should be kept in mind in any discussion of the implications of crime prevention research for federal policy.

KEY CONCEPTS IN CRIME PREVENTION

Crime prevention is widely misunderstood. The national debate over crime often treats "prevention" and "punishment" as mutually exclusive concepts, polar opposites on a continuum of "soft" versus "tough" responses to crime: midnight basketball versus chain gangs, for example. The science of criminology, however, contains no such dichotomy. It is as if a public debate over physics had drawn a dichotomy between flame and matches. Flame is a result. Matches are only one tool for achieving that result. Other tools besides matches are well known to cause fuel to ignite into flame, from magnifying glasses to tinder boxes.

Similarly, crime prevention is a result, while punishment is only one possible tool for achieving that result. Both midnight basketball and chain gangs may logically succeed or fail in achieving the scientific definition of crime prevention: any policy which causes a lower number of crimes to occur in the future than would have occurred without that policy.1 Some kinds of punishment for some kinds of offenders may be preventive, while others may be "criminogenic" or crime-causing, and still others may have no effect at all. Exactly the same may also be true of other programs that do not consist of legally imposed punishment, but which are justified by a goal of preventing crime.

Crime prevention is therefore defined not by its intentions, but by its consequences. These consequences can be defined in at least two ways. One is by the number of criminal events; the other is by the number of criminal offenders (Hirschi, 1987). Some would also define it by the amount of harm prevented (Reiss and Roth, 1993: 59-61) or by the number of victims harmed or harmed repeatedly (Farrell, 1995). In asking the Attorney General to report on the effectiveness of crime prevention efforts supported by the Justice Department's Office of Justice Programs, the U.S. Congress has embraced an even broader definition of crime prevention: reduction of risk factors for crime (such as gang membership) and increases in protective factors (such as completing high school)--concepts that a National Academy of Sciences report has labeled as "primary" prevention (Reiss and Roth, 1993: 150). What all these definitions have in common is their focus on observed effects, and not the "hard" or "soft" content, of a program.

Which definition of crime prevention ultimately dominates public discourse is a critically important factor in Congressional and public understanding of the issues. If the crime prevention debate is framed solely in terms of the symbolic labels of punishment versus prevention, policy choices may be made more on the basis of emotional appeal than on solid evidence of effectiveness. By employing the scientific definition of crime prevention as a consequence, this report responds to the Congressional mandate to "employ rigorous and scientifically recognized standards and methodologies."2 This report also attempts to broaden the debate to encompass the entire range of policies we can pursue to build a safer society. A rigorously empirical perspective on what works best is defined by the data from research findings, not from ideologically driven assumptions about human nature.

Bringing more data into the debate has already altered public understanding of several other complex issues. The prevention of disease, for example, has gained widespread public understanding of the implications of new research findings, especially those about lifestyle choices (like smoking, diet and exercise) that people can control themselves. The prevention of injury through regulation of automobile manufacturers has increasingly been debated in terms of empirically observed consequences, rather than logically derived theories; the safety of passenger-side airbags, for example, has been debated not just in terms of how they are supposed to work, but also in terms of data on how actual driver practices make airbags increasingly cause the deaths of young children.3 Emotional and ideological overtones of personal freedom and the role of government clearly affect debates about disease and injury prevention, but scientific evidence appears to have gained the upper hand in those debates.

Similarly, the symbolic politics of crime prevention could eventually give way to empirical data in policy debates (Blumstein and Petersilia, 1995). While the emotional and symbolic significance of punishment can never be denied, it can be embedded in a broader framework of crime prevention institutions and programs that allows us to compare value returned for money invested (Greenwood, et al, 1996). Even raising the question of cost-effectiveness could help focus policy-making on empirical consequences, and their implications for making choices among the extensive list of crime prevention efforts.

The value of a broad framework for analyzing crime prevention policies is its focus on the whole forest rather than on each tree. Most debates over crime prevention address one policy at a time. Few debates, either in politics or in criminology, consider the relative value of all prevention programs competing for funding. While scientific evidence may show that two different programs both "work" to prevent crime, one of the programs may be far more cost-effective than another. One may have a stronger effect, cutting criminal events by 50% while the other cuts crimes by only 20%. Or one may have a longer duration, reducing crimes among younger people whose average remaining lifetime is 50 years, compared to a program treating older people with an average remaining life of twenty years. A fully informed debate about crime prevention policy choices requires performance measures combining duration and strength of program effect. While such accurate measures of "profitability" and "payback" periods are a standard tool in business investment decisions, they have been entirely lacking in crime prevention policy debates.
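The kind of performance measure the paragraph above calls for, combining the strength and duration of a program's effect, can be illustrated with a hypothetical calculation. All figures below (the baseline of 100 crimes per year and the $1 million program cost) are illustrative assumptions; the reduction percentages and time horizons are those used in the text.

```python
def crimes_prevented(baseline_crimes_per_year, percent_reduction,
                     duration_years):
    """A hypothetical performance measure: total crimes prevented over
    the duration of a program's effect, assuming a constant annual
    effect throughout that period."""
    return baseline_crimes_per_year * percent_reduction * duration_years

# A 50% reduction among younger people (50-year remaining lifetime)
# versus a 20% reduction among older people (20-year remaining lifetime),
# each against an assumed baseline of 100 crimes per year.
young = crimes_prevented(100, 0.50, 50)
old = crimes_prevented(100, 0.20, 20)

# "Return on investment": cost per crime prevented, assuming each
# program costs $1 million.
cost_per_crime_young = 1_000_000 / young
cost_per_crime_old = 1_000_000 / old
```

Even this crude sketch shows how combining strength with duration can change a comparison: the first program prevents more than six times as many crimes per dollar, a difference neither the reduction percentage nor the duration reveals on its own.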

Yet comparative measurement is not enough. Simply comparing the return on investment of each crime prevention policy to its alternatives can mask another key issue: the possible interdependency between policies, or the economic and social conditions required for a specific policy to be effective. Crime prevention policies are not delivered in a vacuum. A Head Start program may fail to prevent crime in a community where children grow up with daily gunfire. A chain gang may have little deterrent effect in a community with 75% unemployment. Marciniak (1994) has already shown that arrest for domestic violence prevents crime in neighborhoods with low unemployment and high marriage rates--but arrest increases crime in census tracts with high unemployment and low marriage rates. It may be necessary to mount programs in several institutional settings simultaneously--such as labor markets, families and police--in order to find programs in any one institution to be effective.

One theory is that the effectiveness of crime prevention in each of the seven institutional settings depends heavily on local conditions in the other institutions. Put another way, the necessary condition for successful crime prevention practices in one setting is adequate support for the practice in related settings. Schools cannot succeed without supportive families, families cannot succeed without supportive labor markets, labor markets cannot succeed without well-policed safe streets, and police cannot succeed without community participation in the labor market. These and other examples are an extension of the "conditional deterrence" theory in criminology (Tittle and Logan, 1973; Williams and Hawkins, 1986), which claims that legal punishment and its threat can only be effective at preventing crime if reinforced by the informal social controls of other institutions. The conditional nature of legal deterrence may apply to other crime prevention strategies as well. Just as exercise can only work properly on a well-fed body, crime prevention of all kinds may only be effective when the institutional context is strong enough to support it.

Over a century ago, sociologist Emile Durkheim suggested that "it is shame which doubles most punishments, and which increases with them" (Lukes and Scull, 1983, p. 62). More recently, John Braithwaite (1989) has hypothesized the institutional conditions needed to create a capacity for shame in both communities and individuals. He concludes that shame and punishment have been de-coupled in modern society, and suggests various approaches to restoring their historic link. His conclusions can apply to non-criminal sanctions as well, such as school discipline, labor force opportunities, expulsion from social groups and ostracism by neighbors and family. Conversely, they apply to rewards for compliance with the criminal law, such as respectability, trust, and responsibility. The emotional content of winning or losing these social assets is quite strong in settings where crime prevention works, but weak or counterproductive in what social scientists call "oppositional subcultures." Any neighborhood in which going to prison is a mark of prestige (Terry, 1993) is clearly a difficult challenge for any crime prevention practice.

The community context of crime prevention may need a critical mass of institutional support for informally deterring criminal behavior. Without that critical mass, neither families nor schools, labor markets nor places, police nor prisons may succeed in preventing crime. Each of these institutions may be able to achieve only marginal success on its own. While most American communities seem to offer sufficient levels of institutional support for crime prevention, serious violence is geographically concentrated in a small number of communities that do not. Lowering national rates of violent crime might require programs that address several institutional settings simultaneously, with a meaningful chance of rising to the threshold of "social capital" (Coleman, 1992) needed to make crime prevention work.

To the extent that this theory focuses resources on the relative handful of areas falling below that threshold, that focus can be justified by its benefits for the wider society. Over half of all homicides in the US occur in just 66 cities, with one-quarter of homicides in only eight cities (FBI, 1994). These murders are concentrated in a small number of neighborhoods within those cities. The public health costs of inner-city violence, by themselves, could provide sufficient justification for suburban investment in inner-city crime prevention. If crime can be substantially prevented or reduced in our most desperate neighborhoods, it can probably be prevented anywhere.

By suggesting that the effectiveness of some crime prevention efforts may depend upon their institutional contexts, we do not present a pessimistic vision of the future. While some might say that no program can work until the "root causes" of crime can be cured, we find no scientific basis for that conclusion--and substantial evidence against it. What this report documents is the potential for something much more precise and useful, based on a more open view of the role of scientific evaluation in crime prevention: a future in which program evaluations carefully measure, and systematically vary, the institutional context of each program. That strategy is essential for a body of scientific knowledge to be developed about the exact connections between institutional context and program effectiveness.

We expect that greater attention to the interdependency of institutions may help us discover how to shape many institutional factors simultaneously to prevent crime--more successfully than we have been able to do so far. The apparent failure of a few efforts to do just that does not mean that we should give up our work in that direction. Such failures marked the early stages of almost all major advances in science, from the invention of the light bulb to the development of the polio vaccine. The fact that our review finds crime prevention successes in all seven of the institutional settings suggests that even more trial and error could pay off handsomely. Our national investment in research and development for crime prevention to date has been trivial (Reiss and Roth, 1993), especially in relation to the level of public concern about the problem. Attacking the crime problem on many institutional fronts at once should offer more, not fewer, opportunities for success.

Defining crime prevention by results, rather than program intent or content, focuses scientific analysis on three crucial questions:

1. What is the independent effect of each program or practice on a specific measure of crime?

2. What is the comparative return on investment for each program or practice, using a common metric of cost and crimes prevented?

3. What conditions in other institutional settings are required for a crime prevention program or practice to be effective, or which increase or reduce that effectiveness?

The current state of science barely allows us to address the first question; it tells us almost nothing about the second or third. Just framing the questions, however, reveals the potential contribution that federal support for crime prevention evaluations could offer. That potential may depend, in turn, on a clear understanding of the location of every crime prevention practice or program in a broad network of social institutions.

THE INSTITUTIONAL SETTINGS OF CRIME PREVENTION

Crime prevention is a consequence of many institutional forces. Most of them occur naturally, without government funding or intervention. While scholars and policymakers may disagree over the exact causes of crime, there is widespread agreement about a basic conclusion: strong parental attachments to consistently disciplined children (Hirschi, 1995) in watchful and supportive communities (Braithwaite, 1989) are the best vaccine against street crime and violence. Schools, labor markets and marriage may prevent crime, even among those who have committed crime in the past (Sampson and Laub, 1993), when they attract commitment to a conventional life pattern that would be endangered by criminality. Each person's bonds to family, community, school and work create what criminologists call "informal social control," the pressures to conform to the law that have little to do with the threat of punishment. Informal controls threaten something that may be far more fearsome than simply life in prison: shame and disgrace in the eyes of other people you depend upon (Tittle and Logan, 1973).

The best evidence for the preventive power of informal social control may be the millions of unguarded opportunities to prevent crime which are passed up each day (Cohen and Felson, 1979). Given the fact that most crimes never result in arrest (FBI, 1996), the purely statistical odds are in favor of a rational choice to commit any given crime. The question of why even more people do not commit crime is therefore central to criminology, and has driven many theories (Hirschi, 1969; Cohen and Felson, 1979; Gottfredson and Hirschi, 1990). The extent to which law enforcement can affect the perception of those odds is a matter of great debate (Blumstein, Cohen and Nagin, 1978), as is the question of whether even a low risk of punishment is too high for most people. Yet there is widespread agreement that the institutions of family and community are critically important to crime prevention.

That agreement breaks down when the institutions of family and community themselves appear to break down, creating a vacuum of informal social control that government is then invited to fill up (Black, 1976). Whether police, courts and prisons can fill the gap left by weak families and socially marginal communities is a question subject to debate in both politics and social science. But it may be the wrong question to ask, at least initially. The premise of the question is that the breakdown of the basic institutions of crime prevention is inevitable. Yet for over a century, a wide range of programs has attempted to challenge that premise. Entirely new institutions, from public schools to social work to the police themselves (Lane, 1992), have been invented to provide structural support to families and communities. In recent years, the federal government has attempted a wide range of programs to assist those efforts. Rather than simply assuming their failure, it seems wiser to start by taking stock of their efforts.

Settings, Practices and Programs

Crime prevention is a result of everyday practices concentrated in seven institutional settings. A "setting" is a social stage for playing out various roles, such as parent, child, neighbor, employer, teacher, and church leader. There are many ways to define these settings, and their boundaries are necessarily somewhat arbitrary. Yet much of the crime prevention literature fits quite neatly into seven major institutional settings: 1) communities, 2) families, 3) schools, 4) labor markets, 5) places, 6) police agencies and 7) the other agencies of criminal justice. The definitions of these settings for crime prevention are quite broad, and sometimes they overlap. But as a framework for organizing research findings on crime prevention effectiveness, we find them quite workable.

Crime prevention research examines two basic types of efforts in these seven settings. One type is a "practice," defined as an ongoing routine activity that is well established in that setting, even if it is far from universal. Most parents make children come home at night, most schools have established starting times, most stores try to catch shoplifters, most police departments answer 911 emergency calls. Some of these practices have been tested for their effects on crime prevention. Most have not. Some of them (such as police patrols and school teacher salaries) are funded in part by federal programs. Most are not. Regardless of the source of funding, we define a practice as something that may change naturally over time, but that would continue in the absence of specific new government policies to change or restrict it.

A "program," in contrast, is a focused effort to change, restrict or create a routine practice in a crime prevention setting. Many, but far from all, programs are federally funded. Churches may adopt programs to discourage parents from spanking children or from letting children watch violent television shows and movies. Universities may adopt programs to escort students from the library to their cars in the hours after midnight. Shopping malls may ban juveniles unescorted by their parents on weekend evenings, and police may initiate programs to enforce long-ignored curfew or truancy laws. In time, some programs may turn into practices, with few people remembering the time before the program was introduced.

Perhaps the clearest distinction between programs and practices is found among those programs requiring additional resources. The disciplinary practices of parents, for example, and the hiring practices of employers are largely independent of tax dollars. But calling battered women to notify them of their assailant's imminent release from prison may be a practice that only a federally funded program can both start and keep going. Even police enforcement of laws against drunk driving, in recent years, seems to depend almost entirely on federally funded overtime money (Ross, 1994). Whether these federal resources are "required" is of course a matter of local funding decisions. But in many jurisdictions, many practices begun under federal programs might die out in the absence of continued funding.

These distinctions are important to crime prevention for reasons of evidence: newly funded programs are more likely to be subjected to scientific evaluation