When Trump Budget Director Mick Mulvaney stepped out to defend the Trump budget’s $52 billion in domestic spending cuts, he used the language of evidence. The $3 billion flexible grant for community development was “just not showing any results.” The $2 billion federal program to fund afterschool programs lacked “demonstrable evidence.” Both were slated for elimination.

Critics responded by disputing the facts. They showed that one popular program funded with community development dollars, Meals on Wheels, does have good evidence of improving seniors’ health.

But there is one sense in which Mulvaney was right: there are no rigorous evaluations showing that the two large programs he attacked make a positive impact on their recipients.

Does that justify the cuts? Not even close. In fact, by invoking “evidence” in selective attacks on two among dozens of programs his budget would eviscerate, Mulvaney misses what matters most about how policymaking works – not just evidence-based policymaking, but policy leadership in general.

The two of us have worked for years from both sides of the political aisle to advance the cause of evidence-based policy: rigorously evaluating whether programs work, then using those results to improve programs, to expand them, or when necessary, to cut them off. The most rigorous forms of evaluation, randomized controlled trials, have proved themselves as the cornerstone of American medical research, and today they play an increasing role in evaluating the effectiveness of government programs and changing policies based on the results.

But nothing is worse for the cause of evidence-based policy than selectively citing research to justify destructive wholesale cuts. If we know anything from years working on evidence-based policy, we know that evidence can only go so far. The art of governing means setting priorities for what is worth trying to fix, not simply cutting because we can’t be sure what works. Or to put it differently, governing requires not just sound evidence, but also sound values. Mulvaney’s tactics of exploiting uncertainty and cherry-picking studies threaten the entire endeavor of evidence-based policy.

Consider in turn the two initiatives that Mulvaney attacked.

The first is the Community Development Block Grant program, one funding source for Meals on Wheels. CDBG by design is a flexible source of funding for mayors. The money goes out by a complicated formula that is meant to capture both population and need. For more than 1,200 governments, CDBG funds construction projects, business development, housing rehab, weatherization, and yes, Meals on Wheels. This flexibility minimizes red tape and promotes local control. It also means some governments use the money well and some don’t.

For a formula-based, flexible program like CDBG, you can measure many things: the quality of the targeting, the cost per outcome, the level of fraud. In CDBG, we could improve the targeting, but after 40 years of technocratic reforms by quietly competent people, the program is reasonably efficient.

One thing you can’t measure is whether CDBG as a whole is effective at improving lives. Individual initiatives funded by CDBG, like Meals on Wheels, can be evaluated. But the uses of the entire $3 billion are so diverse that evaluating the program as a whole is a fool’s errand.

To evaluate CDBG as a whole, you’d need to compare outcomes for those getting grants and outcomes for those not getting grants. At the level of the cities receiving funds, not only do most cities get grants, but the grants themselves are too small to transform cities’ outcomes. At the level of funded projects, it’s difficult to identify a comparison group of programs not getting CDBG funding. Even if you could, outcomes are so diverse that it is impossible to consistently measure impacts.

CDBG’s open-ended, formulaic design is a good reason to favor a different programmatic approach. For example, you could argue that grants should be competitive, rather than formula-based, and should be conditioned on systemic change. Actually, President Obama argued for just these types of changes, keying off the “Race to the Top” in education. He budgeted new urban development money for competitive programs like Choice Neighborhoods, not CDBG. Conservative Republicans like Mick Mulvaney—joined by Democratic constituencies like teachers’ unions and urban mayors—opposed these efforts as government overreach. They mostly got their way. So here we are.

As long as we have a program like CDBG, the question of whether it “shows results” will not get a satisfying answer. The real question is whether we as a society want to give money to support community improvement by local governments in areas facing hardship. For 40 years, we have answered yes to that question. If President Trump has a different answer, he should make a forthright argument about priorities. And given his Administration’s support for tax cuts, he should explain why those tax cuts are more important. That is a question of society’s values—not one that “evidence” can answer.

Afterschool programs teach a different lesson. Since the end of the Clinton Administration, the federal government has provided a big dedicated funding stream called 21st Century Community Learning Centers to support afterschool programs. Unlike CDBG, this program funds discrete activities with clearly defined goals. There was a large, rigorous study of that program by the respected firm Mathematica Policy Research, which found that the program had no positive effect on student outcomes.

That study is a real strike against afterschool programs, but there are two caveats. The students in the study were enrolled in programs 15 years ago. A lot has changed since. In part in response to the research, afterschool programs have already increased their academic focus. And the federal authorizing law has changed too. An old study of a different program can only bear so much weight. Some studies of more recent local initiatives have had positive results.

More fundamentally, whatever studies say about the effects of afterschool programs on students, any working parent will tell you that these programs have a broader purpose: to provide a reliable source of childcare. If this is one goal, then evaluations of student outcomes can’t measure the program’s full value.

Some thoughtful critics have argued that we could better serve both students and parents by shifting resources from discrete afterschool programs and into regular schools to extend the school day. There is solid evidence about the power of the extended school day to increase student achievement. Shifting resources toward that approach is indeed an evidence-based policy.

An analogy may help here: Medical research can tell us how to treat Alzheimer’s disease or cancer. It can’t tell us which research deserves more support, or how much. In the same way, no policy evaluation can tell us the right amount to invest in the endeavors of helping kids learn or helping parents manage their lives. And hence there is no evaluation that tells us to cut down on these efforts. And yet cut these efforts is precisely what the Trump budget does. It reduces education spending by $9 billion at the same time as the president is supporting tax cuts. That is a reflection of values. Program evaluation is of no relevance. We can judge that decision as human beings.

Using evaluation to choose among policy approaches is painstaking work that does not fit into sound bites. To borrow from Max Weber, evidence-based policy is the “slow boring of hard boards.” Both the second Bush Administration and the Obama Administration made evaluation investments that are slowly paying off in better efforts to support vulnerable parents, to structure effective schools, and to stop teen pregnancy. The Trump budget threatens to cut rigorous and long overdue evaluations of the nation’s social programs just when they are receiving bipartisan support. And the Trump Administration’s slipshod use of evidence to support its severe cuts threatens to sour the public on the entire endeavor of evidence-based policy making. These would be subtle but lasting defeats.

Robert Gordon is a senior fellow at Results for America. He served as acting deputy director and led evidence-based policy initiatives for the Office of Management and Budget during the Obama Administration. Ron Haskins is a senior fellow at the Brookings Institution. He served as a senior advisor to President George W. Bush for welfare policy and was appointed by Speaker Paul Ryan to co-chair the Evidence-Based Policymaking Commission.