
Because my courses focus on public policy, I often discuss benefit-cost analysis (BCA) in them. While little discussed in public, the central idea is simply to identify all the relevant benefits and costs of a decision, do our best to estimate their values, and then choose the option that provides the greatest net benefits. Hardly a radical idea. It can be useful in disciplining our thinking to be more consistent. Benjamin Franklin employed a version of it in making some of his decisions.
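The arithmetic behind that rule is trivial to write down. Here is a minimal sketch, with entirely hypothetical options and dollar figures:

```python
# The core BCA rule: tally each option's benefits and costs, then pick
# the option with the greatest net benefits. All figures hypothetical.

options = {
    "build_road": {"benefits": 120.0, "costs": 90.0},
    "expand_bus": {"benefits": 80.0, "costs": 45.0},
    "do_nothing": {"benefits": 0.0, "costs": 0.0},
}

def net_benefits(option):
    return option["benefits"] - option["costs"]

best = max(options, key=lambda name: net_benefits(options[name]))
print(best)  # expand_bus: net benefits of 35 beat build_road's 30
```

The hard part, of course, is not the subtraction but deciding which benefits and costs belong on the list and how large they really are, which is exactly where the opportunities to cheat arise.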

The problem is that in disciplining our thinking and identifying the logical principles to be applied to make better decisions, BCA also teaches those determined to mislead others how to do that better, by showing them how to be wrong in the “right” direction to make their positions appear stronger than they are.

In fact, BCA may be more helpful to such motivated mistakes than to appropriate application. The term itself suggests that it is difficult, technically complicated, and uninteresting, so the prospect of work and boredom deters careful thinking by those not specialists in the field. And that is reinforced by the imagery that those doing such analyses are doing so as dispassionate scientists, so that their conclusions can be trusted. Very few people, as a result, pay enough attention to act as an effective constraint on abuses of the technique.

That is why I have extended my BCA discussions to include how to cheat on the correct principles; not so that my students can cheat better (in fact, I threaten them with signing a “superhero oath” on their final exams that promises they will only use what they know for good), but so they can detect others’ cheats better.

Overstating Benefits, Understating Costs

Our discussion starts with the correct principles on what should and should not be included (and why) and how their magnitudes might be appropriately estimated. But then we take a diversion from that logic to the question of the incentives of those doing analyses for public consumption. Those who are trying to “sell” a policy will be tempted to overstate benefits and understate costs, while their opponents will be tempted to overstate costs and understate benefits. Knowing in which directions someone will have an incentive to cheat, then, tells us which red flags to look for in evaluating their claims. Such red flags are particularly useful because those misusing BCA in their preferred direction either don’t know enough to justify trusting their analysis, or are intentionally misleading you, so you can’t trust their analysis.

How could someone promoting a policy overstate benefits? One common way is to count income that is merely transferred from one area to another, and so not really a benefit, as if it created new income. Promoters can also pile on by double counting, as when they pitch projects as creating both income and jobs as if both were separate benefits (or, similarly, counting increases in productivity or views as well as the higher property values that just capitalize those benefits), when in fact jobs are the cost one must bear to receive a reliable income, not an additional benefit.

Then there are multiplier effects supposedly triggered by government spending (more income creating more purchases, creating more income, etc.), while the symmetrical effects of raising the funds, which go in the opposite direction, are not counted. And there are others, including counting nonexistent benefits and cheats for specific scenarios, like how to “create” higher ridership forecasts and accelerate estimated completion dates for high-speed rail.
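The multiplier cheat is easy to see with a little arithmetic. In this hypothetical sketch, each dollar of income received leads to 80 cents of further spending, which becomes someone else's income, and so on; the same ripple applies, in reverse, to the dollars taxed away:

```python
# Hypothetical multiplier arithmetic: each dollar of income generates a
# fraction c of further spending, round after round (a geometric series).
# The identical ripple runs in reverse for the taxes that finance it.

def multiplier_total(initial, c, rounds=1000):
    # initial + initial*c + initial*c**2 + ...
    return sum(initial * c**t for t in range(rounds))

spending_ripple = multiplier_total(100.0, 0.8)  # ~ $500 from $100 spent
tax_ripple = multiplier_total(100.0, 0.8)       # ~ $500 lost to $100 taxed
net_effect = spending_ripple - tax_ripple       # ~ $0
```

A pitch that reports only the spending ripple makes $100 of spending look like $500 of benefits; counting the symmetrical contraction from raising that $100 leaves roughly nothing.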

How could such a person understate costs? One popular way is simply to ignore some costs, such as treating resources already owned by the government as free because no more money must be paid for them, ignoring that those same resources could have generated value if used otherwise or sold to the private sector. Similarly, currently unemployed resources can be treated as free, even though they are not, since they could be productively employed elsewhere. In one variant, mass transit system cost estimates simply ignore the cost of the policing that will be required, by assuming the regular police department will provide the added services (implicitly at no added cost). Regulatory policies might also ignore the costs imposed on private owners, as with the Endangered Species Act and rent control, since such losses to owners need not be compensated by the government.

Another popular trick is to understate the relevant interest rate for financing a project, which decreases the cost estimate for the project (as well as increasing the discounted value of the benefits). Further, almost ubiquitously ignored is the wealth (mutual benefits) destroyed when the taxes raised to finance a project wipe out voluntary trades (which economists call the welfare cost or excess burden), over and above the tax revenues that go to government; those losses are often very large. And there are more.
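How much does lowballing the interest rate matter? A quick sketch, with hypothetical numbers: a project yielding $100 of benefits a year for 30 years, discounted at an understated 2 percent versus a more realistic 7 percent.

```python
# Present value of a stream of equal annual benefits, discounted at
# rate `rate` over `years` years. All figures hypothetical.

def present_value(annual_benefit, rate, years):
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

pv_understated = present_value(100.0, 0.02, 30)  # ~ $2,240
pv_realistic = present_value(100.0, 0.07, 30)    # ~ $1,241
```

Understating the rate here inflates the discounted benefits by roughly 80 percent, before a single benefit has even been exaggerated.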

This approach provides some valuable tools for self-defense against policy malfeasance. But it is not complete, because one of the few areas that government seems to display real creativity in is generating new ways to cheat honest evaluations of what they want to do.

The Medicaid Expansion Bait and Switch

A good but almost unknown multi-billion-dollar illustration of misleading BCA has been the Medicaid expansion that the Obama administration incorporated into the Affordable Care Act (ACA). It created a new group of eligible recipients—people not qualifying under other criteria but earning less than 138 percent of the federal poverty level.

The extension was designed to increase the number of people officially counted as being insured (Obama’s signature achievement), but it had to overcome resistance to the expansion from many states. So the federal government offered to pay 100 percent of the costs for the newly eligible recipients from 2014–16, tapering down to 90 percent in 2020 and beyond, rather than the 50–75 percent that it pays for those previously eligible. This free money brought (better, bought) many states that objected to the program into it, and it increased enrollment. But it also gave states virtually no incentive to monitor enrollees to make sure they were eligible (what state wants to spend money on enforcement in order to cut the benefits received by its citizens?). In fact, it also gave the states incentives to miscategorize those who were already eligible under other criteria in order to increase their federal match and save themselves money.

That design raises one particularly big issue: given the obvious incentive the ACA’s Medicaid expansion gave states to approve as many people as possible, whether they in fact met the criteria or not, good policy design required monitoring for such misbehavior. That was particularly important since new enrollment in Medicaid was far higher than anticipated (in California, almost four times higher).

But what the Obama administration did was quite different. The federal government had been auditing states’ Medicaid programs on a three-year cycle (one-third of the states each year) to investigate such issues. Such audits would reveal questionable implementation issues, such as enrolling those who did not meet the actual program requirements, erroneously recategorizing recipients to increase federal matching funds, or failing to record enough information to determine whether an enrollee was eligible. But for fiscal years 2014–17 it canceled its program audits. The administration stopped generating the information that would reveal problems. They had come upon yet another way of hiding the real costs of a program.

Fortunately, we now have some idea (though far too late) of how large the cheats have been, thanks to the resumption of program audits in 2019 under the Trump administration and the research of Brian Blase and Aaron Yelowitz. After incorporating the changes in only one-third of the states in 2019, the Centers for Medicare and Medicaid Services (CMS) already estimated a “national improper payment rate” of over $57 billion, almost 15 percent of federal Medicaid spending. That was up from over $36 billion (or just under 10 percent) in their 2018 estimate, which did not adjust for any of the changes since the Medicaid extension. Updating just one-third of the states raised the improper payment rate by 5 percentage points, leading Blase and Yelowitz to conclude that the real improper payment rate was over 20 percent, roughly $75 billion. They also provide extensive information to supplement their evidence and estimates.
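Those figures are easy to check against one another. A back-of-the-envelope sketch using only the numbers above (total federal Medicaid spending is implied by the CMS estimate, not stated directly):

```python
# CMS 2019 estimate: $57 billion in improper payments, almost 15 percent
# of federal Medicaid spending, which implies total spending of roughly:
cms_improper = 57e9
cms_rate = 0.15
implied_spending = cms_improper / cms_rate  # ~ $380 billion

# Blase and Yelowitz's conclusion: a real rate over 20 percent once all
# states are updated, which against that spending base gives:
real_rate = 0.20
real_improper = real_rate * implied_spending  # ~ $76 billion,
# consistent with the "roughly $75 billion" figure in the text
```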

If the CMS's and Blase and Yelowitz's results are anywhere close to correct, the effect of the ACA’s Medicaid expansion was to increase improper payments from about 10 percent to 20 percent, or from $36 billion to $75 billion annually. That is, it diverted about $40 billion in federal taxpayer money each year away from those to whom the program was supposedly limited, while effectively hiding that diversion from almost any public recognition. It may, in fact, be the largest single cost-lowballing cheat, in dollars, that I can think of.

I know a lot about benefit-cost analysis. But the ACA Medicaid expansion has reminded me that even knowing all of the many ways government has adulterated its policy conclusions in the past, departing from what logical principles demand, would not be enough to ensure against being conned by massive “new and improved” misrepresentations. And that is a lesson every American would benefit from knowing.