As someone who leans utilitarian, I often find myself saying the following: “I support Program P in principle but not in practice.” Paternalistic policies, i.e., policies that interfere with people’s choices for their own good, are a case in point. I suspect that I’m more comfortable, at least in principle, with some forms of paternalism than most readers are. But I’m against paternalism in practice. In any event, the case of paternalism holds valuable lessons for how to theorize about institutions in general.

One view, sometimes called hard paternalism, holds that the state may interfere with people’s decisions even when those decisions are known to be voluntary, clear-headed, informed, etc. For instance, a hard paternalist might think that the state should force motorcyclists to wear helmets even when they make an informed decision not to. Soft paternalism holds that interference is justified only when the person’s decision to perform the self-harming action is not informed, voluntary, etc. So a soft paternalist might force a motorcyclist to wear a helmet only when the motorcyclist isn’t aware of the risks of riding without one.

I’m against hard paternalism in principle. I don’t see why it’s an error for motorcyclists to prefer the feeling of the wind in their hair to the safety provided by a motorcycle helmet. (After all, no one proposes making it illegal to drive to the movie theater rather than watch a movie at home even though getting on the road dramatically increases your risk of harming yourself for the sake of a fairly trivial pleasure.)

But I confess that I’m okay with soft paternalism in principle—and I’ll bet you are too. Consider what’s likely the most famous example of soft paternalism, John Stuart Mill’s bridge case:

If either a public officer or any one else saw a person attempting to cross a bridge which had been ascertained to be unsafe, and there were no time to warn him of his danger, they might seize him and turn him back, without any real infringement of his liberty; for liberty consists in doing what one desires, and he does not desire to fall into the river. (On Liberty, Chapter 5)

As Mill notes, this coercion actually helps the coerced do what he desires—namely, avoid falling into the river. I say that the public officer’s coercion is justified. Ask yourself: if you were the bystander in this case, would you forcibly prevent the pedestrian from crossing the bridge? And if you were the pedestrian, would you want to be forcibly prevented from crossing the bridge? I’m guessing that the answer to both of these questions is “yes.” If I’m right, then you’re a soft paternalist in principle.

But what about real-world institutional practice? Mill opposed a paternalist state partly on the grounds that (1) you probably know your interests better than the paternalistic regulator does and (2) you probably have a stronger incentive to take care of yourself than the paternalistic regulator does. There are surely exceptions to these general rules (as in the bridge case), but the state has to govern on the basis of rules rather than exceptions (e.g., we don’t let exceptionally competent 14-year-olds acquire learner’s permits).

Recently, though, the idea that people are pretty good at knowing their interests has been challenged by experimental psychology and behavioral economics. This article by Georgetown Law Professor David Cole does a nice job of explaining the case for the new paternalism advocated most famously by Cass Sunstein:

Mill’s case against paternalism is undermined, Sunstein says, by man’s propensity to err and sabotage his own interests. If we know that people make predictable mistakes, then paternalistic interventions designed to mitigate those mistakes may increase people’s welfare overall. Even when our actions harm no one else, government intervention may therefore be justified.

We have lots of information about human psychology and behavior that Mill didn’t have, information that should make us less confident in people’s ability to competently pursue their own interests.

So should those of us who support soft paternalism in principle be persuaded to support a paternalist state in practice? Not yet. Over 40 years ago, economist Harold Demsetz wrote that “the view that now pervades much public policy economics implicitly presents the relevant choice as between an ideal norm and an existing ‘imperfect’ institutional arrangement. This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements” (“Information and Efficiency: Another Viewpoint,” page 1). To make the case for paternalism, it’s not enough to show that people are bad at pursuing their own interests; it must be shown that paternalistic regulators are better.

In his article, Cole asks the right question: “If cognitive biases cause private individuals to err in making decisions, won’t public officials be prone to similar errors?” Indeed, there is good reason to think that public officials (not to mention voters) will be far worse at avoiding errors: they have less incentive to avoid them. If I make an erroneous decision about my health, I suffer the costs. If a public official makes an erroneous decision about my health, I’m still the one who suffers the costs, not the official.

And it gets even worse once we add some general public choice worries into the mix. Suppose, for the sake of argument, that a ban on raw milk produced by small farms delivers concentrated benefits to a well-organized and well-represented dairy lobby and dispersed costs to the rest of us. Even if the evidence indicates that raw milk is safe (and I have no idea whether this is the case), why think that the evidence rather than political expediency will sway the public official?

Here’s the more general point. When it comes to institutional design, there is never a perfect option. Private citizens make bad decisions and public officials make bad decisions. Markets fail and governments fail. So we should take Demsetz’s comparative institution approach: choose the least imperfect of our imperfect options.