I.

Like most right-thinking people, I’d always found Immanuel Kant kind of silly. He was the standard-bearer for naive deontology, the “rules are rules, so follow them even if they ruin everything” of moral philosophy.

But lately I’ve been coming around to a different view. There may have been some subtleties I was missing, almost as if one of the most universally revered thinkers of the western philosophical tradition wasn’t a total moron.

I was delighted to see nydwracu say something similar in the comments to my recent post:

I [now] realize that Kant is not actually completely ridiculous like I once thought he was

I don’t know if it’s just that nydwracu and I have been thinking about some of the same problems lately, but he took the words right out of my mouth.

I’m not a Kant scholar. I’m not qualified to explain what Kant thought, and it’s possible the arguments I express as Kantian here are really the arguments of a totally different person who merely reminds me of Kant in some ways. James Donald’s objections to steelmanning are well taken, so I will not call this a steel man of a guy who is too dead to correct me if I am wrong. At best I will call this post Kant-aligned.

First, I want to take another look at one of Kant’s most-reviled arguments: that you should truthfully tell a murderer who wants to kill your friend where she is hiding.

Second, I want to talk about how I find myself using Kantian principles in my own morality.

And third, I want to talk about big unanswered questions and the reason this still isn’t technical enough for me to be comfortable with.

II.

Kant gives the following dilemma. Suppose that an axe murderer comes to your door and demands you tell him where your friend is, so that he can kill her. Your friend in fact is in your basement. You lie and tell the murderer your friend is in the next town over. He heads off to the next town, and while he’s gone you call the police and bring your friend to safety.

Most people would say that the lie is justified. Kant says it isn’t, because lying.

I think most people understand his argument as follows: you think “I should lie”. But suppose everyone thought that all the time. Then everyone would lie to everyone else, and that would be horrible.

But Kant’s categorical imperative doesn’t urge us to reject actions which, if universalized, would be horrible. That’s rule utilitarianism, sort of. Kant urges us to reject actions which, if universalized, would be self-defeating or contradictory.

Suppose it was everyone’s policy to lie to axe murderers who asked them where their friends were. Well, then axe murderers wouldn’t even bother asking.

Which doesn’t sound like a sufficiently terrible dystopia to move us very much. So let me reframe Kant’s example.

Suppose you are a prisoner of war. Your captors tell you they want to kill your general, a brilliant leader who has led your side to victory after victory. They have two options. First, a surgical strike against her secret headquarters, killing her and no one else. Second, nuking your capital city. They would prefer to do the first, because they’re not monsters. But if they have to nuke your capital, they’ll nuke your capital. So they show you a map of your capital city and say “Please point out your general’s headquarters and we’ll surgical-strike it. But if you don’t, we’ll nuke the whole city.”

You decide to lie. You point to a warehouse you know to be abandoned. Your captors send a cruise missile that blows up the warehouse, killing nobody. Then they hold a huge party to celebrate the death of the general. Meanwhile, the real general realizes she’s in danger and flees to an underground shelter. With her brilliant tactics, your side wins the war and you are eventually rescued.

So what about now? Was your lie ethical?

Kant would point out that if it was known to be everyone’s policy to lie about generals’ locations, your captors wouldn’t even ask. They’d just nuke the city, killing everyone.

Your captors are offering you a positive-sum bargain: “Normally, we would nuke your capital. But you don’t want that and we don’t want that. So let’s make a deal where you tell us where your general is and we only kill that one person. That leaves both of us better off.”

If it were known to everyone that prisoners of war always lie in this situation, it would be impossible to offer the positive-sum bargain, and your enemies would resort to nuking the whole city, which is worse for both of you.

So when Kant says not to act on maxims that would be self-defeating if universalized, what he means is “Don’t do things that undermine the possibility to offer positive-sum bargains.”
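To make the structure concrete, here is a minimal sketch in Python (all payoff numbers invented for illustration) of why a universal lying policy is self-defeating: once the captor conditions on the prisoners’ known policy, the bargain never gets offered and everyone ends up with the nuke.

```python
# Toy payoff model of the POW bargain. The numbers are made up; the structure
# is what matters. The captor moves first: either OFFER the bargain (ask for
# the general's location) or just NUKE the city. The captor knows the
# prisoners' universal policy when deciding.

PAYOFFS = {
    # (captor_action, prisoner_answer): (prisoner_utility, captor_utility)
    ("offer", "truth"): (-10, 10),   # general dies, city survives
    ("offer", "lie"):   (0, -10),    # general lives, city survives, captor duped
    ("nuke", None):     (-100, 1),   # city destroyed, general dead or not
}

def captor_best_response(prisoner_policy):
    """Offer the bargain only if, given the prisoners' known policy,
    offering beats nuking from the captor's point of view."""
    offering = PAYOFFS[("offer", prisoner_policy)][1]
    nuking = PAYOFFS[("nuke", None)][1]
    return "offer" if offering > nuking else "nuke"

for policy in ("truth", "lie"):
    action = captor_best_response(policy)
    answer = policy if action == "offer" else None
    print(policy, "->", action, PAYOFFS[(action, answer)])

# truth -> offer (-10, 10)    the bargain happens; only the general dies
# lie   -> nuke  (-100, 1)    no bargain left to exploit; everyone loses
```

Note the one-shot temptation is still there: if the captor does offer, lying gets you 0 instead of -10. The whole Kantian point is that the policy, not the single act, is what the captor responds to.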

This is very reminiscent of Parfit’s Hitchhiker. Remember that one? You are lost in the desert, about to die. A very selfish man drives by in his dune buggy, sees you, and offers to take you back to civilization for $100. You don’t have any money on you, but you promise to pay him $100 once you’re back to civilization and its many ATMs. The very selfish man agrees and drives you to safety. Once you’re safe, you say “See you later, sucker!” and run off.

The selfish man’s “I’ll bring you back to civilization for $100” offer is a positive-sum bargain. You would rather lose $100 than die. He would rather gain $100 and lose a few hours bringing you to the city than continue on his way. So you both gain.

But if everyone were omniscient and knew that people who promise $100 will never really pay, or if your decision not to pay could somehow affect his willingness to make you the offer in the first place, the ability to make the positive-sum bargain disappears.
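The same structure as a toy model, assuming a driver who perfectly predicts your policy, with made-up utilities:

```python
# Toy model of Parfit's Hitchhiker, assuming the driver perfectly predicts
# your policy. Utilities are invented: dying is -1000, paying the fare is
# -100, being rescued for free is 0.

def driver_offers(predicted_policy):
    # The selfish driver only bothers if he expects to collect his $100.
    return predicted_policy == "pay"

def outcome(policy):
    if not driver_offers(policy):
        return -1000                       # left in the desert
    return -100 if policy == "pay" else 0  # rescued, maybe out $100

for policy in ("pay", "renege"):
    print(policy, outcome(policy))

# pay    -100
# renege -1000
# In the moment, reneging looks better (0 beats -100). Evaluated as a policy
# the predictor can see, paying wins, because only payers get offered rides.
```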

On this model, Kant isn’t being a weird super-anal stickler for meaningless rules at all. He’s being the most practical person around: don’t do things that spoil people’s ability to make a profit.

(and sort of pre-inventing decision theory)

(man, it’s a good thing everyone is omniscient and the future can cause the past, or else we’d never be able to ground morality at all)

III.

A while back I suggested it is wrong to fire someone for being anti-gay, because if every boss said “I will fire my employees whom I disagree with politically”, or every mob of angry people said “We will boycott companies until they fire the people we disagree with politically” then no one who’s not independently wealthy could express any political opinions or dare challenge the status quo, and the world would be a much sadder place.

This is not strictly Kantian. “The world would be a much sadder place” is not self-defeating or a contradiction.

But it could still be framed as a positive-sum bargain. In a world where all the leftists refused to hire rightists, and all the rightists refused to hire leftists, everything would be about the same except that everyone’s job opportunities would be cut in half. If the people in such a world were halfway rational, they would make a deal that rightists agree to hire leftists if leftists agree to hire rightists. This would clearly be positive-sum.
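The arithmetic behind “job opportunities cut in half” is trivial, but here it is as a sanity check, assuming an even 50/50 employer split and that nothing else about hiring changes:

```python
# Back-of-the-envelope check of the "half as many jobs" claim, assuming
# employers split 50/50 by politics and politics is independent of
# everything else about the job.

TOTAL_EMPLOYERS = 1000
LEFT_SHARE = 0.5

def open_jobs(worker_politics, mutual_refusal):
    if not mutual_refusal:
        return TOTAL_EMPLOYERS
    share = LEFT_SHARE if worker_politics == "left" else 1 - LEFT_SHARE
    return int(TOTAL_EMPLOYERS * share)

for worker in ("left", "right"):
    print(worker, open_jobs(worker, mutual_refusal=True))   # 500 and 500
```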

This is easy to say in natural language like this. But when you try to make it more formal it gets really sketchy real quick.

Let’s say Paula the Policewoman is arresting Robby the Robber (she caught him by noticing his name was Robby in a world where everyone’s name sounds like their most salient characteristic). No doubt she thinks she is following the maxim “Police officers should arrest robbers”. But what about other maxims that lead to the same action?

1. Police officers should arrest people

2. Everyone should arrest robbers

3. Paula should arrest Robby

4. Paula should arrest other people

5. Everyone should arrest Robby

6. Everyone should arrest EVERYONE ELSE IN THE WORLD

This sounds kind of silly in this context, but in more complicated situations the entire point hinges upon it.
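The formal version of the worry: a single concrete action is consistent with arbitrarily many maxims, so the action alone can’t tell you which one to feed into the universalization test. A sketch:

```python
# One concrete action, many maxims that license it. Each maxim is a
# predicate over the action; the action itself can't tell you which
# maxim is "the" one to universalize.

action = {"actor": "Paula", "actor_job": "police officer",
          "target": "Robby", "target_is_robber": True}

maxims = {
    "police officers should arrest robbers":
        lambda a: a["actor_job"] == "police officer" and a["target_is_robber"],
    "police officers should arrest people":
        lambda a: a["actor_job"] == "police officer",
    "everyone should arrest robbers":
        lambda a: a["target_is_robber"],
    "Paula should arrest Robby":
        lambda a: a["actor"] == "Paula" and a["target"] == "Robby",
    "everyone should arrest everyone":
        lambda a: True,
}

print([m for m, licenses in maxims.items() if licenses(action)])
# all five match: the universalization test is underdetermined
```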

Levi the Leftist, who owns a restaurant called Levi’s Lentils, finds out that his head waiter, Riley the Rightist, is a homophobe (in Levi’s defense, he thought he was safe to hire him because his name wasn’t Homer). He fires Riley, who ends up on the street.

Candice the Kantian condemns him, saying “What if that were to become a general rule? Then nothing would change except everyone only has half as many job opportunities.”

Levi says “Oh, I see your problem. You think my maxim is ‘fire people with different politics than me’. But that’s not my maxim at all. My maxim is ‘fire people who are homophobic’. If that becomes universalized, it will be a great victory for gay people everywhere, but no one whose politics I agree with will suffer at all.”

In fact, Levi might claim his maxim is any one of the following:

1. Everyone should fire people they disagree with politically

2. Everyone should fire people who are politically on the right

3. Everyone should fire people who discriminate against minority groups

4. Everyone should fire people who are homophobic

5. Everyone should fire people who are mean and hateful

6. Everyone should fire people who hold positions that are totally beyond the pale and can’t possibly be supported rationally

(before I get yelled at in the comment section, I’m not necessarily claiming all these maxims accurately describe Riley, just that Levi might think they do)

(5) runs into this problem where you can never say “fire people who are mean and hateful” without it in fact meaning “fire people whom you think are mean and hateful”. Presumably all the rightist bosses will find good reasons to think their leftist employees are mean and hateful.

There seems to be some sense in which we also want to protest (2): if Levi is allowed to use (2), then it instantly morphs into rightist bosses being allowed to say “everyone should fire people who are politically on the left”. But just saying “universalizability!” doesn’t automatically let us do that.

(3) seems even sneakier. It is in fact the maxim promoted by the people who are actually doing the firing, since they seem to have some inkling that universalizability and “fairness” are important. And it sounds totally value-neutral and universalizable. And yet I feel like if we allow Levi to say this, then some rightist will say actually his maxim is “everyone should fire people who want to undermine traditional cultural institutions”, and the end result will be the same old “job opportunities halved for everyone”.

IV.

This is a hard problem. The best solution I can think of right now is to go up a meta-level, to say “universalize as if the process you use to universalize would itself become universal”.

Suppose I am very greedy, and I lie and steal and cheat to get money. I say “Well, my principle is to always do whatever gets Scott the most money”. This sooooooorta checks out. If it were universalized, and everyone acted on the principle “Always do whatever gets Scott the most money”, well, I wouldn’t mind that at all.

But if we say “universalize as if the process you use to universalize would itself become universal”, then we assume that if I try to universalize to “do what gets Scott the most money”, then Paula will universalize to “do what gets Paula the most money” and Levi will universalize to “do what gets Levi the most money” and we’ll all be lying and cheating and stealing from one another and no one will be very happy at all.
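If you wanted to mechanize the distinction, it would look something like this sketch: ordinary universalization spreads one fixed maxim to everyone, while meta-universalization spreads the maxim-generating process:

```python
# Sketch of the meta-rule. A "process" maps an agent to a maxim. Plain
# universalization spreads one agent's maxim to everyone; meta-
# universalization lets every agent run the same process for themselves.

AGENTS = ["Scott", "Paula", "Levi"]

def selfish_process(agent):
    return f"do whatever gets {agent} the most money"

def universalize(maxim):
    return {a: maxim for a in AGENTS}        # everyone follows one maxim

def meta_universalize(process):
    return {a: process(a) for a in AGENTS}   # everyone runs the process

print(universalize(selfish_process("Scott")))
# everyone serves Scott: the naive test happily passes

print(meta_universalize(selfish_process))
# everyone serves themselves: lying, cheating, and stealing all around
```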

(Kant notes that this also satisfies his original, stricter “self-defeating contradiction” criterion. If we all try to steal from each other, then private property becomes impossible, the economy collapses, and the stuff we want isn’t there to steal. I don’t know if I like this; it seems a little forced. But even if contradictoriness is forced, badness seems incontrovertible.)

As for Levi, he knows that if he universalizes to “everyone should fire people who discriminate against minority groups”, his process is “pick out a political value that’s important to me and excludes a lot of potential employees, then say everyone should fire people who disagree with it”. This is sufficient to assume rightists will do the same and we’ll be back at half-as-many-jobs.

Next problem. Suppose I am a very rich and very selfish tycoon. I say “No one should worry about helping the needy”. I am perfectly happy with this being universalized, because it saves me from having to waste my time helping the needy. Although other people also won’t help the needy, I’m a super-rich tycoon and that’s no skin off my back.

We can climb part of the way out of this pit with meta-universalizability. We say “If I say things like this, everyone will only act on maxims that benefit them personally and appeal to their own idiosyncratic characteristics, rather than the ones that most benefit everyone.”

But I worry that this isn’t enough. Suppose I’m not just a tycoon, I’m a super-rich and powerful tyrannical king. I come up with maxims like “Everyone do what the tyrant says or be killed!” Candice the Kantian warns “If you do that, everyone will come up with maxims that benefit them personally, and the moral law will be weakened.”

And so I kill Candice for disagreeing with me.

If you are so much stronger than other people that you are immune to their counter-threats, you can get away with doing pretty much anything under the not-at-all-Kantian perversion we’ve wandered into.

We might have gotten so far from Kant at this point that we’ve stumbled into Rawls. Put up a veil of ignorance and the problem vanishes.

V.

What about utilitarianism?

I would love to universalize the maxim “Do whatever most increases Scott’s utility”.

Given the meta-universalizability concerns above, I might end up instead wanting to universalize “Do whatever most increases global utility”.

This seems certain, maybe even provable, if you throw in the veil of ignorance accessory.

Utilitarianism has a lot of the same problems universalizability does. A very stupid utilitarian would automatically condemn Levi for firing Riley since now Riley is unemployed and this lowers his utility. More sophisticated utilitarians would have to take into account the various society-wide effects of Levi setting a precedent here. I think that’s what Mill’s rule utilitarianism tries to do and what precedent utilitarianism tries to do as well. The problem is that it’s really hard to figure out what rules and precedents have how much weight. Universalizability kind of plows through some of those objections like a giant steamroller. It probably prevents a couple of little incidents where you could steal something or kill someone to gain a little extra utility, but it more than makes up for it in vastly increasing social trust and ability for positive-sum deals.

I’m not sure whether consequentialism is prior to universalizability (“universalize maxims because if you don’t you’ll end up losing out on possible positive-sum games and cutting your job offers in half”), whether universalizability is prior to consequentialism (“be a consequentialist, because that is a maxim everyone could agree on”), or whether they’re like a weird ouroboros constantly eating itself.

I think maybe the idea I like best is that consequentialism is prior to universalizability is prior to any particular version of utilitarianism.

Because if universalizability is prior to utilitarianism, that would be an interesting way to explore some of the problems with utilitarianism. For example, should we count pleasure or preferences? I don’t know. Let’s see what everyone would agree on.

Does everyone have to donate all of their money to the most efficient charity all the time? Well, if you were behind the veil of ignorance helping frame the moral law, would you put that in?

Does everyone have to prefer torture to dust specks? You’re behind the veil of ignorance, you don’t know if you’ll be a dust speck person or a torture person, what do you think?
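For what it’s worth, the veil turns that question into an expected-utility calculation, though every number in this sketch is an assumption, and the real dispute is over whether utilities like these can be assigned at all:

```python
# Toy veil-of-ignorance arithmetic for torture vs. dust specks. Every number
# here is an assumption; the debate is precisely about whether any such
# assignment is legitimate.

N = 3 ** 40                      # stand-in for 3^^^3, which is vastly larger
SPECK = -1e-6                    # disutility of one dust speck
TORTURE = -1e9                   # disutility of fifty years of torture

# Behind the veil you are equally likely to be any affected person.
ev_specks = SPECK * N / (N + 1)  # almost certainly you get a speck
ev_torture = TORTURE / (N + 1)   # a vanishing chance you're the one tortured

print(ev_specks, ev_torture)     # ~-1e-06 vs ~-8e-11: torture "wins"
# ...but only because of the unbounded linear utilities assumed above.
```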

I think this is a good point to remember the blog tagline and admit I am still confused, but on a higher level and about more important things.