[Attempt to derive morality from first principles, totally ignoring that this should be impossible. Based on economics and game theory, both of which I have only a minimal understanding of. And mixes complicated chains of argument with poetry without warning. So, basically, it’s philosophy. And it’s philosophy I get the feeling David Gauthier may have already done much better, but I haven’t read him yet and wanted to get this down first to avoid bias towards consensus]

Related to: Whose Utilitarianism?, You Kant Dismiss Universalizability, Meditations on Moloch

Imagine the Economists’ Paradise.

In the Economists’ Paradise, all transactions are voluntary and honest. All game-theoretic problems are solved. All Pareto improvements get made. All Kaldor-Hicks improvements get converted into Pareto improvements by distributing appropriate compensation, and then get made. In all cases where people could gain by cooperating, they cooperate. In all tragedies of the commons, everyone agrees to share the commons according to some reasonable plan. Nobody uses force, everyone keeps their agreements. Multipolar traps turn to gardens, Moloch is defeated for all time.

The Economists’ Paradise is stronger than the Libertarians’ Paradise, which is just a place where no one initiates force and all economic transactions are legal, because the Libertarians’ Paradise might still have a bunch of Prisoner’s Dilemmas and the Economists’ Paradise wouldn’t. But it is weaker than the Utilitarians’ Paradise, because people with more power and money still get more of the eventual utility.

From a god’s-eye view, it seems relatively easy to create the Economists’ Paradise. It might be hard to figure out how to solve game theoretic problems in absolutely ideal ways, but it’s often very easy to figure out how to solve them in a much better way than the uncoordinated participants are doing right now (see the beginning of Part III of Meditations on Moloch). At the extreme of this way of thinking, we have Formalism, where just solving the problem, even in a very silly way, is still better than having the question remain open.

(a coin flip is the epitome of unintelligent problem solving, but flipping a coin to decide whether the Senkaku/Diaoyu Islands go to Japan or China still beats having World War III, by a large margin)

The Economists’ Paradise is a pretty big step of the way toward actual paradise. Certainly there won’t be any wars or crime. But can we get more ambitious?

Will the Economists’ Paradise solve world hunger? I say it will. The argument is essentially the one in Part 2.4 of the Non-Libertarian FAQ. Suppose solving world hunger costs $50 billion per year, which I think is people’s actual best-guess estimate. And suppose that half the one billion people in the First World are willing to make some minimal contribution to solving world hunger. If each of those people can contribute $2 per week, that suffices to raise the necessary amount. On the other hand, the $50 billion cost is the cost in our world. In the Economists’ Paradise, where there are no corrupt warlords or bribe-seeking bureaucrats, and where we can just trust people to line themselves up in order of neediest to least needy, the whole task gets that much easier. In fact, it’s not obvious that the First World wouldn’t come up with their $50 billion only to have the Third World say “Thanks, but we kind of sorted out our problems and became an economic powerhouse.”
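The arithmetic above is easy to check. A quick sketch, keeping in mind that both the $50 billion figure and the 500 million contributors are the post’s assumptions, not established facts:

```python
# Back-of-the-envelope check of the world-hunger arithmetic above.
# Both inputs are the post's assumptions, not established figures.
contributors = 500_000_000       # half of one billion First Worlders
weekly_contribution = 2          # dollars per person per week
annual_total = contributors * weekly_contribution * 52

cost_of_solving_hunger = 50_000_000_000  # assumed $50 billion per year

# $52 billion raised versus $50 billion needed
assert annual_total >= cost_of_solving_hunger
```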

Let’s get more ambitious. Will there be bullying in the Economists’ Paradise? I just mean your basic bullying, walking over to someone who’s ugly and saying “You’re ugly, you ugly ugly person!” I say there won’t be. How would a perfect solution to all coordination problems end bullying? Simple! If the majority of the population disagrees with bullying, they can sign an agreement among themselves not to bully, and to ostracize anyone who does. Everyone will of course keep their agreement (by the definition of Economists’ Paradise) and anyone who reports to the collective that Bob is a bully will always be telling the truth (by the definition of Economists’ Paradise). The collective will therefore ostracize Bob, and faced with the prospect of never being able to interact with the majority of human beings ever again, Bob will apologize and sign an agreement never to bully again (which he will keep, by the definition of Economists’ Paradise). Since everyone knows this will happen, no one bullies in the first place.

So the Economists’ Paradise is actually a very big step of the way toward actual paradise, to the point where the differences start to look like splitting hairs.

The difference between us and the Economists’ Paradise isn’t increased wealth or fancy technology or immortality. It’s rule-following. If God were to tell everybody the rules they needed to follow to create the Economists’ Paradise, and everyone were to follow them, that would suffice to create it.

That suggests two problems with setting up Economists’ Paradise. We need to know what the rules are, and we need to convince people to follow them.

These are more closely linked than one would think. For example, both Japan and China might prefer that the Senkaku Islands be clearly given to the other according to a fair set of rules which might benefit themselves the next time, than that they fight World War III over the issue. So if the rules existed, people might follow them for the very reason that they exist. This is why, despite the Senkaku Island conflict, most islands are not the object of international tension – because there are clear rules about who should have them and everybody prefers following the rules to the sorts of conflicts that would happen if the rules didn’t exist.

II.

There’s a hilarious tactic one can use to defend consequentialism. Someone says “Consequentialism must be wrong, because if we acted in a consequentialist manner, it would cause Horrible Thing X.” Maybe X is half the population enslaving the other half, or everyone wireheading, or people being murdered for their organs. You answer “Is Horrible Thing X good?” They say “Of course not!” You answer “Then good consequentialists wouldn’t act in such a way as to cause it, would they?”

In the same spirit: should the State legislate morality?

“Of course not! I don’t want the State telling me whom I can and can’t sleep with.”

So do you believe that it’s immoral, genuinely immoral, to sleep with the people whom you want to sleep with? Do you think sleeping with people is morally wrong?

“What? No! Of course not!”

Then the State legislating morality isn’t going to restrict whom you can sleep with, is it?

“But if the State legislated everything, I would have no freedom left!”

Is taking away all your freedom moral?

“No!”

Then the State’s not going to do that, is it?

By this sort of argument, it seems to me like there are no good philosophical objections to a perfect State legislating the correct morality. Indeed, this seems like an ideal situation; the good are rewarded, the wicked punished, and society behaves in a perfectly moral way (whatever that is).

The arguments against the State legislating morality are in my opinion entirely contingent ones, based around the fact that the State isn’t perfect and the correct morality isn’t known with certainty. Get rid of these caveats, and moral law and state law would be one and the same.

Letting the State enforce moral laws has some really big advantages. It means the rules are publicly known (you can look them up in a lawbook somewhere) and effectively enforced (by scary men with guns). This is great.

But using the State to enforce rules also fails in some very important ways.

First, it means someone has to decide in what cases the rules were broken. That means you either need to depend on fallible, easily biased human judgment – subject to all its racism, nepotism, tribalism, and whatever – or algorithmize the rules so that “be nice” gets formalized into a two thousand page definition of niceness so rigorous that even a racist nepotist tribalist judge doesn’t have any leeway to let your characteristics bias her assessment of whether you broke the niceness rules.

Second, transaction costs. Suppose in every interaction you had with another person, you needed to check a two thousand page algorithm to see if their actions corresponded to the Legal Definition of Niceness. Then if they didn’t, you needed to call the police to get them arrested, have them sit in jail for two weeks (or pay the appropriate bail) until they can get to trial. The trial itself is a drawn-out affair with celebrity lawyers on both sides. Finally, the judge pronounces verdict: you really should have said “please” when you asked her to pass the salt. Sentence: twelve milliseconds of jail time.

Third, it is written: “If you like laws and sausages, you should never watch either one being made.” The law-making apparatus of most states – stick four hundred heavily-bribed people who hate each other’s guts in a room and see what happens – fails to inspire full confidence that its results will perfectly conform to ideal game theoretic principles.

Fourth, most states are somewhere on a spectrum between “socially contracted regimes enforcing correct game theoretic principles among their citizens” and “violent psychopaths killing everybody and stealing their stuff”, and it has been historically kind of hard to get the first part right without also empowering the proponents of the second.

So it’s – surprise, surprise – a tradeoff.

There’s a bunch of rules which, followed universally, would lead to the Economists’ Paradise. If the importance of keeping these rules agreed-upon and well-enforced outweighs the dangers of algorithmization, transaction costs, poor implementation, and tyranny, we make them State Laws. In an ideal state with very low transaction costs, minimal risk of tyranny, and legislative excellence, the cost of the tradeoff goes down and we can reap gains by making more of them State Laws. In a terrible state with high transaction costs that has been completely hijacked by self-interest, the cost of the tradeoff goes up and fewer of them should be State Laws.

III.

Let’s return to the bullying example from Part I.

It would seem there ought not to be bullying in the Economists’ Paradise. For if most people dislike bullying, they can coordinate an alliance to not bully one another, and to punish any bullies they find.

On the contrary, suppose there are two well-delineated groups of people, Jocks and Nerds. Jocks are bullies and have no fear of being bullied themselves; they also don’t care about social exclusion by the Nerds against them. Nerds are victims of bullies and never bully others; their exclusion does not harm the Jocks. Now it seems that there might be bullying, for although all the Nerds would agree not to bully, and to exclude all bullies, and although all the Jocks might coordinate an alliance not to bully other Jocks, there is nothing preventing the Jocks from bullying the Nerds.

I answer that there are several practical considerations that would prevent such a situation from coming up. The most important is that if bullying is negative-sum – that is, if it hurts the victim more than it helps the bully – then it’s an area ripe for Kaldor-Hicks improvement. Suppose there is anything at all the Nerds have that the Jocks want. For example, suppose that the Nerds are good at fixing people’s broken computers, and that a Jock gains more utility from knowing he can get his computer fixed whenever he needs it than from knowing he can bully Nerds if he wants. Now there is the opportunity for a deal in which the Nerds agree to fix the Jocks’ computers in exchange for not being bullied. This is Pareto-optimal: the Nerds’ lives are better because they avoid bullying, and the Jocks’ lives are better because they get their computers fixed.
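A toy payoff table makes the Kaldor-Hicks logic concrete. All the utility numbers here are invented for illustration; the only structural assumptions are that bullying is negative-sum and computer-fixing is positive-sum:

```python
# Illustrative utilities (all numbers made up). Status quo: the Jock
# bullies and the Nerd suffers. Deal: the Nerd fixes the Jock's
# computer in exchange for not being bullied.
status_quo = {"jock": 2, "nerd": -5}
deal = {"jock": 3, "nerd": -1}

# Bullying destroys net utility, so a Kaldor-Hicks improvement exists...
assert status_quo["jock"] + status_quo["nerd"] < 0
# ...and this particular deal realizes it as a Pareto improvement:
# both parties are strictly better off than under the status quo.
assert deal["jock"] > status_quo["jock"]
assert deal["nerd"] > status_quo["nerd"]
```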

Objection: numerous problems prevent this from working in real life. Nerds and Jocks aren’t coherent blocs, and bullies are bad negotiators. More fundamentally, this is essentially paying tribute, and on the “millions for defense, not one cent for tribute” principle, you should never pay tribute, or else you encourage people who wouldn’t otherwise have threatened you to threaten you just for the tribute. But the assumption that the Economists’ Paradise solves all game theoretic problems solves these as well. We’re assuming everyone who should coordinate can coordinate, everyone who should negotiate does negotiate, and everyone who should make precommitments does make precommitments.

A more fundamental objection: what if Nerds can’t fix computers, or Jocks don’t have them? In this case, the tribute analogy saves us: Nerds can just pay Jocks a certain amount of money not to be bullied. Any advantage or power whatsoever that Nerds have can be converted to money and used to prevent bullying. This sounds morally repugnant to us, but in a world where blackmail and incentivizing bad behavior are assumed away by fiat, it’s just another kind of Pareto-optimal improvement, certainly better than the case where Nerds waste their money on things they want less than not being bullied yet are bullied anyway. And because of our Economists’ Paradise assumption, Jocks charge a fair tribute rate – exactly the amount of money it really costs to compensate them for the utility they would get by beating up Nerds – and feel no temptation to extort more.

Now, I’m not sure bullying would even come up as an option in an Economists’ Paradise, because if it’s a zero- or negative-sum game trying to get status among your fellow Jocks, the Jocks might ban it on their own as a waste of time. But even if Jocks do get some small amount of positive utility out of it directly, we should expect bullying to stop in an Economists’ Paradise as long as Nerds control even a tiny amount of useful resources they can use to placate the Jocks. If Nerds control no resources whatsoever, or so few resources that they don’t have enough left to pay tribute after they’ve finished buying more important things, then we can’t be sure there won’t be bullying – this is where the Economists’ Paradise starts to differ from the Utilitarians’ Paradise – but we’ll return to this possibility later.

Now I want to highlight a phrase I just used in this argument.

“If bullying is negative-sum – that is, if it hurts the victim more than it helps the bully – then it’s an area ripe for Kaldor-Hicks improvement”

This looks a lot like (naive) utilitarianism!

What it’s saying is “If bullying decreases utility (by hurting the Nerd more than it helps the Jock) then bullying should not exist. If bullying increases utility (by helping the Jock more than it hurts the Nerd) then maybe bullying should exist.” Or, to simplify and generalize, “do actions that increase utility, but not other actions.”

Can we derive utilitarian results by assuming Economists’ Paradise? In many cases, yes. Suppose trolley problems are a frequent problem in your society. In particular, about once a day there is a runaway trolley heading down Track A toward ten people, but divertible to Track B, where there is one person (explaining why this happens so often and so consistently is left as an exercise for the reader). Suppose you’re getting up in the morning and preparing to walk to work. You know a trolley problem will probably happen today, but you don’t know which track you’ll be on.

Eleven people in this position might agree to the following pact: “Each of us has a 91% chance of surviving if the driver chooses to flip the switch, but only a 9% chance of surviving if the driver chooses not to. Therefore, we all agree to this solemn pact that encourages the driver to flip the switch. Whichever of us will be on Track B hereby waives his right to life in this circumstance, and will encourage the driver to switch as loudly as all of the rest of us.”
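The survival figures in the pact follow directly from the setup, assuming each of the eleven is equally likely to end up as the lone person on Track B:

```python
from fractions import Fraction

# Ex ante survival odds for each of the eleven candidates, assuming
# each is equally likely to be the one person on Track B.
n = 11
p_survive_if_switch = Fraction(n - 1, n)   # only the Track B person dies
p_survive_if_no_switch = Fraction(1, n)    # the ten on Track A die

# 10/11 rounds to the 91% and 1/11 to the 9% quoted in the pact
assert round(float(p_survive_if_switch), 2) == 0.91
assert round(float(p_survive_if_no_switch), 2) == 0.09
```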

If the driver were presented with this pact, it’s hard to imagine her not switching to Track B. But if the eleven Trolley Problem candidates were permitted to make such a pact before the dilemma started, it’s hard to imagine that they wouldn’t. Therefore, the Economists’ Paradise assumption of perfect coordination produces the correct utilitarian result to the trolley problem. The same methodology can be extended to utilitarianism in a lot of other contexts.

Now we can go back to that problem from before: what if Nerds have literally nothing Jocks want, and Jocks haven’t decided among themselves that bullying is a stupid status game that wastes their time, and we’re otherwise in the Least Convenient Possible World with regards to stopping bullying. Is there any way assuming Economists’ Paradise solves the problem then?

Maybe. Just go around to little kids, age two or so, and say “Look. At this point, you really don’t know whether you’re going to grow up to be a Jock or a Nerd. You want to sign this pact in which everyone who grows up to be a Jock promises not to bully anyone who grows up to be a Nerd?” Keeping the same assumption that bullying is net negative utility, we expect the toddlers to sign. Yeah, in the real world two-year-olds aren’t the best moral reasoners, but good thing we’re in Economists’ Paradise where we assume such problems away by fiat.

Is there an Even Less Convenient Possible World? Suppose bullying is racist rather than popularity-based, with all the White kids bullying the Black kids. You go to the toddlers, and the White toddlers retort “Even at this age, we know very well that we’re White, thank you very much.”

So just approach them in the womb, where it’s too dark to see skin color. If we’re letting two year olds sign contracts, why not fetuses?

Okay. One reason might be that we’ve just locked ourselves into being fanatically pro-life merely by starting with weird assumptions. Another might be that we could counterfactually mug fetuses by saying something like “You’re definitely a human, but for all you know the world is ruled by Lizardmen with only a small human slave population, and if Lizardmen exist then they will torture any humans who did not agree, in the womb, that upon being born and finding that Lizardmen do not exist, they would spend all their time and energy trying to create Lizardmen.”

(Frick. I think I just created a new basilisk by breeding the Rokolisk and the story of 9-tsiak. Good thing it only works on fetuses.)

(I wonder if this is the first time in history anyone has ever used the phrase “counterfactually mug fetuses” as part of a serious intellectual argument.)

So I’m not saying this theory doesn’t have any holes in it. I’m just saying that it seems, at least in principle, like the idea of Economists’ Paradise might be sufficient to derive Rawls’ Veil of Ignorance, which in turn bridges the chasm that separates it from Utilitarians’ Paradise.

IV.

I think this is the solution to the various questions raised in You Kant Dismiss Universalizability. The reason universalizability is important is that the universal maxims are the agreements that everyone or nearly everyone would sign. This leads naturally to something like utilitarianism for the reasons mentioned in Part III. And it doesn’t produce the weird paradoxes like “If morality is universalizability, how do you know whether a policeman overpowering and imprisoning a criminal universalizes to ‘police should be able to overpower and imprison criminals’ or ‘everyone should be able to overpower and imprison everyone else’?” Everyone would sign an agreement allowing the first, but not the second.

But before we really explore this, a few words on “everyone would sign”.

Suppose one very stubborn annoying person in Economists’ Paradise refused to sign an agreement that police should be allowed to arrest criminals. Now what?

“All game theory is solved perfectly” is a really powerful assumption, and the rest of the world has a lot of leverage over this one person. Suppose everyone else said “You know, we’re all signing an agreement that none of us are going to murder one another, but we’re not going to let you into that agreement unless you also sign this agreement which is very important to us.”

Actually, that sounds too evil and blackmailing. There’s a better way to think of it. Suppose there are one hundred agreements. 99% of the population agrees to each, and in fact it’s a different 99% each time. That is, divide the population into one hundred sets of 1%, and each set will oppose exactly one of the agreements – there is no one who opposes two or more. Each agreement only works (or works best) when one hundred percent of the population agrees to it.

Very likely everyone will strike a deal in which each of the one hundred 1% blocs agrees to give up its resistance to the one agreement it doesn’t like, in exchange for each of the other ninety-nine 1% blocs giving up its resistance to the agreements they don’t like.
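A sketch of why every bloc takes this deal, under the assumption (numbers invented) that each agreement that passes gives every supporting bloc one utilon and costs its lone opposing bloc some larger amount; as long as that cost stays under 99 utilons, the grand bargain leaves every bloc ahead:

```python
# Toy model of the hundred-bloc logrolling deal. BENEFIT and
# DISLIKE_COST are invented for illustration.
N = 100              # agreements; also the number of 1% blocs
BENEFIT = 1.0        # utility a bloc gains from each agreement it supports
DISLIKE_COST = 50.0  # utility a bloc loses from the one agreement it opposes

# Without the bargain, no agreement is unanimous, so nothing passes:
# every bloc gets a payoff of 0. With the bargain, all 100 pass;
# each bloc supports 99 of them and opposes exactly 1.
payoff_without_bargain = 0.0
payoff_with_bargain = (N - 1) * BENEFIT - DISLIKE_COST

# Every bloc prefers the grand bargain to no agreements at all
assert payoff_with_bargain > payoff_without_bargain
```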

Now we’re getting into meta-level Pareto improvements. If a pact would be positive-sum for people to agree on, the proponents of the pact can offer everyone else some compensation for them signing the pact. In theory it could be money or computer-fixing, but it might also be agreement with some of their preferred pacts.

There are a few possible outcomes of this process in the Platonic Economists’ Paradise, all of them interesting.

One is a patchwork of agreements, where everyone has to remember that they’ve signed agreements 5, 12, 98, and 12,671, but their next-door neighbor has signed agreements 6, 12, 40, and 4,660,102, so they and their neighbor are bound to cooperate on 12 but no others.

Another is that everyone is able to get their desired pacts to cohere into a single really big pact that they are all able to sign off on. Maybe there are a few stragglers who reject it at first, but this ends up being a terrible idea because now they’re not bound by really important agreements like “don’t murder” or “don’t steal”, so eventually they give in.

A third possibility combining the other two offers a unifying principle behind Whose Utilitarianism and Archipelago and Atomic Communitarianism. Everyone agrees to some very basic principles of respecting one another (call them “Noahide Laws”) but smaller communities agree to stricter rules that allow them to do their own thing.

But we don’t live in Platonic Economists’ Paradise. We live in the real world, where transaction costs are high and people have limited brainpower. Even if we were to try to instantiate Economists’ Paradise, it couldn’t be the one where we all have the complex interlocking patchwork agreements between one another. People wouldn’t sign off on it. Heck, I wouldn’t sign off on it. I would say “I’m not signing this until I have something that makes sense to me and can be implemented in a reasonable amount of time and doesn’t require me to check the List Of Everybody In The World before I know whether the guy next to me is going to murder me or not.” Practical concerns provide a very strong incentive to reject the patchwork solution and force everyone to cohere. So in practice – and I realize how hokey it is to keep talking about game-theoretically-perfect infinitely-rational infinitely-honest agents negotiating all possible agreements among one another, and then add on the term “in practice” to represent that they have trouble remembering what they decided – but in practice they would all have very large incentives to cohere upon a single solution that balances out all of their concerns.

We can think of this as moving along an axis from “Platonic” to “practical”. As we progress further, complicated agreements collapse into simpler agreements which are less perfect but easier to enforce and remember. We start to make judicious use of Schelling fences. We move from everyone in the world agreeing on exactly what people can and can’t do to things like “Well, you know your intuitive sense of niceness? You follow that with me, and I’ll follow that with you, and we’ll assume everyone else is in on the deal until they prove they aren’t.”

A metaphor: in a dream, your soul goes to Economists’ Paradise and agrees on the perfect patchwork of maxims with all the other souls there. But as dawn approaches, you realize when you awaken you will never remember all of what you agreed upon, and even worse, all the other souls there are going to wake up and not remember what they agreed upon either. So all of you together frantically try to compress your wisdom into a couple of sentences that the waking mind will be able to recall and follow, and you end up with platitudes like “Use your intuitive sense of niceness” and “do unto others as you would have others do unto you” and “try to maximize utility” and “anybody who treats you badly, assume they’re not in on the deal and feel free to treat them badly too, but not so badly that you feel like you can murder them or something.”

A particularly good platitude/compression might be “Work very hard to cultivate the mysterious skill of figuring out what people in the Economists’ Paradise would agree to, then do those things.” If you’re Greek, you can even compress it into a single word: phronesis.

V.

So by now it’s probably pretty obvious that this is an attempt to ground morality. I think the general term for the philosophical school involved is “contractualism”.

Many rationalists seem to operate on something like R.M. Hare’s two-level utilitarianism. That is, utilitarianism is the correct base level of morality, but it’s very hard to do, so in reality you’ve got to make do with less precise but more computationally tractable heuristics, like deontology and virtue ethics. Occasionally, when deontology or virtue ethics contradict themselves, each other, or your intuitions, you may have to sit down and actually do the utilitarianism as best you can, even though it will be inconvenient and very philosophically difficult.

For example, deontology may say things like “You must never kill another human being.” But in the trolley problem, the correct deontological action seems to violate our moral intuitions. So we go up a level, calculate the utility (which in this case is very easy, because it’s a toy problem invented entirely for the purposes of having easy utility calculation) and say “Huh, this appears to be one of those rare places where our deontological heuristics go wrong.” Then you switch the trolley.

But utilitarianism famously has problems of its own. You need a working definition of utility, which means not only distinguishing between hedonic utilitarianism, preference utilitarianism, etc, but coming up with a consistent model for measuring the strength of happiness and preferences. You need to distinguish between total utilitarianism, average utilitarianism, and a couple of other options I forget right now. You need a discount rate. You need to know whether creating new people counts as a utility gain or not, and whether removing people (isn’t that a nice euphemism) can even be counted as a negative if you make sure to do it painlessly and without any grief to those who remain alive. You need a generalized solution to Pascal’s Wagers and utility monsters. You need to know whether to accept or fudge away weird results like that you may be morally obligated to live your entire life to maximize anti-malaria donations. All of this is easy at the tails and near-impossible at the margins.

My previous philosophy was “Yeah, it’s hard, but I bet with sufficient intelligence, we can think up a consistent version of utilitarianism with enough epicycles that it produces an answer to all of these issues that most people would recognize as at least kind of sane. Then we can just go with that one.”

I still believe this. But that consistent version would probably fill a book. The question is: what is the person who decides what to put in this book doing? On what grounds are they saying “total utilitarianism is a better choice than average utilitarianism”? It can’t be on utilitarian grounds, because you can’t use utilitarian grounds until you’ve figured out utilitarianism, which you haven’t done until you’ve got the book. When God was deciding what to put in the Bible, He needed some criteria other than “make the decision according to Biblical principles”.

The standard answer is “we are starting with our moral intuitions, then simplifying them to a smaller number of axioms which eventually produce them”. But if the axioms fill a book and are full of epicycles to address individual problems, we’re not doing a very good job.

I mean, it’s still better than just trying to sort out all individual issues like “what is a just war?” on their own, because people will answer that question according to their personal prejudices (is my tribe winning it? Then it is so, so just) and if we force them to write the utilitarianism book at least they’ve got to come up with consistent principles and stick to them. But it is highly suboptimal.

And I wonder whether maybe the base level, the one that actually grounds utilitarianism, is contractualism. The idea of a Platonic parliament in which we try to enact all beneficial agreements. Under this model, utilitarianism, deontology, and virtue ethics would all be different heuristics that we use to approximate contractualism, the fragments we remember from our beautiful dream of Paradise.

I realize this is kind of annoying, especially in the sense of “the next person who comes along can say that utilitarianism, deontology, virtue ethics, and contractualism are heuristics for whatever moral theory they like, which is The Real Thing”. But the idea can do work! In particular, it might help resolve some of the standard paradoxes of utilitarianism.

First, are we morally obligated to wirehead everyone and convert the entire universe into hedonium? Well, would you sign that contract?

Second, is there anything wrong with killing people painlessly if they won’t be missed? After all, it doesn’t seem to cause any pain or suffering, or even violate any preferences – at least insofar as your victim isn’t around to have their preferences violated. Well, would you sign a contract in which everyone agrees not to do that?

Third, are we morally obligated to create more and more people with slightly above zero utility, until we are in an overcrowded slum world with everyone stuck at just-above-subsistence level (the Repugnant Conclusion)? Well, if you were making an agreement with everyone else about what the population level should be, would you suggest we do that? Or would you suggest we avoid it?
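The Repugnant Conclusion turns on exactly the total-vs-average choice left open in the list of utilitarianism’s free parameters above. A toy comparison, with population sizes and utility levels invented for illustration, shows the two criteria pulling apart:

```python
# Two hypothetical worlds (all numbers invented for illustration).
flourishing = {"people": 1_000, "utility_each": 10.0}
slum_world = {"people": 10_000_000, "utility_each": 0.01}

def total(world):
    """Total utilitarianism: sum of everyone's utility."""
    return world["people"] * world["utility_each"]

def average(world):
    """Average utilitarianism: per-person utility."""
    return world["utility_each"]

# Total utilitarianism ranks the overcrowded slum world higher...
assert total(slum_world) > total(flourishing)
# ...while average utilitarianism ranks the flourishing world higher.
assert average(flourishing) > average(slum_world)
```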

(this can be complicated by asking whether potential people get a seat in this negotiation, but Carl Shulman has a neat way to solve that problem)

Fourth, the classic problem of defining utility. If utility can be defined ordinally but not cardinally (i.e. you can declare that stubbing your toe is worse than a dust speck in the eye, but you can’t say it’s exactly 2.6 negative utilons) then utilitarianism becomes very hard. But contractualism doesn’t become any harder, except insofar as it’s harder to use utilitarianism as a heuristic for it.

I am not actually sure these problems are being solved, and I’m not just being led astray by contractualism being harder to model than utilitarianism and so it is easier for me to imagine them solved. But at the very least, it might be that contractualism is a different angle from which to attack these problems.

Of course, contractualism has problems of its own. It might be that different ways of doing the negotiations would lead to very different results. It might also be that the results would be very path-dependent, so that making one agreement first would end with a totally different result than making another agreement first. And this would be a good time to admit I don’t know that much formal game theory, but I do know there are multiple Nash equilibria and Pareto-optimal endpoints in a lot of problems and that in general there’s no such thing as “the correct game theoretic solution to this problem”, only solutions that fit more or fewer desirability criteria.
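The multiple-equilibria point shows up in even a two-player toy game. Here is a brute-force check of a Battle-of-the-Sexes-style payoff table (the numbers are illustrative) with two distinct pure-strategy Nash equilibria, neither of which is "the" correct solution:

```python
# Battle-of-the-Sexes-style game (illustrative payoffs): both players
# want to coordinate, but each prefers a different meeting point.
payoffs = {  # (row_move, col_move) -> (row_utility, col_utility)
    ("A", "A"): (2, 1),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (1, 2),
}
moves = ["A", "B"]

def is_nash(r, c):
    """A profile is Nash if neither player gains by deviating alone."""
    ru, cu = payoffs[(r, c)]
    row_ok = all(payoffs[(r2, c)][0] <= ru for r2 in moves)
    col_ok = all(payoffs[(r, c2)][1] <= cu for c2 in moves)
    return row_ok and col_ok

equilibria = [(r, c) for r in moves for c in moves if is_nash(r, c)]
# Two distinct equilibria, each favoring a different player
assert len(equilibria) == 2
```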

But to some degree this maps onto our intuitions about morality. One of the harder-to-believe things about utilitarianism was that it suggested there was exactly one best state of the universe. Our intuitions are very good at saying that certain hellish dystopias are very bad, and certain paradises are very good, but extrapolating them out to say there's a single best state is iffy at best. So maybe the ability of rigorous game theory to end in a multitude of possible good outcomes is a feature and not a bug.

I don’t know if it’s possible for certain negotiation techniques to end in extreme local minima where things don’t end up as a paradise at all. I mean, I know there’s lots of horrible game theory like the Prisoner’s Dilemma and the Pirate’s Dilemma and so on, but I’m defining the “good game theory” of the Economists’ Paradise to mean exactly the rules and coordination power you need to avoid those kinds of outcomes.
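The Prisoner's Dilemma is worth making concrete, since it is the canonical case of "horrible game theory": its only Nash equilibrium is Pareto-dominated, i.e., there is another outcome both players strictly prefer. A minimal sketch with conventional (illustrative) payoffs:

```python
# Illustrative sketch (standard textbook payoffs, my choice of numbers):
# in the Prisoner's Dilemma, mutual defection ("D", "D") is the unique
# Nash equilibrium, yet both players do strictly better under mutual
# cooperation ("C", "C") -- the outcome the Economists' Paradise
# coordination rules are defined to reach instead.
PAYOFFS = {
    ("C", "C"): (2, 2),
    ("C", "D"): (0, 3),
    ("D", "C"): (3, 0),
    ("D", "D"): (1, 1),
}
ACTIONS = ["C", "D"]

def is_nash(row, col):
    """True if neither player gains by unilaterally deviating."""
    r_pay, c_pay = PAYOFFS[(row, col)]
    return (all(PAYOFFS[(a, col)][0] <= r_pay for a in ACTIONS)
            and all(PAYOFFS[(row, a)][1] <= c_pay for a in ACTIONS))

nash = [profile for profile in PAYOFFS if is_nash(*profile)]
print(nash)  # only mutual defection survives
print(PAYOFFS[("D", "D")], "is Pareto-dominated by", PAYOFFS[("C", "C")])
```

The point of the "good game theory" stipulation is that agents in the Economists' Paradise have the contracts and enforcement needed to land on (2, 2) rather than the (1, 1) equilibrium.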

But there’s also a meta-level escape valve. If a certain set of negotiation techniques would lead to a local minimum where everything is Pareto-optimal but nobody is happy, then everyone would coordinate to sign a pact not to use those negotiation techniques.

VI.

To sum up:

The Economists’ Paradise of solved coordination problems would be enough to keep everyone happy and prosperous and free. We ourselves could live in that paradise if we followed its rules, which involve negotiating and adhering to agreements according to good economics and game theory, but these rules are hard to determine and hard to enforce.

We can sort of guess at what some of these rules might be, and when we do we can try to follow them. Some rules lend themselves to State enforcement. Others don’t, and we have to follow them quietly in the privacy of our own hearts. Sometimes the rules include rules about ostracizing or criticizing those who don’t follow the rules, and so even the ones the State can’t enforce are sorta kinda enforceable. Then we can spread them through a series of walled gardens, spontaneous order, and divine intervention.

The exact nature of the rules is computationally intractable, so we use heuristics most of the time. Through practical wisdom, game theory, and moral philosophy, we can improve our heuristics and approximate the rules more closely, with corresponding benefits for society. Utilitarianism is one especially good heuristic for the rules, but it’s also kind of computationally intractable. Utilitarianism helps us approximate contractualism, and contractualism helps us resolve some of the problems of utilitarianism.

One problem of utilitarianism I didn’t talk about is that it isn’t very inspirational. Following divine law is inspirational. Trying to become a better person, a heroic person, is inspirational. Utilitarianism sounds too much like math. I think contractualism solves this problem too.

Consider. There is an Invisible Nation. It is not a democracy, per se, but it is something of a republic, where each of us is represented by a wiser, stronger version of ourselves who fights for our preferences to be enacted into law. Its legislature is untainted by partisanship, perfectly efficient, incorruptible, without greed, without tyranny. Its bylaws are the laws of mathematics; its Capitol Building stands at the center of Platonia.

All good people are patriots of the Invisible Nation. All the visible nations of the world – America, Canada, Russia – are properly understood to be its provinces, tasked with executing its laws as best they can, and with proper consideration to the unique needs of the local populace. Some provinces are more loyal than others. Some seem to be in outright rebellion. The laws of the Invisible Nation contain provisions about what to do with provinces in rebellion, but they are vague and difficult to interpret, and its patriots can disagree on what they are.

Maybe one day we will create a superintelligence that tries something like Coherent Extrapolated Volition – which I think we have just rederived, kind of by accident. The various viceroys and regents will hand over their scepters, and the Invisible Nation will stand suddenly revealed to the mortal eye. Until then, we see through a glass darkly. As we learn more about our fellow citizens, as we gain new modalities of interacting with them like writing, television, the Internet – as we start crystallizing concepts like rights and utility and coordination – we become a little better able to guess.