A Treatise on Morality

The Cooperative Advantage

Judged on physical characteristics alone, alien biologists would easily dismiss humans. Our only natural-born physical talent is endurance running, useful for chasing down fast prey until it collapses from exhaustion, but not something that explains our status as the uncontested dominant lifeform on Earth. Even mentally we aren't that superior: raise a child without teachers, and you will get a dysfunctional animal. Octopuses manage to learn tool use on their own, something feral children are unable to do. A lone human is simply not made to survive in the wilderness.

Our mental capabilities flourish in groups, however. A child is taught and supported by her elders until she is old enough to do her part. Information and skills pass from person to person, groups organize and coordinate to achieve what none could do alone, and the victims of misfortune are helped, and in turn help others. A tribe of Homo sapiens is a formidable force on the savannah. At first blush this seems to run counter to natural selection, which is usually narrated as fierce competition: survival of the fittest. But nothing in the rules says you can't work with others. Ants, bees, dolphins and wolves all show how cooperation can give an advantage to the individual. And this cooperation bears a striking resemblance to what humans call morality.

An advantage to the individual, however, doesn't explain true self-sacrifice, like giving your life for someone else. This apparent conflict is resolved by the concept of the "selfish gene": if one of your genes makes you sacrifice yourself to save two of your siblings or children, who likely also carry the gene, the gene gains an advantage even if you personally don't. Worker ants are infertile, yet evolutionarily successful, because they help their mother, the queen, to reproduce.

Making morality about evolution and self-interest might rub you the wrong way. Should we only do a kind act once we know how it helps us? Should we only sacrifice ourselves for blood relatives? No, of course not. Natural selection doesn't care, and it doesn't judge. If you refuse to maximize your reproductive fitness, the only consequence is that your genes will be very slightly less common in the human population. Not much of a sacrifice. Your desire to be good is just as authentic, even if it has roots in selfish natural processes.

Honor Among Thieves

While the advantage of cooperation explains why animals would act morally, it doesn't explain why animals or people would ever not act morally. Wolves, ants and humans fight with other members of their species constantly. This, too, must bring some advantage; predictably, theft, murder and rape are all desirable to the perpetrator. To capture when it is and is not advantageous to act morally, we are going to play a little game. Or rather, we are going to overanalyze one.

You and your accomplice have been caught for a crime. The police only have enough evidence to put you both in prison for a year, so your interrogator offers you a deal: snitch on your accomplice, and they get five years while you go free. You quickly figure out that your accomplice was probably offered the same deal, and that if you both snitch, you each get around three years.

Should you snitch, or should you stay silent?

The mathematics are clear: if your buddy stays quiet, you should throw them under the bus. If they snitch, you should take them down with you. One should always snitch. Yet if you both snitch, you both get three years instead of one. This is the problem known as the prisoner's dilemma: in some situations, the win-win scenario leaves both parties vulnerable to betrayal, and is thus unattainable.
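The dominance argument can be checked mechanically. Here is a minimal Python sketch; the payoffs are the prison terms from the story above, and the names `YEARS` and `best_response` are my own:

```python
# Prison terms (lower is better), indexed by (my_move, their_move).
YEARS = {
    ("silent", "silent"): (1, 1),   # police evidence suffices for a year each
    ("silent", "snitch"): (5, 0),   # they betray you: you get five, they walk
    ("snitch", "silent"): (0, 5),   # you betray them: you walk, they get five
    ("snitch", "snitch"): (3, 3),   # mutual betrayal: around three years each
}

def best_response(their_move):
    """Return the move that minimizes my prison time, given their move."""
    return min(("silent", "snitch"), key=lambda mine: YEARS[(mine, their_move)][0])

# Whatever the accomplice does, snitching is the better reply:
assert best_response("silent") == "snitch"   # 0 years beats 1 year
assert best_response("snitch") == "snitch"   # 3 years beats 5 years
```

Snitching is a dominant strategy: it beats staying silent against either choice the accomplice makes, even though mutual silence would leave both better off.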

Once you know what to look for, the abstract version of this game plays out everywhere. Parallels can be drawn to nuclear disarmament and climate change. A part of addiction can be seen as a prisoner’s dilemma game between oneself now and oneself in the future. Even choosing whether to do dishes in a shared apartment can be modeled by the same mathematics.

Shooting for the win-win scenario only works if one can trust the other player to cooperate. To develop trust, one needs to know the other player, that is, to play repeated games with them. This works best if there are repeated interactions with multiple different players. Kind of like in real life.

In this situation, if the win-win payout is big enough and there are enough cooperative other players, a strategy known as “tit-for-tat” dominates: start of cooperating, then mirror the other player. This ensures the player will only get fooled once before retaliating, while working together with anyone willing to cooperate. This is what we see with morality too: benefit of the doubt is moral when given once, and stupidity when given much more than that. When someone commits a heinous act, we punish them to disincentivize them or anyone else from committing such an act.
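The dynamic can be sketched in a few lines of Python. This is a toy simulation, not a claim about any particular published tournament: the payoffs are the standard textbook points (mutual cooperation 3, mutual betrayal 1, successful betrayal 5, being betrayed 0), and all function names are my own:

```python
# Points to maximize, indexed by (row_move, column_move); "C" cooperates, "D" defects.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def play(a, b, rounds=10):
    """Play repeated rounds; return the total scores of players a and b."""
    score_a = score_b = 0
    hist_a, hist_b = [], []   # each player only sees the *other's* past moves
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Over ten rounds, `play(tit_for_tat, always_defect)` returns `(9, 14)`: tit-for-tat loses only the opening round before retaliating. Against a cooperator, `play(tit_for_tat, always_cooperate)` returns `(30, 30)`, the full win-win payout every round.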

Morality is game theory instantiated in biology. Not all morality models the prisoner's dilemma, of course: games such as the stag hunt (in which cooperation is better when others cooperate) and hawk-dove (in which cooperation is better when others betray you), or wholly other abstract games with a similar structure, can often be found. It all follows the same principle, however.

Morality under this view is an absolute mathematical concept instantiated in the fuzzy real world. Often a cooperative choice in one game is a betrayal in another, and without a clear view of the payout matrices, moral problems can quickly become unsolvable. This, however, is a question of imperfect knowledge, not of moral subjectivity.

Hume’s Guillotine

David Hume was a Scottish philosopher of the mid-1700s, and my personal favourite. One of my favourite ideas of his is known as Hume's Guillotine, one of the most metal terms in philosophy.

Hume observed that people of his time, when arguing for any standard of morality, used premises about what is to argue for what ought to be. Hume simply asked what they were doing: what logical warrant was there for such a leap? Modern philosophers, with a few exceptions, concede that there is none. You can't suddenly introduce a term into a logical argument that does not appear in its premises; otherwise you could introduce any term and wreck the structure of logic itself.

One way to make the term "ought" (everyone says "should" now, but the terminology is 300 years old, just go with it) sensible and usable is to define it in terms of goal-oriented behaviour: you ought to do X if doing X advances your goals. This grounds the otherwise vague word in the empirical concepts of goals and ways to achieve them, while retaining the word's most important property: if one is convinced they ought to do X, they also feel somewhat motivated to do X.

This definition has consequences: if someone doesn't enjoy sushi, they ought not eat sushi, no matter how freakish you find their distaste. While that might seem reasonable, the same goes for enjoying murder: if one enjoys murder and doesn't care about the myriad repercussions such an act would have, it makes no sense to say they shouldn't murder.

The concept of "ought" is thus separated from the concept of what is moral: morality, under this view, deals with objective empirical and mathematical facts, while what you ought to do depends only on your goals. The general confusion between the two results from people having very similar goals, including the goal of acting morally. The separation is easier to see in a thought experiment: a perfectly rational AI whose only goal in life is to make as many paperclips as possible can't be made to care about morality; that would interfere with making paperclips!

None of this means you shouldn't act morally if you don't value morality for its own sake. The very reason morality evolved is that cooperation is advantageous to the individual, and the threat of punishment is another reason to stay in line. But if neither of these motivates someone, you might as well debate morality with a crocodile.

Conclusion

This moral philosophy has been hard-fought for me, and I do not know many others who hold a similar view. I hope it works as a jumping-off point for you, and as an antidote both to moral relativism and to the various supernatural theories of religion. As a philosopher I am merely a hobbyist, so take everything I say with a grain of salt.

Links:

Morality as Cooperation, a proper scientific paper on the view of morality my position largely hinges on.

Evolution of Trust, a fun little game that explores the specifics of the game theory of morality.

Stanford Encyclopedia of Philosophy on Moral Naturalism, which has strongly inspired my views.