Is there such a thing as an objective basis of morality? For some time, in secular circles, the idea has seemed absurd. Morality is what we choose it to be. We are free to do what we like so long as we don’t harm others. Moral judgments are not truths but choices. There is no way of getting from “is” to “ought,” from description to prescription, from facts to values, from science to ethics. This was the received wisdom in philosophy for a century after Nietzsche had argued for the abandonment of morality – which he saw as the product of Judaism – in favor of the “will to power.”

Recently, however, an entirely new scientific basis has been given to morality from two surprising directions: neo-Darwinism and the branch of mathematics known as game theory. As we will see, the discovery is intimately related to the story of Noah and the covenant made between G-d and humanity after the Flood.

Game theory was invented by one of the most brilliant minds of the 20th century, John von Neumann (1903-1957). He realized that the mathematical models used in economics were unrealistic and did not mirror the way decisions are made in the real world. Rational choice is not simply a matter of weighing alternatives and deciding between them. The reason is that the outcome of our decision often depends on how other people react to it, and usually we cannot know this in advance. Game theory, which von Neumann set out systematically in 1944, was an attempt to produce a mathematical representation of choice under conditions of uncertainty. Six years later, it yielded its most famous paradox, known as the Prisoner’s Dilemma.

Imagine two people arrested by the police on suspicion of committing a crime. There is insufficient evidence to convict them of the serious charge; there is only enough to convict them of a lesser offense. The police decide to encourage each to inform against the other. They separate them and make each the following proposal: if you testify against the other suspect, you will go free, and he will be imprisoned for ten years. If he testifies against you, and you stay silent, you will be sentenced to ten years in prison, and he will go free. If each of you testifies against the other, you will both receive a five-year sentence. If both of you stay silent, you will each be convicted of the lesser charge and face a one-year sentence.

It doesn’t take long to work out that the rational strategy for each is to inform against the other: whichever choice the other suspect makes, informing leaves you better off – going free rather than serving a year if he stays silent, and serving five years rather than ten if he testifies. The result is that each will be imprisoned for five years. The paradox is that the best outcome would be for both to remain silent; they would then face only one year in prison. The reason that neither will opt for this strategy is that it depends on collaboration. However, since each is unable to know what the other is doing – there is no communication between them – neither can take the risk of staying silent. The Prisoner’s Dilemma is remarkable because it shows that two people, both acting rationally, will produce a result that is bad for both of them.
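For readers who like to see the logic spelled out, here is a minimal sketch in Python; the move names and prison terms simply restate the story above, and the small best_response function checks which move leaves a prisoner better off whatever his partner does.

```python
# A minimal sketch of the one-shot Prisoner's Dilemma described above.
# "silent" means cooperating with the other prisoner; "testify" means informing.
# Each entry gives the prison terms as (my_years, other_years).

SENTENCES = {
    ("silent", "silent"):   (1, 1),    # both convicted of the lesser offense
    ("silent", "testify"):  (10, 0),   # I stay silent, he informs: I serve ten years
    ("testify", "silent"):  (0, 10),   # I inform, he stays silent: I go free
    ("testify", "testify"): (5, 5),    # we inform on each other: five years each
}

def best_response(other_move):
    """Whatever the other prisoner does, which move gives me fewer years?"""
    return min(("silent", "testify"),
               key=lambda my_move: SENTENCES[(my_move, other_move)][0])

for other in ("silent", "testify"):
    print(f"If the other prisoner chooses {other!r}, my best move is {best_response(other)!r}")

# Both prisoners reason the same way, so both testify and serve five years each,
# even though mutual silence would have cost each of them only one year.
print("Outcome if both follow this logic:", SENTENCES[("testify", "testify")])
print("Outcome if both stay silent:      ", SENTENCES[("silent", "silent")])
```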

Eventually, a solution was discovered. The reason for the paradox is that the two prisoners find themselves in this situation only once. If it happened repeatedly, they would eventually discover that the best thing to do is to trust one another and cooperate.
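How much difference repetition makes can be seen with an equally small sketch, assuming – purely for illustration – a partner who begins by trusting and thereafter simply mirrors whatever you did in the previous round. Over twenty such rounds, constant informing costs far more than consistent silence.

```python
# A minimal sketch of repeated play, with the same prison terms (redefined here
# so the snippet runs on its own). The partner starts by trusting and thereafter
# simply copies my previous move.

SENTENCES = {
    ("silent", "silent"):   (1, 1),
    ("silent", "testify"):  (10, 0),
    ("testify", "silent"):  (0, 10),
    ("testify", "testify"): (5, 5),
}

def years_served(my_rule, rounds=20):
    """My total prison-years over repeated rounds against a mirroring partner."""
    partner_move, total = "silent", 0
    for _ in range(rounds):
        my_move = my_rule(partner_move)
        total += SENTENCES[(my_move, partner_move)][0]
        partner_move = my_move           # next round the partner copies me
    return total

print("Always testify:", years_served(lambda last: "testify"))  # 0 + 19 * 5 = 95 years
print("Always silent: ", years_served(lambda last: "silent"))   # 20 * 1     = 20 years
```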

In the meantime, biologists were wrestling with a phenomenon that puzzled Darwin. The theory of natural selection – popularly known as the survival of the fittest – suggests that the most ruthless individuals in any population will survive and hand their genes on to the next generation. Yet almost every society ever observed values individuals who are altruistic: who sacrifice their own advantage to help others. There seems to be a direct contradiction between these two facts.

The Prisoner’s Dilemma suggested an answer. Individual self-interest often produces bad results. Any group that learns to cooperate rather than compete will be at an advantage relative to others. But, as the Prisoner’s Dilemma showed, this needs repeated encounters – the so-called Iterated (that is, repeated) Prisoner’s Dilemma. In the late 1970s, the political scientist Robert Axelrod announced a competition to find the computer program that did best at playing the Iterated Prisoner’s Dilemma against itself and other opponents.
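What such a competition involves can be sketched in a few lines. The three strategies below are illustrative stand-ins, not the programs actually entered: each one plays every other (and a copy of itself) over many rounds, and the prison-years are added up, the lower the better.

```python
# A toy round-robin tournament in the spirit of that competition. The three
# strategies are illustrative stand-ins, not the programs actually entered.
# Each strategy sees the opponent's past moves and returns "silent" or "testify".

SENTENCES = {
    ("silent", "silent"):   (1, 1),
    ("silent", "testify"):  (10, 0),
    ("testify", "silent"):  (0, 10),
    ("testify", "testify"): (5, 5),
}

def always_testify(opponent_history):
    return "testify"

def always_silent(opponent_history):
    return "silent"

def copy_last(opponent_history):
    # Start silent, then repeat whatever the opponent did last round.
    return opponent_history[-1] if opponent_history else "silent"

def match(strat_a, strat_b, rounds=50):
    """Play one match; return the total years served by each side (lower is better)."""
    history_a, history_b = [], []
    years_a = years_b = 0
    for _ in range(rounds):
        move_a = strat_a(history_b)      # each side sees only the other's past moves
        move_b = strat_b(history_a)
        a, b = SENTENCES[(move_a, move_b)]
        years_a, years_b = years_a + a, years_b + b
        history_a.append(move_a)
        history_b.append(move_b)
    return years_a, years_b

strategies = [always_testify, always_silent, copy_last]
totals = {s.__name__: 0 for s in strategies}
for s in strategies:
    for t in strategies:                 # every program meets every other, itself included
        years, _ = match(s, t)
        totals[s.__name__] += years

for name, years in sorted(totals.items(), key=lambda item: item[1]):
    print(name, years)
```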