You are probably familiar with Pascal’s Wager – the idea that it is worth believing in God in order to increase your probability of going to heaven and lower your probability of going to hell. More generally, for an expected utility maximiser it will always be worth doing something that offers any probability of an infinite utility, no matter how low that probability.

My impression is that most folks think this argument is nonsense. I am not so sure. I recently met Amanda Montgomery, who is at NYU studying the challenges that infinite values present for decision theory. In her view, nobody has produced a sound solution to Pascal’s Wager and other infinite ethics problems.

A common response, and one I had previously accepted, is that we also need to consider the possibility of a ‘professor God’ who rewards atheists and punishes believers. As long as you place some probability on this being the case, then being an atheist, as well as being a believer, appears to offer an infinite payoff. Therefore it doesn’t matter what you believe.

This logic relies on two premises. Firstly, that a*∞ = b*∞ = ∞ for any a > 0 and b > 0. Secondly, that in ranking expected utility outcomes, we should be indifferent between any two positive probabilities of an infinite utility, even if they are different. That would imply that a certainty of going to ‘Heaven’ was no more desirable than a one-in-a-billion chance. Amanda points out that while these statements may both be true, if you have any doubt that either is true (p < 1), then Pascal’s Wager appears to survive. The part of your ‘credence’ in which a higher probability of infinite utility should be preferred to a lower one will determine your decision and allow the tie to be broken. Anything that made you believe that some kinds of Gods were more likely or easy to appease than others, such as internal consistency or historical evidence, would ensure you were no longer indifferent between them.
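That tie-breaking idea can be made concrete with a toy lexicographic decision rule. This is my own illustrative sketch, not anything from Amanda's work, and the probabilities are invented: each gamble is summarised as a pair (probability of an infinite payoff, expected finite payoff), and pairs are compared lexicographically so that any higher chance of infinity dominates, with finite utility breaking exact ties.

```python
# Toy lexicographic decision rule (illustrative only): a gamble is a tuple
# (p_infinite, finite_ev). Python's tuple comparison is lexicographic, so
# a higher probability of an infinite payoff always wins, and expected
# finite utility matters only when those probabilities are exactly equal.

def better(g1, g2):
    """Return True if gamble g1 is strictly preferred to gamble g2."""
    return g1 > g2  # tuple comparison: compares p_infinite first

# Hypothetical numbers: belief carries a slightly higher credence in an
# infinite reward but some finite cost; atheism carries a smaller
# 'professor God' chance of infinite reward and no finite cost.
belief = (0.010, -5.0)
atheism = (0.001, 0.0)

print(better(belief, atheism))  # True: higher chance of infinity dominates
```

On this rule, the finite cost of belief is irrelevant as long as the credences in the two infinite payoffs differ at all, which is exactly why internal consistency or historical evidence, by nudging those credences apart, would break the supposed tie.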

Some might respond that it would not be possible to convert sincerely with a ‘Pascalian’ motivation. This might be true in the immediate term, but presumably given time you could put yourself in situations where you would be likely to develop a more religious disposition. Certainly, it would be worth investigating your capacity to change with an infinite utility on the line! And even if you could not sincerely convert, if you believed it was the right choice and had any compassion for others, it would presumably be your duty to set about converting others who could.

On top of the possibility that there is a God, it also seems quite imaginable to me that we are living in a simulation of some kind, perhaps as a research project of a singularity that occurred in a parent universe. There is another possible motivation for running such simulations. I am told that if you accept certain decision theories, it would appear worthwhile for future creatures to run simulations of the past, and to reward or punish the participants based on whether they acted in ways that were beneficial or harmful to beings expected to live in the future. On realising this, we would be uncertain whether we were in such a simulation or not, and so would have an extra motivation to work to improve the future. However, given finite resources in their universe, these simulators would presumably not be able to dole out infinite utilities, and so would be dominated, in terms of expected utility, by any ‘supernatural’ creator that could.

Extending this point, Amanda notes the domination of ‘higher cardinality’ infinities over lower cardinalities. The slightest probability of an infinity-aleph-two utility would always trump a certain infinity-aleph-one. I am not sure what to do about that. The issue has hardly been researched by philosophers and seems like a promising area for high impact philosophy. I would appreciate anyone who can resolve these weird results so I can return to worrying about ordinary things!
