Game theory has provided researchers in a variety of fields, from psychology to economics, an opportunity to test human behaviors under controlled conditions. It allows big questions (are humans rational actors when money is on the line, for example) to be tested in situations where behaviors that deviate from expectations are easy to detect. The Ultimatum Game is one such experiment, and it has been used to show that humans aren't purely rational when it comes to monetary decisions, as they appear willing to make financial sacrifices in order to punish others in the name of fairness. A paper that will appear in PNAS this week takes things a step further and shows that people will still reject unfair monetary offers even when the only person they punish is themselves.

The basic rules of the Ultimatum Game are simple. One person is given a stack of cash and told to divide it between themselves and a second party. That second party is then given the chance to accept or reject the offer; if it's rejected, neither of them gets any money. Since any amount of this free money is better than nothing, under assumptions of strictly rational behavior you might expect all offers to be accepted.
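To make that payoff structure concrete, here's a minimal sketch of the rule in Python. The function name, the $10 stake, and the splits are mine, chosen purely for illustration; they aren't parameters from the paper.

```python
def ultimatum_payoffs(total, proposer_keeps, accepted):
    """Return (proposer, responder) payoffs for one round."""
    if accepted:
        return proposer_keeps, total - proposer_keeps
    return 0, 0  # a rejection wipes out the money for both players

# An 80/20 split of $10: accepting nets the responder $2;
# rejecting leaves both players with nothing.
print(ultimatum_payoffs(10, 8, accepted=True))   # -> (8, 2)
print(ultimatum_payoffs(10, 8, accepted=False))  # -> (0, 0)
```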

They're not. Offers in the neighborhood of a 50/50 split are accepted, but as the proportions shift to the point where the person issuing the ultimatum tries to keep 70 percent of the total, rejections increase. By the time the split hits 80/20, nearly 70 percent of the offers are rejected, even though accepting 20 percent of the total cash would leave the recipient better off than they started.

It's still possible to interpret this behavior as being rational within a social context. A lot of human behavior, and that of other primates, seems to be focused on ensuring cooperative behavior within small groups. The rejection of offers within the Ultimatum Game can be viewed as a form of punishment for unfair behavior. In that light, the rejection may make sense to the degree that the immediate loss of money provides a long-term incentive for fair and cooperative behavior within a group. Rational economic behavior is restored.

The new paper pretty much blows that explanation out of the water by testing individuals using a couple of variations of the Ultimatum Game. In the first, which the authors term "the Impunity Game," the person making the offer gets their share of the cash regardless of whether the offer is accepted. In this game, the only consequence of a rejection is the potential guilt caused by the knowledge that an offer was turned down. Rejection rates do drop, but they remain substantial: offers of an 80/20 split were rejected over 40 percent of the time (down from around 70 percent) despite the lack of real economic consequences for the proposer.
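In payoff terms, the only change from the standard game is that a rejection no longer touches the proposer's share. A sketch of that rule, again with illustrative numbers of my own:

```python
def impunity_payoffs(total, proposer_keeps, accepted):
    """Impunity Game: a rejection hurts only the responder."""
    proposer = proposer_keeps  # paid regardless of the responder's choice
    responder = (total - proposer_keeps) if accepted else 0
    return proposer, responder

# Rejecting an 80/20 split of $10 now costs the responder $2
# while leaving the proposer's $8 untouched.
print(impunity_payoffs(10, 8, accepted=False))  # -> (8, 0)
```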

To really nail things down, the authors conducted tests of a Private Impunity Game, in which the person who made the offer wasn't even informed of whether it was rejected; they simply walked away with their share of the cash. Here, even the nebulous hope that the person making the offer would feel pangs of guilt over a rejection was removed. Rejection rates were essentially unchanged. People kept rejecting offers they perceived as unfair, even if, like the proverbial tree in the forest, no one would hear their rejection.
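Payoff-wise, the Private Impunity Game is identical to the Impunity Game; the only difference is informational. One way to model that (my own framing, not the authors' formalism) is to track what the proposer learns separately from what anyone is paid:

```python
def private_impunity_round(total, proposer_keeps, accepted, private=True):
    """Private variant: same payoffs, but the proposer may never
    learn the responder's decision."""
    proposer = proposer_keeps  # paid regardless, as in the Impunity Game
    responder = (total - proposer_keeps) if accepted else 0
    feedback = None if private else accepted  # None: proposer learns nothing
    return proposer, responder, feedback

# The rejection changes nothing the proposer will ever see.
print(private_impunity_round(10, 8, accepted=False))  # -> (8, 0, None)
```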

In another hint at the nature of this response, the authors describe a similar study in which the Impunity Game was explained to participants as a series of if/then statements: "if A chooses X and B chooses Y, then A receives $i and B receives $j." Here, when subjects were forced to reason through the conditions to figure out that their rejections didn't impose any sort of financial punishment on the person making the offer, rates of rejection were about the same as they are in the Ultimatum Game. This suggests that people can't even be bothered to perform a rational analysis when money is on the line, much less engage in rational actions.
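That if/then framing is essentially the same game presented as a bare payoff table. Something like the following (dollar amounts are mine, for illustration) captures what subjects had to reason through:

```python
# The Impunity Game's 80/20 condition spelled out as if/then rules.
# Reading the table, a responder can work out that "reject" never
# reduces the proposer's payout.
PAYOFFS = {
    # responder_choice: (proposer_gets, responder_gets)
    "accept": (8, 2),
    "reject": (8, 0),  # the proposer is paid either way
}
for choice, (a, b) in PAYOFFS.items():
    print(f"if B chooses {choice}: A receives ${a}, B receives ${b}")
```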

The lack of objective analysis is also supported by a number of results indicating that changes in the levels of hormones and neurotransmitters (testosterone, serotonin, and oxytocin, for example) can shift the average response to unfair offers.

Given that there's essentially no way to put a rational-actor gloss on these results, the authors attempt to explain them through an emotional response that sounds much like a gorilla's chest beating. Our emotions commit us to these sorts of displays despite their irrational nature, and force us to follow through on them often enough to make sure everyone knows it's not an idle threat. Nine times out of 10, the chest beating may be just a display, but is anyone willing to risk the chance that a given instance will turn out to be the exception?

The problem with this explanation is that it adds a layer of complexity (a mechanism that ensures a degree of commitment to an emotional response) on top of what's essentially a simple situation: people act without thinking. Earlier this year, I attended a discussion titled "Evolution and the Ethical Brain" in which researchers argued that our ethical decision making (such as how to respond to unfair financial offers) is performed by a system that operates in much the same way as those that handle sensory input: it makes snap judgments that allow us to respond quickly and get on with things. The more elaborate ethical debates we engage in are largely post-hoc rationalizations of those earlier decisions.

From this perspective, the snap judgment is that an offer is unfair. Sometimes we can engage the post-hoc rationalization, in this case involving the economics of the situation, and override our ethical calculations. But in a substantial fraction of cases, we never get the chance, as we act on our snap decision before that process can occur.

PNAS, 2009. DOI: 10.1073/pnas.0900636106