Here's a puzzle. Take the dictator game, a one-shot, two-player, anonymous experimental game. The dictator is given a sum of money, say, $10. She may keep it all or give some amount to the responder. Strict economic rationality suggests that the dictator would keep it all. Evolutionary psychology concurs: Why give resources to a complete stranger?

It turns out, however, that many dictators actually give some portion of the money to the anonymous stranger. Reviews of the literature on dictator game experiments find that the modal amount left by dictators for responders hovers around 30 percent. Why are dictators generous to someone they will never meet or interact with again?

A new study in the latest issue of the Proceedings of the National Academy of Sciences by University of California, Santa Barbara evolutionary psychologists Andrew Delton, Max Krasnow, Leda Cosmides, and John Tooby suggests that human generosity evolved as a response to making cooperative decisions in the face of social uncertainty: specifically, uncertainty about whether any given interaction is a one-shot deal or might be repeated in the future.

As the U.C. Santa Barbara researchers note, the results of experiments like the dictator game not only "violate standard theories drawn from economics, but they also violated the predictions of widely accepted models of fitness maximization in evolutionary biology—models that (in the absence of kinship) similarly predict selfishness. Natural selection is relentlessly utilitarian, and is expected to replace designs that unnecessarily give up resources without return with those that retain those resources for enhanced reproduction or kin-directed investment." On the face of it, natural selection should weed out nice guys.

In recent years, various anthropologists and economists have suggested that group selection could explain the apparent paradox of human prosociality. This new study argues that group selection theory is not necessary.

To probe the puzzle of seemingly excessive human generosity toward strangers, the researchers ran computer simulations in which agents interacted with one another over 10,000 generations. Agents that evolved more effective strategies (that is, gained greater resources) in dealing with other agents left more descendants as the simulation played out. The key insight is that behaving differently in one-shot versus repeated interactions requires the capacity to distinguish between the two situations, and in the simulations agents are never certain which kind of interaction they face. This uncertainty has profound effects on how agents decide to interact with strangers.

An agent can make two different errors. He can mistakenly assume the interaction will be repeated and risk being exploited in a one-shot interaction. Or he can exploit a stranger when he mistakenly assumes the interaction is one-shot, and risk losing the greater benefits that would have accrued over the long term in what turns out to be a repeated interaction. The experimenters note, "If the two errors inflict costs of different magnitudes, selection will favor a betting strategy that buys a reduction in the more expensive error type with an increase in the cheaper type." The more costly error is generally to assume that you will not have repeated interactions with another agent. Thus, the researchers find, "These asymmetric costs evolutionarily skew decision-making thresholds in favor of cooperation; as a result, fitter strategies will cooperate 'irrationally' even when given strong evidence that they are in one-shot interactions." In a press release, Cosmides explained, "Without knowing why, the mind is skewed to be generous to make sure we find and cement all those valuable, long-term relationships."
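The asymmetry can be made concrete with a back-of-the-envelope expected-value calculation. The payoff numbers below are illustrative assumptions, not figures from the paper:

```python
# Toy model of the error asymmetry. All payoff numbers are assumed
# for illustration; they are not taken from the PNAS study.
COST_EXPLOITED_ONCE = 1.0    # small cost: cooperated in a truly one-shot game
LOST_FUTURE_BENEFIT = 10.0   # large cost: defected on a would-be repeat partner

def expected_gain_from_cooperating(p_repeat: float) -> float:
    """Expected payoff of cooperating, relative to defecting, when the
    agent estimates probability p_repeat that the interaction repeats."""
    return p_repeat * LOST_FUTURE_BENEFIT - (1 - p_repeat) * COST_EXPLOITED_ONCE

def should_cooperate(p_repeat: float) -> bool:
    return expected_gain_from_cooperating(p_repeat) > 0

# Break-even point: p * 10 = (1 - p) * 1, i.e. p = 1/11, about 0.09.
# Even an agent who is roughly 91 percent sure the game is one-shot
# still does better, in expectation, by cooperating.
```

Raising the assumed value of a long-term relationship pushes the break-even probability even lower, which is the skew toward "irrational" cooperation that the researchers describe.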

The U.C. Santa Barbara experimenters ran two types of simulations. In the first, agents obtain information about the probability that they are facing a one-shot or a repeated interaction with another agent. In one such run, an agent who believes she is facing a one-shot interaction is wrong only 16 percent of the time, and repeated interactions, when they occur, last a relatively short average of 5 to 10 rounds. Even so, "agents with a one-shot belief nonetheless evolve to cooperate a remarkable 87 to 96 percent of the time, respectively." Lower the probability that an agent is wrong about an interaction being one-shot to just 7 percent, combined with an average of 10 interactions, and cooperation still evolves among agents with a one-shot belief 47 percent of the time.

In the second version of the simulation, agents followed the rule: defect if you believe the interaction is one-shot; otherwise, cooperate. However, they were allowed to evolve the threshold of evidence at which they decide an interaction is likely to be repeated. Because being wrong about a repeated interaction is the costlier error, the agents erred on the side of caution, evolving belief thresholds that led them to cooperate about 60 percent of the time even when their interaction was actually one-shot. On the flip side, these cautious belief thresholds meant that they defected only 1 percent of the time when their interaction was actually repeated.
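A minimal sketch of this kind of threshold-evolution simulation might look like the following. All parameters (payoffs, base rates, cue noise, population size) are assumptions chosen for illustration, not the values used in the actual study:

```python
import random

random.seed(0)

# Assumed, illustrative parameters -- not taken from the study.
B_REPEAT = 10.0   # benefit of cooperating when the interaction repeats
C_ONESHOT = 1.0   # cost of being exploited in a one-shot interaction
P_REPEAT = 0.5    # base rate of repeated interactions
NOISE = 0.3       # standard deviation of the evidence cue

def fitness(threshold: float, trials: int = 200) -> float:
    """Total payoff of an agent who cooperates whenever the noisy cue
    for 'this interaction will repeat' reaches its belief threshold."""
    total = 0.0
    for _ in range(trials):
        repeated = random.random() < P_REPEAT
        # Noisy evidence: centered on 1 if repeated, on 0 if one-shot.
        cue = (1.0 if repeated else 0.0) + random.gauss(0.0, NOISE)
        if cue >= threshold:  # cooperate unless evidence of one-shot is strong
            total += B_REPEAT if repeated else -C_ONESHOT
        # Defection scores 0 either way: no exploitation loss, no future gain.
    return total

def evolve(generations: int = 50, pop: int = 40) -> float:
    """Evolve belief thresholds by truncation selection plus mutation;
    returns the population's mean threshold."""
    thresholds = [random.random() for _ in range(pop)]
    for _ in range(generations):
        parents = sorted(thresholds, key=fitness, reverse=True)[: pop // 2]
        thresholds = [
            min(1.0, max(0.0, random.choice(parents) + random.gauss(0.0, 0.05)))
            for _ in range(pop)
        ]
    return sum(thresholds) / len(thresholds)
```

With the repeated-relationship benefit set ten times larger than the one-shot cost, selection in this toy model tends to favor low thresholds, that is, agents who cooperate even on fairly strong evidence that the game is one-shot, mirroring the study's qualitative result.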

Other research shows that concern about reputation also helps to motivate people to behave cooperatively. The U.C. Santa Barbara researchers speculate, "If defection damages one's reputation among third parties, thereby precluding cooperation with others aside from one's current partner, defection would be selected against far more strongly." John Tooby, in a press release, asserts that his group's research supports the happy conclusion, "People who help only when they can see a gain do worse than those who are motivated to be generous without always looking ahead to see what they might get in return." In other words, being nice is a winning strategy when it comes to economics and evolution.

Ronald Bailey is Reason's science correspondent. His book Liberation Biology: The Scientific and Moral Case for the Biotech Revolution is now available from Prometheus Books.