Ever wonder why some people have a tendency to be nice—even in situations where it costs them—while others are constantly out for themselves? Psychologists from Yale University have built a formal mathematical model, combining game theory with ideas from behavioral economics, to show how humans evolved to develop two distinct strategies for cooperation. And while some people lean towards selflessness, others are consistently selfish.

In the standard game-theory setup, the prisoner's dilemma (explained in full in the video below), agents benefit most if they act selfishly and refuse to cooperate with their partner. The iconic setup asks two people to imagine that they are prisoners from the same gang who must decide whether to betray each other or stay silent: if one betrays the other, the betrayer goes free while the silent partner is punished; if both stay silent, both get off lightly; and if both betray, both are punished, leaving each worse off than if they had cooperated.
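The dilemma's structure can be made concrete with a short sketch. The payoff numbers below are hypothetical, chosen only to show why betrayal is the individually rational move even though mutual silence is better for both:

```python
# Illustrative prisoner's dilemma payoffs (higher is better for the prisoner);
# these numbers are made up for illustration, not taken from the study.
PAYOFFS = {
    # (my move, partner's move): (my payoff, partner's payoff)
    ("betray", "betray"): (1, 1),    # both punished
    ("betray", "silent"): (3, 0),    # betrayer goes free, partner suffers
    ("silent", "betray"): (0, 3),
    ("silent", "silent"): (2, 2),    # mutual silence beats mutual betrayal
}

def best_response(partner_move):
    """Return the move that maximizes my own payoff against a fixed partner move."""
    return max(("betray", "silent"),
               key=lambda my_move: PAYOFFS[(my_move, partner_move)][0])

# Whatever the partner does, betraying pays more for me individually...
assert best_response("silent") == "betray"
assert best_response("betray") == "betray"
# ...yet mutual silence (2, 2) leaves both better off than mutual betrayal (1, 1).
```

Because betrayal is the best response to either choice, two purely rational agents both betray and end up worse off than two cooperators, which is the dilemma.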

Game theory decision-making is based entirely on reason, but humans don’t always behave rationally. David Rand, assistant professor of psychology, economics, cognitive science, and management at Yale University, and psychology doctoral student Adam Bear incorporated theories on intuition into their model, allowing agents to make a decision either based on instinct or rational deliberation.

In the model, agents play many rounds of the prisoner's dilemma. But while some rounds have the standard setup, others introduce punishment for those who refuse to cooperate with a willing partner. Rand and Bear found that agents who went through many games with repercussions for selfishness became instinctively cooperative, though they could override their instinct to behave selfishly in cases where it made sense to do so.
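The setup described above can be sketched as a toy simulation. Everything here, the payoff values, the deliberation cost, and the function names, is an illustrative assumption, not a parameter of Rand and Bear's actual model:

```python
import random

COOPERATE, DEFECT = "cooperate", "defect"

def game_payoff(move, punished):
    # Hypothetical payoffs: defection normally pays 3 vs. 2 for cooperation,
    # but in games that punish selfishness a defector's payoff drops to 0.
    if move == DEFECT:
        return 0 if punished else 3
    return 2

def play(intuition, deliberates, punished, deliberation_cost=0.1):
    """One game: act on instinct, or pay a small cost to deliberate.

    Deliberation reveals whether this game punishes defection, letting the
    agent override its instinct when defecting is safe.
    """
    if deliberates:
        move = COOPERATE if punished else DEFECT
        return game_payoff(move, punished) - deliberation_cost
    return game_payoff(intuition, punished)

def average_payoff(intuition, deliberates, punish_fraction, trials=10_000):
    """Average payoff in an environment where a given fraction of games punish selfishness."""
    total = 0.0
    for _ in range(trials):
        punished = random.random() < punish_fraction
        total += play(intuition, deliberates, punished)
    return total / trials
```

In an environment where most games punish selfishness (say `punish_fraction=0.8`), an instinctive cooperator averages 2.0 per game while an instinctive defector who never deliberates averages only about 0.6, which is the flavor of the result: repeated exposure to repercussions makes cooperation the winning default.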

However, those who became instinctively selfish were far less flexible. Even in situations where refusing to cooperate was punished, they would not then deliberate and rationally choose to cooperate instead.

In the paper, published in Proceedings of the National Academy of Sciences on 11 January, the authors write:

“We find that, across many types of environments, evolution only ever favors agents who (i) always intuitively defect, or (ii) are intuitively predisposed to cooperate but who, when deliberating, switch to defection if it is in their self-interest to do so. Our model offers a clear explanation for why we should expect deliberation to promote selfishness rather than cooperation.”

Rand tells Quartz that the model is a mathematical reflection of evolution, both in terms of biological evolution and cultural evolution, namely individuals choosing to mimic effective tactics. “They look around and say, ‘Who’s doing well? Alright, I’m going to adopt that person’s strategy,’” he says.

And while the model mathematically demonstrates that deliberation only serves to undermine cooperation, it also shows that human instinct is determined by past interactions. “One of the key results you get out of the model is that depending on the fraction of interactions you have where future consequences exist, that shapes your intuition,” adds Rand.

This implies that people who grow up in families where selflessness is rewarded will develop an instinctively cooperative approach. Similarly, companies that do not deter employees from only thinking of themselves will likely have a selfish staff—even in cases where refusing to cooperate is actively harmful.

“It applies to interactions between friends, coworkers, family members—all interactions where you have the chance to do something that’s costly for you but beneficial for other people,” says Rand.

Cross-culturally, it suggests that those who grow up in countries without a strong rule of law will develop an uncooperative intuition, he adds.

Which means that if you meet an instinctively selfish person, reasoning is unlikely to persuade them to cooperate. But if you are that selfish person, you can blame evolution for your bad behavior.