In a now-famous 1968 article in Science, ecologist Garrett Hardin discussed a situation he called the "tragedy of the commons." To illustrate, he described a group of cattle ranchers grazing their herds on public land. For each rancher, the decision to add another cow is easy: for every cow they can add, they (obviously) gain the entire value of that animal come market time. Is there a cost? Yes, each additional cow depletes the shared resource of the pasture a bit more, but the cost is spread across all the ranchers, which makes each incremental change that much harder to detect.

This arrangement may work for quite a while—as long as the number of cattle is much smaller than the land can support—but as the limit is reached, the effects become clear and the bill comes due. Despite increasingly problematic degradation, the equation stays the same: it's still in the immediate self-interest of each rancher to add another cow, even though it's not in the long-term interest of anyone. It's a focused benefit and a distributed cost. A rancher may even aim to let the others deal with the problem by downsizing their herds, while avoiding that revenue loss entirely.

Of course, there are other important factors that can influence the responses. For example, the higher a rancher perceives the risk of losing the pasture land to be, the more likely he or she is to cooperate. The authors of this new paper explore what happens when actors can alter their choices over time, rather than the one-shot affairs used in most models.

To investigate the question mathematically, they describe the following game: Hypothetical participants start with an equal endowment of money. Faced with a tragedy of the commons, they are given two choices: cooperate (contribute a fraction of their endowment to fixing the problem) or defect (keep the entire endowment for themselves). If a minimum group investment is not met, there is some probability (the risk) that everyone loses all their money. This adds a long-term consideration of risk to short-term choices.
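The game described above can be sketched as a short simulation. Everything here (the function name, the parameter values, and the simplified all-or-nothing loss) is illustrative rather than the paper's exact formulation:

```python
import random

def play_round(choices, endowment, threshold, risk, cost_fraction):
    """One round of a collective-risk game (a toy sketch, not the paper's model).

    choices: list of booleans, True = cooperate.
    Cooperators pay cost_fraction * endowment into a common pool; if the
    pool misses the threshold, everyone loses what they kept with
    probability `risk`.
    """
    cost = cost_fraction * endowment
    pool = sum(cost for c in choices if c)
    payoffs = [endowment - (cost if c else 0.0) for c in choices]
    if pool < threshold and random.random() < risk:
        payoffs = [0.0 for _ in payoffs]  # collective failure: all is lost
    return payoffs

# Example: two of three players cooperate and the pool clears the threshold,
# so defectors keep everything and cooperators keep the rest.
payoffs = play_round([True, True, False], endowment=1.0,
                     threshold=0.4, risk=1.0, cost_fraction=0.25)
```

With a certain disaster (`risk=1.0`) and no cooperators, everyone ends the round with nothing, which is the long-term consideration the authors add to the usual short-term calculation.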

Over time, the number of cooperators will increase as long as cooperating is perceived to be more successful than defecting. Cooperation will be more successful when the required contributions are small and the risks are high. The size of the minimum group investment will also modulate behavior; if it's high, it will be more difficult to initiate cooperation. But there's a sort of "critical mass": once it's reached, a high minimum investment will lead to a high degree of cooperation.

This is all reasonably intuitive. If you don't have to pitch in much to fix a huge problem, you'll take that deal. If the solution to the problem seems out of reach, you may consider contributing to be a waste until it looks tractable; then you'll want to pitch in.

Clearly, there's a large social component to all this as well, and the researchers found the dynamics of small groups to be quite different from large ones. This is where it gets interesting. They found that, as you decrease the number of participants in the game, social influence makes the group more likely to cooperate. We can relate to that as well. Think back to our ranchers—if there are only three of them, it's much less likely that one will try to freeload while letting the others deal with the problem. Why? Because the feeling of accountability grows. There's social pressure to help shoulder the burden.

The researchers apply this to nations as well. Their model indicates that small groups of nations focused on their own regions are more likely to forge an agreement than, say, all the parties at a global summit. That suggests that we may be going about dealing with climate change in an inefficient way. It gets better: the authors also consider a second model in which the small groups are linked in a complex network, and find that large-scale cooperation builds even more easily. As one group starts cooperating, it spurs cooperation in other groups connected with it. It's a pretty powerful suggestion. Rather than trying to achieve global consensus on action at a worldwide summit, perhaps we should be building smaller partnerships with concrete, local goals. That may actually be the quickest way to get the ball rolling towards serious solutions.
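The spreading effect described above can be caricatured with a toy contagion model. This is only an illustration of the idea—groups as nodes, cooperation passing along links—and not the network model actually used in the paper:

```python
import random

def spread_of_cooperation(edges, seed_group, steps=10, adopt_prob=0.5, rng=None):
    """Toy contagion sketch: a non-cooperating group may start cooperating
    once a group it is linked to already does.

    edges: list of (a, b) pairs linking groups; seed_group: the group
    that cooperates first; adopt_prob: per-step chance a linked group
    follows suit.
    """
    rng = rng or random.Random(0)
    cooperating = {seed_group}
    for _ in range(steps):
        for a, b in edges:
            for src, dst in ((a, b), (b, a)):
                if src in cooperating and dst not in cooperating:
                    if rng.random() < adopt_prob:
                        cooperating.add(dst)
    return cooperating
```

On a simple chain of groups with certain adoption, cooperation seeded in one group eventually reaches every connected group, while isolated groups are never recruited—the qualitative pattern the authors report.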

It's also true, though, that perceived risk is still a governing factor in all this. As risk becomes reality, action will come along with it no matter the organizational structure. The question is, how large will the effects get before the world is moved to action?

PNAS, 2011. DOI: 10.1073/pnas.1015648108

Listing image by Stig Nygaard