Cooperation through useful delusions: quasi-magical thinking and subjective utility

October 16, 2013 by Artem Kaznatcheev

Economists who take bounded rationality seriously treat their research like a chess game and follow the reductive approach: start with all the pieces — a fully rational agent — and kill/capture/remove pieces until the game ends, i.e. see what sort of restrictions can be placed on the agents to deviate from rationality and better reflect human behavior. Sometimes these restrictions can be linked to evolution, but usually the models are independent of evolutionary arguments. In contrast, evolutionary game theory has traditionally played Go and concerned itself with the simplest agents, capable only of behaving according to a fixed strategy specified by their genes: no learning, no reasoning, no built-in rationality. If evolutionary game theorists want to approximate human behavior, then they have to play new stones and take a constructive approach: start with genetically predetermined agents and build them up to better reflect the richness and variety of human (or even other animal) behaviors (McNamara, 2013). I’ve always preferred Go over chess, and so I am partial to the constructive approach toward rationality. I like to start with replicator dynamics and work my way up, adding agency, perception and deception, ethnocentrism, or emotional profiles and general conditional behavior.

Most recently, my colleagues and I have been interested in the relationship between evolution and learning, both individual and social. A key realization has been that evolution takes its cues from an external reality, while learning is guided by a subjective utility, and there is no a priori reason for those two incentives to align. As such, we can have agents acting rationally on their genetically specified subjective perception of the objective game. To avoid making assumptions about how agents deal with risk, we want them to know the probability that others will cooperate with them. However, this probability depends on the agent’s history and local environment, so each agent should learn it for itself. In our previous presentation of results we concentrated on the case where the agents were rational Bayesian learners, but we know that this assumption is not justified by evolutionary models or by observations of human behavior. Hence, in this post we will explore the possibility that agents have learning peculiarities like quasi-magical thinking, and how these peculiarities can co-evolve with subjective utilities.
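To make the distinction concrete, here is a minimal sketch (not the exact model from our work; the class name and the `alpha` parameter are illustrative) of the kind of learner described above: a standard Bayesian learner tracks a Beta posterior over its neighbours' cooperation, while a quasi-magical thinker, in the spirit of Masel (2007), also counts its own action as partial evidence about what others will do.

```python
class Learner:
    """Beta-Bernoulli learner of the probability that others cooperate."""

    def __init__(self, alpha=0.0):
        self.alpha = alpha   # 0 = standard Bayesian; > 0 = quasi-magical
        self.coop = 1.0      # Beta prior pseudo-counts (uniform prior)
        self.defect = 1.0

    def observe(self, partner_cooperated, i_cooperated):
        # Standard Bayesian update on the partner's observed action.
        if partner_cooperated:
            self.coop += 1
        else:
            self.defect += 1
        # Quasi-magical component: weigh one's own action as if it were
        # evidence about others' behavior, with weight alpha.
        if i_cooperated:
            self.coop += self.alpha
        else:
            self.defect += self.alpha

    def p_cooperate(self):
        # Posterior mean of the Beta(coop, defect) belief.
        return self.coop / (self.coop + self.defect)
```

After seeing one cooperative partner while itself cooperating, a Bayesian learner (`alpha = 0`) believes others cooperate with probability 2/3, while a quasi-magical thinker with `alpha = 1` believes 3/4: its own cooperation inflates its optimism about others.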



We are interested in the evolution of cooperation: will agents decide to give a benefit b to other agents at a cost c to themselves? In this context, nothing interesting happens for inviscid populations, so we have to introduce spatial structure. We chose random k-regular graphs (instead of other arbitrary choices like grids or other lattices) because they allow us to use the Ohtsuki-Nowak (2006) transform to generate an analytic prediction for where the transition from cooperation to defection should be in a population of fixed-strategy agents. In particular, we expect cooperation whenever c/(b - c) < 1/(k - 1) (equivalently, b/c > k). We concentrated on the most cooperation-enticing case of 3-regular random graphs, which means that we expect to find cooperation when the inverse of the objective Prisoner’s dilemma specialization coefficient, c/(b - c), is between 0 and 1/2. As c/(b - c) crosses 1/2, we should see a rapid phase transition from an all-cooperative to an all-defective regime.
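The analytic prediction above can be sketched in a few lines, assuming the standard b/c > k form of the Ohtsuki-Nowak condition for random k-regular graphs (the helper name is illustrative):

```python
def expects_cooperation(b, c, k):
    """True if fixed-strategy cooperators are favoured on a random
    k-regular graph under the Ohtsuki-Nowak condition b/c > k,
    written here as c/(b - c) < 1/(k - 1)."""
    assert b > c > 0 and k >= 2
    return c / (b - c) < 1.0 / (k - 1)

# On 3-regular graphs the transition sits at c/(b - c) = 1/2:
print(expects_cooperation(b=4.0, c=1.0, k=3))  # c/(b-c) = 1/3 -> True
print(expects_cooperation(b=2.5, c=1.0, k=3))  # c/(b-c) = 2/3 -> False
```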

The figures above plot the level of quasi-magical thinking versus the inverse of the PD specialization coefficient:

The left figure plots the proportion of quasi-magical thinkers in a setting where agent genotypes allow only two values: standard Bayesian inference or quasi-magical thinking.

The right figure plots the average self-absorption value of the population, where the possible genotypes allow all levels of self-absorption between 0 and 1.

Note that the left and right figures have different scales on their x-axes. In particular, to translate from the left to the right, you have to divide the x-value by two. The horizontal black lines in the right figure, corresponding to proportions of quasi-magical thinkers of 0.5 and 0.3 in the left figure, are plotted for easy comparison.

The red lines correspond to cases where every agent’s perceived game is fixed at its true value and only quasi-magical thinking evolves, and the green lines represent cases where the perceived game co-evolves with quasi-magical thinking.

Line thickness represents the standard error from averaging 10 runs on random 3-regular graphs with 500 agents.

When we allow all values of self-absorption and initialize our simulations with genotypes selected uniformly at random, the expected value under no selection is 0.5. This is what we see in the highly specialized environments of the green line in the right figure. Together with the high variance, this suggests that there is no selective pressure on self-absorption. Once we move into the low specialization regime, we see in the right figure a selective pressure for low self-absorption that is almost as strong in the co-evolutionary (green) as in the non-co-evolutionary (red) case. However, if we have only two discrete values, corresponding to standard Bayesian inference and Masel’s (2007) quasi-magical thinking, then the selective pressures seem to be absent for both low and high specialization if subjective utilities are allowed to evolve (green in the left figure). This could be a coincidence, with the low specialization regime pushing toward an optimal average of 0.5, but it is uncommon to have a U-shaped selective pressure for the PD game. We could test this theory by changing the mutation rates between Bayesian and quasi-magical genotypes to alter the neutral distribution, and seeing if the low specialization average follows the new neutral expectation or stays around 0.5. A second explanation (which is just a variant of the first) is that the region of low self-absorption is relatively neutral, but super-rationality (high self-absorption) is selected against. We could test this by looking at the distribution (instead of just the average value) of self-absorption in the right figure and seeing if the lower half looks like it is under no selection and the upper half under selection. It would be nice to have a non-eyeball test for this.
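The mutation-rate test proposed above can be sketched under simplifying assumptions (a well-mixed, selection-free Moran-style copying process; the function and parameter names are illustrative, not our simulation's actual interface). If the proportion of quasi-magical thinkers is neutral, it should track the mutation balance mu_to_qm / (mu_to_qm + mu_to_bayes); if it sticks near 0.5 regardless of that balance, something other than drift is holding it there.

```python
import random

def neutral_proportion(mu_to_qm, mu_to_bayes, n=500, steps=200000, seed=1):
    """Final proportion of quasi-magical thinkers under pure drift
    with asymmetric mutation between the two genotypes."""
    rng = random.Random(seed)
    pop = [rng.random() < 0.5 for _ in range(n)]  # True = quasi-magical
    for _ in range(steps):
        i = rng.randrange(n)
        pop[i] = rng.choice(pop)  # neutral copying of a random individual
        # Mutation after reproduction, biased by the two rates.
        if pop[i]:
            if rng.random() < mu_to_bayes:
                pop[i] = False
        else:
            if rng.random() < mu_to_qm:
                pop[i] = True
    return sum(pop) / n

# Mutation-balance expectation: 0.02/(0.02 + 0.01) = 2/3 vs. 1/2.
print(neutral_proportion(0.02, 0.01))
print(neutral_proportion(0.01, 0.01))
```

Comparing the simulated averages against these neutral expectations, rather than against 0.5, is the point of the test: a neutral trait follows the mutation bias, a selected one does not.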

It is also important to understand how much quasi-magical thinking and subjective utilities contribute to cooperation. The best way to see this is by turning features off. In the figure at right, we have the proportion of cooperative interactions versus the inverse of the PD specialization coefficient. In blue we consider just the evolution of subjective utilities, in red just the evolution of standard Bayesian and quasi-magical thinkers, in green the co-evolution of both, and in black (actually yellow, but the standard error is too small to see) completely rational Bayesian agents that don’t undergo evolution. Line thickness represents the standard error from averaging 10 runs. If we use the general self-absorption case in place of the discrete standard Bayesian vs. quasi-magical thinking one, the results are qualitatively the same, but quantitatively less pronounced.

Unsurprisingly, the evolutionary flexibility of having genetic access to both subjective utilities and quasi-magical thinking produces the curve that best approximates the ideal transition from all cooperation (maxing out at 0.9 because of the shaky hand) to all defection (bottoming out at 0.1 because of the shaky hand) at c/(b - c) = 1/2. However, it is interesting to see that subjective utility and quasi-magical thinking on their own relax the transition in different directions. In particular, quasi-magical thinking on its own cannot achieve the expected levels of cooperation in the highly specialized regime, and subjective utility on its own takes longer to transition to all defection in the unspecialized regime. Further, the distribution of subjective utilities in the co-evolutionary and purely Bayesian cases is surprisingly similar, suggesting that subjective utilities are responding to an evolutionary pressure independent of (or maybe just significantly dominant over) the pressures on self-absorption.

To get a quantitative grasp of these transitions, it would be worthwhile to fit a sigmoid to them, as I’ve done before for understanding the cognitive cost of ethnocentrism. Concretely, this means finding, for each of the four cases in the figure above, parameters p and s such that

f(x) = 0.1 + 0.8/(1 + e^((x - p)/s))

minimizes the least-squares error against the collected data. The parameter p will tell us where the phase transition is happening (so we expect values near 0.5) and s will tell us how sudden it is, with small s meaning a very sudden transition and large s a very gradual one. For example, from eyeball inspection it looks like the phase transitions in the green and blue lines start slightly after the expected 0.5 point, hinting that the quasi-delusional agents might sustain cooperation slightly longer than the evolution of fixed pure strategies. However, I will save these explorations for next week, maybe with tests on random graphs of higher degree.

References

McNamara, J.M. (2013). Towards a richer evolutionary game theory. Journal of the Royal Society, Interface, 10(88).

Masel, J. (2007). A Bayesian model of quasi-magical thinking can explain observed cooperation in the public good game. Journal of Economic Behavior and Organization, 64(2), 216-231. DOI: 10.1016/j.jebo.2005.07.003

Ohtsuki, H., & Nowak, M.A. (2006). The replicator equation on graphs. Journal of Theoretical Biology, 243(1), 86-97.