Replicator dynamics of cooperation and deception

July 18, 2013 by Keven Poulin

In my last post, I mentioned how conditional behavior usually implied a transfer of information from one agent to another, and that conditional cooperation was therefore vulnerable to exploitation through misrepresentation (deception). Little did I know that an analytic treatment of that point had been published a couple of months before.

McNally & Jackson (2013), the same authors who used neural networks to study the social brain hypothesis, present a simple game-theoretic model to show that the existence of cooperation creates selection for tactical deception. As other commentators have pointed out, this is a rather intuitive conclusion; the real interest here lies in how the relationship is formalized and in whether the model maps onto reality in any convincing way. Interestingly, the target model is reminiscent of Artem’s perception and deception models, so it’s worth bringing them up for comparison; I’ll refer to them as Model 1 and Model 2.



We’ll consider the good ol’ Prisoner’s Dilemma, and while UV-parameterization has become customary on this blog, all models here are described in terms of a benefit $b$ and a cost $c$ of cooperation, so we’ll stick with that for the sake of simplicity.

Model 1 is probably the most general. We have an infinite, well-mixed population of pure Cooperators (C) who cooperate with everyone; pure Defectors (D) who always defect; perceivers (CI) who try to cooperate with other cooperators only; and deceivers (DI) who manage to be perceived as cooperators despite the fact that they always defect. Perceivers and deceivers are more sophisticated, but all that awesomeness comes at added costs $c_p$ and $c_d$, respectively.

Under such conditions, as long as deception is cheap enough (so that a deceiver’s payoff against perceivers, $b - c_d$, beats the $b - c - c_p$ of mutual conditional cooperation), pure defection remains the only ESS. So Model 2 released the pressure on CI and removed deceivers altogether. Furthermore, instead of having CI pay a constant cost $c_p$, CI simply pay a different cooperation cost than C does. This model yielded rock-paper-scissors-like dynamics, with orbits around an internal fixed point.

Notice that in these two models, strategies were perfectly carried out, i.e. deceivers always deceived successfully and perceivers always spotted other cooperators. The target model introduced a level of uncertainty: CI, D and DI interact as above, but CI erroneously cooperate with D in a proportion $\epsilon$ of their interactions, and DI successfully deceive CI in a proportion $\delta$. Deceivers still pay a constant cost $c_d$, but cooperators are allowed their amazingness for free. The (row player’s) payoffs are the following:

$$\begin{array}{c|ccc} & CI & D & DI \\ \hline CI & b - c & -\epsilon c & -\delta c \\ D & \epsilon b & 0 & 0 \\ DI & \delta b - c_d & -c_d & -c_d \end{array}$$

Assuming $b > c > 0$, $0 < \epsilon < 1$, and $0 < \delta < 1$, we consider two cases illustrated below: (a) $\delta b - c_d > b - c$, where deception pays against perceivers, and (b) $\delta b - c_d < b - c$, where it does not. In both cases, there is bistability between CI and D, but in the former condition, the CI equilibrium is vulnerable to invasion by DI. At large frequencies of DI, D becomes the better strategy, since it doesn’t have to pay the deception cost $c_d$, and so it invades. In the latter condition, both CI and D are Nash equilibria and the presence of DI can only be, again, transient.
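To see the two regimes concretely, here is a minimal replicator-dynamics sketch. Here $\epsilon$ is the rate at which CI mistake D for a cooperator, $\delta$ the deceivers’ success rate against CI, and $c_d$ the deception cost; all numeric values are my own hypothetical choices, not the paper’s.

```python
# Replicator dynamics for the three strategies CI, D, DI.
# Parameter values are hypothetical, chosen only to land in regimes (a) and (b).
b, c = 5.0, 1.0   # benefit and cost of cooperation
eps = 0.1         # proportion of interactions where CI mistake D for a cooperator
c_d = 0.5         # constant cost paid by deceivers

def payoffs(x, delta):
    """Expected payoff of each strategy, given frequencies x = (x_CI, x_D, x_DI)."""
    x_ci, x_d, x_di = x
    f_ci = x_ci * (b - c) - x_d * eps * c - x_di * delta * c
    f_d = x_ci * eps * b
    f_di = x_ci * delta * b - c_d
    return (f_ci, f_d, f_di)

def evolve(x, delta, dt=0.01, steps=200_000):
    """Euler-integrate dx_i/dt = x_i * (f_i - phi), phi being mean fitness."""
    for _ in range(steps):
        f = payoffs(x, delta)
        phi = sum(xi * fi for xi, fi in zip(x, f))
        x = [max(0.0, xi + dt * xi * (fi - phi)) for xi, fi in zip(x, f)]
        total = sum(x)
        x = [xi / total for xi in x]  # renormalize after the Euler step
    return x

x0 = (0.96, 0.02, 0.02)          # mostly CI, with rare D and DI mutants
case_a = evolve(x0, delta=0.95)  # delta*b - c_d > b - c: DI can invade CI
case_b = evolve(x0, delta=0.50)  # delta*b - c_d < b - c: CI holds
```

With these numbers, case (a) ends in all-D: the deceivers first displace the conditional cooperators, then the pure defectors, who skip the deception cost, mop up. Case (b) stays at the CI equilibrium.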

How could DI get a shot at stability? The authors extend this model by having vary with the frequency of tactical deceivers in the population. As the frequency of deceivers goes up, they argue, we can expect that CI will get better at detecting them, and so will go down. Makes sense. A very straight-forward definition of could thus be . In that case, if most players are deceivers, then cooperators will almost always detect them. Conversely, if there are very few of them, deception will almost always be successful.
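Plugging the straightforward choice of deception success declining with deceiver frequency into the same replicator setup produces the promised coexistence. Again a sketch with hypothetical numbers (benefit 5, cost 1, CI error rate 0.1, deception cost 0.5), not the paper’s parameterization:

```python
# Same three strategies, but deception success now declines with the
# deceivers' own frequency: delta = 1 - x_DI. Parameters are hypothetical.
b, c, eps, c_d = 5.0, 1.0, 0.1, 0.5

def step(x, dt=0.01):
    x_ci, x_d, x_di = x
    delta = 1.0 - x_di  # rare deceivers fool almost everyone; common ones, almost no one
    f_ci = x_ci * (b - c) - x_d * eps * c - x_di * delta * c
    f_d = x_ci * eps * b
    f_di = x_ci * delta * b - c_d
    phi = x_ci * f_ci + x_d * f_d + x_di * f_di
    x = [max(0.0, xi + dt * xi * (fi - phi))
         for xi, fi in zip((x_ci, x_d, x_di), (f_ci, f_d, f_di))]
    total = sum(x)
    return [xi / total for xi in x]

x = (0.90, 0.05, 0.05)
for _ in range(100_000):
    x = step(x)
# x now sits at a mixed CI-DI equilibrium with D extinct: with these
# numbers, roughly 89% CI and 11% DI.
```

The pure defectors go extinct along the way, while the deceivers settle at a stable minority frequency alongside the conditional cooperators.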

Under these conditions, a rare DI can invade a population of CIs (given that $c_d < c$: the rare deceiver is almost never detected, so it earns $b - c_d$ against the residents’ $b - c$). A rare CI can also invade a population of DIs if $c_d > 0$, since the rare cooperator is almost never fooled and earns about $0$ against the residents’ $-c_d$. This parameterization thus yields a mixed equilibrium between DI and CI, which is furthermore stable against invasion by D whenever a rare defector’s payoff, $\epsilon b \hat{x}_{CI}$, falls below the payoff earned at the mixed equilibrium, where $\hat{x}_{CI}$ is the frequency of CI at equilibrium.
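The two invasion conditions follow directly from the payoffs once we note what a rare mutant faces at each corner, and a quick numeric sanity check makes them concrete (same hypothetical benefit 5, cost 1, deception cost 0.5 as before):

```python
# Invasion analysis at the two monomorphic corners, with frequency-dependent
# deception success delta = 1 - x_DI. Parameter values are hypothetical.
b, c, c_d = 5.0, 1.0, 0.5

# A rare DI in an all-CI population faces delta ~ 1: it is almost never
# detected, earning b - c_d against the residents' b - c.
resident_ci, mutant_di = b - c, b - c_d
di_invades_ci = mutant_di > resident_ci  # holds exactly when c_d < c

# A rare CI in an all-DI population faces delta ~ 0: it is almost never
# fooled, earning ~0 against the residents' -c_d.
resident_di, mutant_ci = -c_d, 0.0
ci_invades_di = mutant_ci > resident_di  # holds exactly when c_d > 0
```

Both strategies invade each other when rare, which is precisely what sustains the mixed equilibrium.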

McNally & Jackson make no assumptions as to the mechanisms behind conditional cooperation and deception: simply, cooperators favour cooperation with other cooperators and deceivers use tactics (e.g. hide, manipulate reputation) to appear as cooperators. Based on this theoretical work, the authors hypothesize that as individuals take part in more cooperative games, there will be increased selective pressure for tactical deception. They back up this hypothesis with data from field observations of non-human primates taken from datasets obtained through ISI Web of Science. Deception behaviours were “acts from the normal repertoire of the agent, deployed such that another individual is likely to misinterpret what the acts signify, to the advantage of the agent”, a definition taken from Byrne & Whiten (1990). Furthermore, as they point out, though all deception implies some form or other of misrepresentation, tactical deception is implicitly context-dependent.

The data on cooperation included instances of coalition formation, food-sharing and alloparenting, and controlled for neocortex ratios and research effort (talk about rigor!). I invite you to look up the paper for their results, because the correspondence between the data and their theoretical prediction is striking. I would nonetheless reserve my enthusiasm until I get a better description of the parameters used to produce this prediction. Presumably, in order to produce a prediction at all, they had to come up with values for costs and benefits, and I’d be interested to know how that was done. In any case, they are very cautious in warning readers not to interpret their graphs as definitive evidence, given the subjective nature of the raw data.

As it is, I think the model lacks at least one parameter to be fully consistent. While the possibility of false positives in the identification of cooperators (when CI erroneously cooperate with D) is accounted for, there is no allowance for false negatives, i.e. for a cooperator failing to recognize another cooperator. This omission becomes all the more worrying if we let this error rate vary in a frequency-dependent manner, which would be reasonable given their treatment of the parameter $\delta$. In fact, the logic is the same: if we expect deception to be more readily detected when deception is the norm, we can likewise expect cooperators to be more readily mistaken for deceivers. This reminds us how powerful deception can be: long after the last liar has disappeared, trust will suffer its effects.
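To make the worry concrete, here is one way the missing parameter could enter: give CI a false-negative rate $\eta$, the probability of mistaking a fellow cooperator for a deceiver. The functional forms below (independent recognition errors, and $\eta$ lingering even after deceivers vanish) are entirely my own illustrative assumptions, not the paper’s.

```python
# A hypothetical false-negative extension: each CI fails to recognize a
# fellow cooperator with probability eta, so each cooperates with
# probability 1 - eta, independently. Parameters are illustrative only.
b, c, eps = 5.0, 1.0, 0.1

def ci_vs_ci(eta):
    """Expected CI-vs-CI payoff: receive b if the partner cooperates
    (prob. 1 - eta), pay c if we ourselves cooperate (prob. 1 - eta)."""
    return (1.0 - eta) * b - (1.0 - eta) * c

def d_invades_ci(eta):
    """A rare D among CI still earns eps * b; it invades once mutual
    distrust erodes the residents' payoff below that."""
    return eps * b > ci_vs_ci(eta)
```

With these numbers, pure defectors can invade an all-CI population, deceivers long gone, as soon as $\eta$ exceeds $1 - \epsilon b/(b-c) = 0.875$: exactly the “trust suffers after the last liar” effect.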

I find it difficult to talk about issues of deception without ending up talking about communication evolution and stability. Yes, cooperation likely selects for better deception, but deception likely selects for better detection, or simply for more sophisticated communication: that gives us the much-talked-about evolutionary arms race.

References

Byrne, R.W. & Whiten, A. (1990). Tactical deception in primates: the 1990 database. Primate Report, 27, 1–101.

McNally, L. & Jackson, A.L. (2013). Cooperation creates selection for tactical deception. Proceedings of the Royal Society B: Biological Sciences, 280(1762). PMID: 23677345