Computational complexity of evolutionary stable strategies

October 10, 2013 by Artem Kaznatcheev


Yesterday, I shared a video of John Maynard Smith introducing evolutionary game theory (EGT) to the London Mathematical Society. I suggested that at its foundation, EGT was just like classical game theory, and based on equilibrium analysis — the evolutionary stable strategy (Maynard Smith & Price, 1973). Given a utility function $U(\sigma, \mu)$ that gives the expected payoff for mixed strategy $\sigma$ interacting with mixed strategy $\mu$, we say that $\sigma$ is an evolutionary stable strategy (ESS) if, for all $\mu \neq \sigma$: (1) $U(\sigma, \sigma) \geq U(\mu, \sigma)$, and (2) if $U(\sigma, \sigma) = U(\mu, \sigma)$ then $U(\sigma, \mu) > U(\mu, \mu)$.

Or we can look at the contrapositive: a strategy $\mu$ can invade a host strategy $\sigma$ if (1) it has a better payoff in a population of $\sigma$s than $\sigma$s have against themselves ($U(\mu, \sigma) > U(\sigma, \sigma)$), or (2) if the two are neutral ($U(\mu, \sigma) = U(\sigma, \sigma)$) then a small population of $\mu$s benefits itself at least as much as it benefits the host population ($U(\mu, \mu) \geq U(\sigma, \mu)$). If a strategy has no invading strategy then it is an evolutionary stable strategy.
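To get a concrete feel for the two conditions, here is a minimal numerical sketch (my own illustration, not from the post) that tests a candidate strategy against sampled mutants in a Hawk-Dove game with hypothetical parameters $V = 2$, $C = 4$, where the mixed strategy playing Hawk with probability $V/C = 1/2$ is the well-known ESS:

```python
import numpy as np

# Hawk-Dove payoffs with benefit V = 2, cost C = 4 (illustrative values):
# rows are the focal strategy, columns the opponent.
A = np.array([[-1.0, 2.0],   # Hawk vs (Hawk, Dove): (V-C)/2, V
              [ 0.0, 1.0]])  # Dove vs (Hawk, Dove): 0, V/2

def U(p, q):
    """Expected payoff of mixed strategy p against mixed strategy q."""
    return p @ A @ q

def is_ess(sigma, trials=10_000, tol=1e-9, seed=0):
    """Numerically test the two ESS conditions against random mutants."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        mu = rng.dirichlet(np.ones(len(sigma)))  # random mixed strategy
        if np.allclose(mu, sigma, atol=1e-3):
            continue  # skip mutants indistinguishable from sigma
        # Condition 1: sigma must be a symmetric Nash best response.
        if U(mu, sigma) > U(sigma, sigma) + tol:
            return False
        # Condition 2: against neutral mutants, sigma must do strictly
        # better versus mu than mu does against itself.
        if np.isclose(U(mu, sigma), U(sigma, sigma), atol=tol) \
                and U(sigma, mu) <= U(mu, mu) + tol:
            return False
    return True

print(is_ess(np.array([0.5, 0.5])))  # True: the mixed ESS at V/C
print(is_ess(np.array([1.0, 0.0])))  # False: pure Hawk is invaded
```

A sampling check like this can only ever give evidence, not a proof — every mutant is neutral against the mixed ESS here, so the strict second condition is doing all the work.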

If you are an observant reader and familiar with classical game theory then you probably noticed that the first condition of the direct definition is equivalent to the definition of a symmetric Nash equilibrium (NE). The second clause, joined with an "and", means that every ESS is a NE, but not every NE is an ESS. However, that second condition seems innocuous enough; surely it can't change the qualitative nature of the equilibrium concept?



It does — it changes everything. The second condition transforms the weak inequality of NE into the strict inequality of ESS, and that is not a minor tweak — it is the difference between closed and open sets in topology. Without closed sets, we can't use Brouwer's fixed point theorem anymore, and thus can't guarantee the existence of an ESS. Although a NE exists in every game, you can already find examples of games without an ESS just by looking at the Rock-Paper-Scissors game with +1 for wins, -1 for losses, and a small bonus of $\epsilon$ for ties. This change to strict inequalities also makes the computational aspects of ESS much more difficult than the corresponding questions for NE.
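The Rock-Paper-Scissors example can be checked directly. A quick sketch (illustrative $\epsilon$ value, my own code): with a tie payoff of $+\epsilon$, the uniform mixture is the game's only symmetric NE, every mutant is payoff-neutral against it, and yet the strict second condition fails against every mutant, so no ESS exists:

```python
import numpy as np

eps = 0.1  # illustrative tie payoff
# Rock-Paper-Scissors: +1 for a win, -1 for a loss, +eps for a tie.
A = np.array([[ eps, -1.0,  1.0],
              [ 1.0,  eps, -1.0],
              [-1.0,  1.0,  eps]])

U = lambda p, q: p @ A @ q      # expected payoff of p against q

sigma = np.ones(3) / 3          # uniform mixture: the unique symmetric NE
mu = np.array([1.0, 0.0, 0.0])  # mutant: pure Rock

# Condition 1 holds only with equality: every mutant earns eps/3 vs sigma.
print(U(mu, sigma), U(sigma, sigma))  # both approximately eps/3
# Condition 2 fails: Rock earns eps against itself, but sigma earns only
# eps/3 against Rock, so the mutant can invade and sigma is not an ESS.
print(U(sigma, mu), U(mu, mu))
```

The same calculation goes through for any mutant $\mu$: $U(\mu, \mu) = \epsilon \|\mu\|^2 \geq \epsilon/3 = U(\sigma, \mu)$, with equality only at $\mu = \sigma$.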

If we switch to the dynamic point of view and look at the replicator equation (Taylor & Jonker, 1978) then — as is so often the case in simple models of evolution — the problem starts to look like optimization. In particular, since we care about the behavior of strategies playing against themselves (as opposed to a static fitness landscape), we get equations that are quadratic in the weights of the agents' mixed strategies. It is unsurprising that replicator dynamics is closely related to quadratic programming, which is NP-hard; it becomes possible to encode problems like the maximum clique problem in the game's payoff matrix and use evolution to solve them (Bomze et al., 2000; Pelillo & Torsello, 2006). However, this only gives a lower bound on the complexity of finding an ESS.
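To make the optimization connection concrete: by the Motzkin-Straus theorem, the maximum of $x^T A x$ over the simplex, where $A$ is a graph's adjacency matrix, equals $1 - 1/\omega(G)$ for clique number $\omega(G)$, and discrete-time replicator dynamics monotonically climbs exactly this quadratic objective. A small sketch of the construction (hypothetical 5-vertex graph, not taken from the cited papers):

```python
import numpy as np

# Adjacency matrix of a toy 5-vertex graph whose maximum clique is the
# triangle {0, 1, 2}; vertices 3 and 4 hang off the triangle.
A = np.array([[0, 1, 1, 1, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 1],
              [1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)

# Discrete replicator dynamics: x_i <- x_i * (Ax)_i / (x' A x).
# For a symmetric payoff matrix each step increases x' A x, and by
# Motzkin-Straus the maximum over the simplex is 1 - 1/w(G).
x = np.ones(5) / 5  # start at the uniform mixed strategy
for _ in range(500):
    fitness = A @ x
    x = x * fitness / (x @ A @ x)

value = x @ A @ x
print(np.round(x, 3))          # weight concentrates on the clique {0, 1, 2}
print(round(1 / (1 - value)))  # recovered clique number: 3
```

Evolution here acts as a local search, so in general it can stall on maximal (rather than maximum) cliques; that is precisely why this encoding only gives hardness evidence, not an efficient algorithm.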

Etessami & Lochbihler (2008; the preprint appeared in 2004) looked at the complexity of deciding if a given game has an ESS. They showed that the problem is both NP-hard and coNP-hard, and inside $\Sigma_2^P$ (NP with access to a coNP oracle). In fact, Nisan (2006) showed that, given a game and a candidate ESS, it is already coNP-hard to check whether the candidate is in fact an ESS. Recently, Conitzer (2013) finished this direction by showing that deciding the existence of an ESS is $\Sigma_2^P$-complete, by a reduction from the Minmax-clique problem (Ko & Lin, 1995).

I am not sure of the consequences of this for biology, since the question of whether an ESS exists is not a central biological problem. The bigger question is: if an ESS exists, can we find it in a reasonable amount of time? These results tell us that there is no simple classification of games into ones that do or don't have an ESS, but they say nothing about the complexity of finding the equilibrium when one exists. Since we can't have simple necessary and sufficient conditions for an ESS, I would be interested in looking at the complexity of finding equilibria in classes of games that satisfy some nice sufficient conditions. Will it be PPAD-complete like NE (and similar to the PLS-completeness of the NK-model of static fitness landscapes)? Or will it be more difficult? Easier? What if we further restrict to games on random graphs?

References

Bomze, I. R., Pelillo, M., & Stix, V. (2000). Approximating the maximum weight clique using replicator dynamics. IEEE Transactions on Neural Networks, 11(6): 1228-1241.

Conitzer, V. (2013). The exact computational complexity of evolutionarily stable strategies. The 9th Conference on Web and Internet Economics (WINE).

Etessami, K., & Lochbihler, A. (2008). The computational complexity of evolutionarily stable strategies. International Journal of Game Theory, 37(1): 93-113. First available in 2004 in ECCC Tech Report TR04-055.

Ko, K.I., & Lin, C.L. (1995). On the complexity of min-max optimization problems and their approximation. Nonconvex optimization and its applications, 4: 219-240.

Maynard Smith, J., & Price, G.R. (1973). The logic of animal conflict. Nature, 246: 15-18.

Nisan, N. (2006). A note on the computational hardness of evolutionary stable strategies. In Electronic Colloquium on Computational Complexity (ECCC) 13(076).

Pelillo, M., & Torsello, A. (2006). Payoff-monotonic game dynamics and the maximum clique problem. Neural Computation, 18(5): 1215-1258.

Taylor, P.D., & Jonker, L. (1978). Evolutionary stable strategies and game dynamics. Mathematical Biosciences, 40: 145-156.