I've been working on introducing some results from computational complexity into theoretical biology, especially evolution & ecology, with the goal of being interesting/useful to biologists. One of the biggest difficulties I've faced is in justifying the usefulness of asymptotic worst-case analysis for lower bounds. Are there any article-length references that justify lower bounds and asymptotic worst-case analysis to a scientific audience?

I am really looking for a good reference that I can defer to in my writing instead of having to go through the justifications in the limited space I have available (since that is not the central point of the article). I am also aware of other kinds and paradigms of analysis, so I am not seeking a reference that says worst-case is the "best" analysis (there are settings where it very much isn't), but one arguing that it isn't completely useless: it can still give us theoretically useful insights into the behavior of actual algorithms on actual inputs. It is also important that the writing is targeted at general scientists, not just engineers, mathematicians, or computer scientists.

As an example, Tim Roughgarden's essay introducing complexity theory to economists is on the right track for what I want. However, only sections 1 and 2 are relevant (the rest is too economics-specific), and the intended audience is a bit more comfortable with theorem-lemma-proof thinking than most scientists are[1].

Details

In the context of adaptive dynamics in evolution, I've met two specific types of resistance from theoretical biologists:

[A] "Why should I care about behavior for arbitrary $n$? I already know that the genome has $n = 3 \times 10^9$ base pairs (or maybe $n = 2 \times 10^4$ genes) and no more."

This is relatively easy to brush off with the argument of "we can imagine waiting for $10^9$ seconds, but not $2^{10^9}$". But a more intricate argument might say: "sure, you say you care about only a specific $n$, but your theories never use this fact, they just use that it is large but finite, and it is your theory that we are studying with asymptotic analysis".
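The "waiting" argument above is just arithmetic, and it can be made concrete. The sketch below (my own illustration, not part of the original question) compares the order of magnitude of a polynomial bound $n^2$ against an exponential bound $2^n$ at the genome-scale $n = 3 \times 10^9$:

```python
# Back-of-the-envelope comparison: polynomial vs. exponential step counts
# at a fixed, biologically motivated input size n (genome length in base pairs).
import math

n = 3 * 10**9  # base pairs in the human genome

# Work in log10 to avoid constructing the astronomically large number 2^n.
poly_log10 = 2 * math.log10(n)   # n^2 has about this many decimal digits
exp_log10 = n * math.log10(2)    # 2^n has about this many decimal digits

print(f"n^2 ~ 10^{poly_log10:.0f} steps")       # around 10^19
print(f"2^n ~ 10^{exp_log10:.2e} steps")        # around 10^(9 x 10^8)
```

Even at a generous $10^{12}$ steps per second, $n^2$ steps finish in months, while $2^n$ steps would need a number of seconds whose decimal representation has hundreds of millions of digits; the qualitative gap is what worst-case asymptotics captures, independent of the exact $n$.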

[B] "But you only showed that this is hard by building this specific landscape with these gadgets. Why should I care about this instead of the average?"

This is a more difficult critique to address, because a lot of the tools people commonly use in this field are coming from statistical physics where it is often safe to assume a uniform (or other specific simple) distribution. But biology is "physics with history" and almost everything isn't at equilibrium or 'typical', and empirical knowledge is insufficient to justify assumptions about distributions over input. In other words, I want an argument similar to that used against uniform distribution average-case analysis in software engineering: "we model the algorithm, we can't construct a reasonable model of how the user will interact with the algorithm or what their distribuition of inputs will be; that is for psychologists or end users, not us." Except in this case, the science isn't at a position where the equivalent of 'psychologists or end users' exists to figure out the underlying distributions (or if that is even meaningful).
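The classic software-engineering illustration of this point is quicksort with a naive pivot rule: under a uniform distribution it does $\Theta(n \log n)$ comparisons on average, yet a structured (non-'typical') input forces $\Theta(n^2)$. The sketch below is my own minimal demonstration, not from the original question:

```python
# Average-case under a uniform distribution can be very misleading:
# quicksort with a first-element pivot is fast on uniformly random input,
# but quadratic on an already-sorted input -- a perfectly plausible,
# highly structured instance that the uniform model assigns negligible weight.
import random

def quicksort_comparisons(xs):
    """Return the number of comparisons quicksort makes with a first-element pivot."""
    if len(xs) <= 1:
        return 0
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

n = 500
random.seed(0)
typical = quicksort_comparisons(random.sample(range(n), n))  # ~ n log n
structured = quicksort_comparisons(list(range(n)))           # exactly n(n-1)/2

print(typical, structured)
```

The structured input always costs $n(n-1)/2 = 124{,}750$ comparisons at $n = 500$, more than an order of magnitude worse than the random case. The analogy to [B]: a hardness gadget is the "sorted input" of a fitness landscape, and without empirical grounds for a distribution over landscapes, averaging it away is an assumption, not a result.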

Notes and related questions