I have reason to believe that few people understand genetic load very well, probably for self-referential reasons, but better explanations are possible.

One key point is that the amount of neutral variation is determined by the long-term mutation rate and population history, while the amount of deleterious variation [genetic load] is set by the selective pressures and the prevailing mutation rate over a much shorter time scale. For example, if you consider the class of mutations that reduce fitness by 1%, what matters is the past few thousand years, not the past few tens or hundreds of thousands of years.
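
To make the timescale concrete: for a deleterious allele at mutation-selection balance, the equilibrium frequency is roughly u/s, and after any perturbation the frequency relaxes back over about 1/s generations. Here is a back-of-the-envelope sketch – the values of u and s are illustrative, not taken from any particular dataset:

```python
# Linearized mutation-selection dynamics for one class of deleterious
# mutations: gain u per generation from new mutations, lose a fraction
# s per generation to selection.  Equilibrium frequency is u/s, and
# deviations from it decay on a timescale of roughly 1/s generations.
u = 1e-6   # deleterious mutation rate per site per generation (assumed)
s = 0.01   # 1% fitness cost, the class discussed above

q = 0.0    # start with none of the deleterious allele
for gen in range(1, 501):
    q = q + u - s * q              # gain from mutation, loss to selection
    if gen % 100 == 0:
        print(f"gen {gen:3d}: q = {q:.2e}   (equilibrium u/s = {u/s:.0e})")
```

For s = 0.01 the relaxation time is ~100 generations; at 25-30 years per generation, that is a few thousand years – which is why only the recent past matters for this class.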

There is a recent article in BMC Genetics that illustrates this. They found that rhesus macaques are three times as diverse as humans, surely because of a larger effective population size. But when you look at nonsynonymous mutations, macaques have only 1.2 times as many as humans. Now, some of those nonsynonymous mutations are probably harmless, but most must actually be deleterious – else why are they relatively scarcer than neutral variation? Furthermore, in macaques, a smaller fraction of those nonsynonymous mutations seems likely to be damaging than in humans, so the actual number of deleterious mutations in macaques may not be much different from the human average.
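
The arithmetic behind that inference is simple enough to spell out (ratios as quoted above; the neutral-expectation comparison is mine):

```python
# If nonsynonymous variants were effectively neutral, the macaque/human
# ratio for them should match the neutral (synonymous) ratio of ~3.
neutral_ratio = 3.0   # macaque/human ratio, neutral variation
nonsyn_ratio  = 1.2   # macaque/human ratio, nonsynonymous variation

# Fraction of the neutrally expected nonsynonymous excess that survives
# selection in macaques:
print(f"{nonsyn_ratio / neutral_ratio:.0%}")   # 40% -> most is pruned away
```

Most of the expected excess is missing, which is the signature of purifying selection.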

So, assuming that African populations have more neutral variation than non-African populations (which is well-established), what do we expect to see when we compare the levels of probably-damaging mutations in those two populations? If the Africans and non-Africans had experienced essentially similar mutation rates and selective pressures over the past few thousand years, we would expect to see the same levels of probably-damaging mutations. Bottlenecks that happened at the last glacial maximum or in the expansion out of Africa are irrelevant – too long ago to matter.

But we don’t. The amount of rare synonymous stuff is about 22% higher in Africans. The amount of rare nonsynonymous stuff (usually at least slightly deleterious) is 20.6% higher. The number of rare variants predicted to be more deleterious is ~21.6% higher. The amount of stuff predicted to be even more deleterious is ~27% higher. The number of harmful-looking loss-of-function mutations (yet more deleterious) is 25% higher.

It looks as if the excess grows as the severity of the mutations increases. There is a scenario in which this is possible: the mutation rate in Africa has increased recently. Not yesterday, but, say, over the past few thousand years.

It takes a long time to change the frequency of deleterious mutations of small effect. A change in selective pressures, or in the mutation rate, can change the frequency of deleterious mutations of large effect much more rapidly. So it is perfectly possible for a population to simultaneously have a lower-than-average level of small-effect mutations and a higher-than-average level of moderate to severe mutations, or vice versa.
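
Here is a minimal sketch of that dynamic, using the same linearized mutation-selection recurrence as before (all rates illustrative): step the mutation rate up by 25% and watch classes of different severity approach the new equilibrium.

```python
# Step the mutation rate from u0 to u1 (+25%) and track the excess load,
# relative to the old equilibrium u0/s, for three severity classes.
u0, u1 = 1e-6, 1.25e-6

for label, s in [("mild     (s=0.001)", 0.001),
                 ("moderate (s=0.01) ", 0.01),
                 ("severe   (s=0.1)  ", 0.1)]:
    q = u0 / s                     # start at the old equilibrium
    for _ in range(150):           # ~150 generations, a few thousand years
        q = q + u1 - s * q         # gain from mutation, loss to selection
    excess = q / (u0 / s) - 1.0
    print(f"{label}: excess after 150 generations = {excess:.1%}")
```

The severe class is already sitting at the full +25%, the moderate class is most of the way there (~19.5%), and the mild class has barely moved (~3.5%) – the same pattern of a bigger excess at greater severity.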

What is the most likely cause of such variations in the mutation rate? Right now, I’d say differences in average paternal age. We know that modest differences (~5 years) in average paternal age can easily generate ~20% differences in the mutation rate. Such between-population differences in mutation rates seem quite plausible, particularly since the Neolithic.
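
A rough check on that figure, using the approximate numbers from the Decode paternal-age work – about two extra de novo mutations per year of the father's age, on a baseline of roughly 60 mutations at around age 30 (both values approximate):

```python
baseline_muts = 60   # approximate de novo mutations at paternal age ~30
per_year      = 2    # extra mutations per additional year of paternal age

for shift in (3, 5, 7):            # shift in mean paternal age, years
    change = per_year * shift / baseline_muts
    print(f"+{shift} yr mean paternal age -> {change:+.0%} mutation rate")
```

A five-year shift comes out around +17%, in the right neighborhood of the ~20% quoted above.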

What about the various comments in the coverage of Decode’s work about how harmless a higher mutation rate must be, usually referring to the past couple of centuries or so? They’re wrong, although not as spectacularly as Stefánsson. He reminds me of that old Saturday Night Live skit about how “inflation is your friend!” Wouldn’t you like to own a $4,000 suit, smoke a $75 cigar, and drive a $600,000 car? I know I would! In the same way, mutation is your friend. Doesn’t everyone want to be one of the X-Men? Sheesh, we should restart atmospheric nuclear testing immediately, the bigger the better.