The IPCC AR5 Report included this diagram, showing that climate models exaggerate recent warming:

If you want to find it, it’s Figure 11.25, repeated in the Technical Summary as Figure TS.14. The issue is also discussed in Box TS.3:

“However, an analysis of the full suite of CMIP5 historical simulations (augmented for the period 2006–2012 by RCP4.5 simulations) reveals that 111 out of 114 realizations show a GMST trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble (Box TS.3, Figure 1a; CMIP5 ensemble mean trend is 0.21°C per decade). This difference between simulated and observed trends could be caused by some combination of (a) internal climate variability, (b) missing or incorrect RF, and (c) model response error.”
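For readers wondering what a “GMST trend over 1998–2012” actually is: it is just the least-squares slope of the annual global-mean temperature anomalies over that period, scaled to °C per decade. A minimal sketch in Python, with invented anomaly values (not real HadCRUT4 or CMIP5 data):

```python
# Least-squares trend of annual GMST anomalies, in deg C per decade.
# The anomaly numbers below are invented purely for illustration.
def trend_per_decade(years, anoms):
    n = len(years)
    my = sum(years) / n
    ma = sum(anoms) / n
    slope = (sum((y - my) * (a - ma) for y, a in zip(years, anoms))
             / sum((y - my) ** 2 for y in years))
    return slope * 10  # deg C per year -> deg C per decade

years = list(range(1998, 2013))  # 1998-2012 inclusive, as in the quote
anoms = [0.53, 0.31, 0.29, 0.41, 0.46, 0.47, 0.45, 0.48,
         0.43, 0.41, 0.39, 0.44, 0.47, 0.42, 0.45]  # hypothetical
print(f"{trend_per_decade(years, anoms):.3f} C/decade")
```

The IPCC’s comparison is then simply this number computed for each model realization versus the same number computed for the observations.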

Well, now there is a new generation of climate models, imaginatively known as CMIP6. By a remarkable coincidence, two new papers have just appeared, from independent teams, giving very similar results and published on the same day in the same journal. One is UKESM1: Description and evaluation of the UK Earth System Model, with a long list of authors, mostly from the Met Office, also announced as a “New flagship climate model” on the Met Office website. The other is Structure and Performance of GFDL’s CM4.0 Climate Model, by a team from GFDL and Princeton. Both papers are open-access.

Now you might think that the new models would be better than the old ones. This is mathematical modelling 101: if a model doesn’t fit well with the data, you improve the model to make it fit better. But such elementary logic doesn’t apply in the field of climate science.

The main “feature” (bug?) of the new models is their high climate sensitivity. Recall that the IPCC says that equilibrium sensitivity is 1.5–4.5°C, a range that hasn’t changed in 30 years. The Met Office paper comes up with a figure of 5.4°C, and the GFDL group say about 5°C, so they are both way outside the IPCC range. Of course, the useful idiots in the media are lapping this up and saying that the earth is warming more quickly than thought, which of course isn’t even what the papers are claiming.
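For context, in simple energy-balance terms equilibrium climate sensitivity is the radiative forcing from doubled CO₂ divided by the net climate feedback parameter, so a model’s ECS climbs as its net feedback weakens. A toy illustration (the feedback values are invented for the sketch, not taken from either paper):

```python
# Toy energy-balance relation: ECS = F_2x / lam, where F_2x is the
# radiative forcing from doubled CO2 (~3.7 W/m^2, the commonly used
# approximate value) and lam is the net climate feedback parameter
# in W/m^2 per deg C. The lam values below are invented.
F_2X = 3.7  # W/m^2

def ecs(lam):
    return F_2X / lam

for lam in (2.5, 1.2, 0.7):  # stronger -> weaker net feedback
    print(f"lambda = {lam:.1f} W/m^2/K  ->  ECS = {ecs(lam):.1f} C")
```

Running this shows how modest reductions in the assumed feedback parameter push ECS from below the bottom of the IPCC range to above the top of it.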

Given that the previous models were running too hot, as the IPCC graph above shows, and the new ones have a much higher sensitivity, the obvious question is how well the new models reproduce the 20th century. You have to wade through the UKESM1 paper to find the answer, but eventually you reach Figure 29, which compares the new model with HadCRUT4 from 1850 to the present:

(The GFDL paper has a very similar graph in Figure 12.) The model shows recent warming that is vastly greater than the observations. Clearly the model is far too sensitive. So what do the authors of the paper say about this? Believe it or not, they claim this, right at the top of the paper in the “key points” section:

* UKESM1 performs well, having a stable pre-industrial state and showing good agreement with observations in a wide variety of contexts.

They then repeat this falsehood in the abstract, claiming good agreement exactly where the agreement is particularly bad:

“Overall the model performs well, with a stable pre-industrial state, and good agreement with observations in the latter period of its historical simulations”

and then repeat it yet again in the “plain language summary” below that. Presumably this lie is designed to be regurgitated by the clueless media.

Even alarmist climate scientist and BDS-sufferer James Annan is scoffing at the paper’s claims, suggesting that it should say “UKESM1 does a great job at everything other than its primary function”.

Are these Met Office climate muddlers really so self-deluding that they think it shows good agreement? Yet again, Feynman deserves the last words: