by Judith Curry

This post discusses Workshop presentations on the utility of climate models for regional adaptation decisions. It is a follow-on to the two previous posts in this series, and highlights two presentations on climate modeling.

Tim Palmer – Oxford University: On seamless prediction and the reliability of climate forecasts



Palmer introduced five categories of forecast reliability in the context of a reliability diagram: 5 – perfect; 4 – still very useful for decision making; 3 – marginally useful; 2 – not useful; 1 – dangerously useless. Examples were given of regional variations in reliability in the context of ECMWF seasonal forecasts.
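
For readers who have not met one, a reliability diagram compares the probabilities a forecast system issues with the frequency at which the event actually occurs: a perfectly reliable system lies on the 1:1 line, and Palmer's categories grade how far a system departs from it. The Python sketch below is my own illustration of the basic construction, with made-up synthetic data; it is not taken from Palmer's talk.

```python
import numpy as np

def reliability_diagram(forecast_probs, outcomes, n_bins=10):
    """Bin forecast probabilities for a binary event and compute the
    observed frequency of the event in each bin. A perfectly reliable
    forecast lies on the 1:1 line (forecast probability == observed
    frequency)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(forecast_probs, edges) - 1, 0, n_bins - 1)
    mean_fc, obs_freq, counts = [], [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            mean_fc.append(forecast_probs[mask].mean())
            obs_freq.append(outcomes[mask].mean())
            counts.append(mask.sum())
    return np.array(mean_fc), np.array(obs_freq), np.array(counts)

# Illustrative synthetic data: an overconfident forecast system
rng = np.random.default_rng(42)
p_true = rng.uniform(0.0, 1.0, 5000)                 # true event probabilities
outcomes = (rng.uniform(0.0, 1.0, 5000) < p_true).astype(float)
forecasts = np.clip(0.5 + 1.4 * (p_true - 0.5), 0.0, 1.0)  # pushed toward extremes
mean_fc, obs_freq, counts = reliability_diagram(forecasts, outcomes)
```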

Are regression lines of reliability diagrams from seasonal predictions useful for calibrating low-resolution climate change projections? Preliminary results show that such regional calibration of climate change projections of precipitation does improve the skill of lower-resolution climate model predictions.
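
As a rough illustration of the calibration idea (a hedged sketch only, continuing from the code above, and not Palmer's actual procedure): fit a regression line to the reliability diagram built from seasonal hindcasts for a region, then use that line to remap the probabilities issued by a lower-resolution projection for the same region.

```python
# Hedged sketch of the calibration idea, continuing from the code above.
# Fit the reliability-diagram regression (observed frequency as a function
# of mean forecast probability), weighting each bin by its sample count.
slope, intercept = np.polyfit(mean_fc, obs_freq, 1, w=counts)

def calibrate(p):
    """Remap a raw model probability onto the reliability regression line."""
    return float(np.clip(intercept + slope * p, 0.0, 1.0))

raw_prob = 0.8  # e.g. a low-res projection's P(above-normal precipitation)
print(f"raw: {raw_prob:.2f}  calibrated: {calibrate(raw_prob):.2f}")
```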

This talk provided an excellent example of the utility of seamless weather-to-climate model prediction, whereby short-term, high-resolution simulations are used to improve longer-range, low-resolution climate model simulations.

In his verbal comments and in discussion, Tim Palmer argued strongly and eloquently for the level of research commitment required to meet the enormous challenges of predicting regional climate, a challenge he regarded as greater than that of identifying the Higgs boson.

Leonard Smith – London School of Economics: The user made me do it: seamless forecasts, higher hemlines, and credible computation

Well, Lenny certainly gets first prize for best title. His first slide includes a statement that caught the attention of Uncertain T. Monster: ‘It’s OK to say that we know we don’t know.’ Smith argued that climate modeling falls short of the standards of other computational fluid dynamics groups in terms of validation, verification, uncertainty quantification, and transparency, stating ‘Trust can trump uncertainty.’

Smith describes the following limits to transparency: dangerously schematic schematics; showing anomalies rather than a real-world quantity (which can hide systematic errors that are larger than the observed anomalies); equidismality (rank-order beauty contests of climate models, without comparison against some absolute measure of quality); buried caveats; and burying the bad news about model performance.

Smith provides the following summary of the demonstrated value of weather/climate forecasts as a function of lead time:

Medium-range: Significant, well into week two +

Seasonal: Yes, in some months and regions

Decadal: Not much (global and regional)

With regard to using global climate models for regional climate variability: “In the long range, simulations which require significant statistical adjustment (or variance inflation) at global scales are NOT rational candidates for local use (e.g. dynamical downscaling).” This is a key point that was brought out in discussion: Smith’s presentation basically threw under the bus the widespread practice of dynamical downscaling of global climate model projections using high-resolution regional models.

Regarding the credibility of computation: in climate-like tasks, the impossibility of experiment places a heavier burden on analysts, and the burden of demonstrating relevance and credibility rests squarely on the analyst’s shoulders. In Smith’s experience, across all of CFD, climate modelling has been the slowest scientific field to embrace this responsibility, whereas other fields are embracing insightful parallels for structural model inadequacy.

Smith argues that the climate community has oversold climate models: ‘How do we ease user pushback when the current oversell becomes clear?’ He then asks: can (we) climate modelers stop digging? Information we are supplying which is not ‘adequate for purpose’ is being interpreted as if it were. A wave of valid criticism of the presentation and interpretation of models may well come from physics, statistics, and even (has already come from) honest policy-maker questions to the IPCC. The political/public interpretation might be that the anti-science lobby was right in the first place. How do we clarify the limits of our understanding on more favourable terms?



Take-home points:

There is no rational justification for using “probability distributions” derived from climate model diversity as decision-relevant probabilities. The tools of Decision Theory 101 do not apply. Vulnerability approaches do not require them.

No probability forecast is complete without an estimate of its own irrelevance.

The values of model simulations must be quantified outside model-land (and will vary with the space and time scales of the task and the lead time relevant to the decision).

Solid insights of climate science may be obscured if the severe limits on our ability to see the details of the future, even probabilistically, are not communicated clearly.

We need to better distinguish “valid methodology” from “useful tool”, and avoid waffle (“yes, but when the signal comes out of the noise”).

When global statistics of GCM distributions require significant correction or inflation, dynamical downscaling is a nonsense.

We can provide seamless forecasts, expose hemlines due to our limited understanding, and support real user needs with more credible computation, aiming for true transparency and engagement. Other scientific disciplines are doing this now, even if they were not when climate modelling started!

In terms of philosophy of climate modeling, there is much in Smith’s presentation that is not included in this summary, along with a useful reference list.

JC reflections

These two presentations were fascinating for someone like me, who both uses climate model outputs and philosophizes about climate models. The reaction of the non-climate scientists in the audience was that, while they didn’t quite follow the presentations, these were the first real criticisms of climate models they had encountered (outside of something that could be dismissed as coming from the ‘anti-science’ crowd).

The limits of climate models for regional adaptation decision making were also highlighted in several other talks (to be discussed in Part V). This led Brian Hoskins to speak up in defense of climate models, which he introduced by saying that normally he finds himself criticizing them. He felt the impression left on decision makers was that climate models are useless for regional adaptation decisions, and was concerned that we were throwing out the baby with the bathwater. There are two pre-conceived notions that can color how you view this discussion of climate models: one is that regional decision makers continue to use the historical record to drive decision making (e.g. the historical 50-year flood), which does not account for climate change; the second is that international bodies (e.g. the UNFCCC) seem to be operating in a climate-model command-and-control mode.

In the discussion, all agreed that global climate models were potentially very valuable and could be made more useful for adaptation decision making, but that this potential has not yet been realized. Apart from the important points made by Tim Palmer regarding the challenges of improving climate models themselves, the following strategies would increase the utility of climate models to support adaptation decisions:

Increase the size of the ensemble.

Use the climate models to explore possible future scenarios that extend beyond emissions, e.g. solar forcing and volcanic eruptions.

Develop improved strategies to extract useful information from the climate model simulations, such as those suggested by Tim Palmer.

Conduct sensitivity studies that help improve the fidelity of decadal simulations and also improve understanding of model limitations and uncertainty.

Part V (stay tuned) will present some alternative data-driven (and model-data) approaches to supporting regional adaptation decision making.