Rasmus & Gavin

Scientists like going to conferences (despite their often being held in stuffy hotel basements) because of the conversations. You can find people who know what they are talking about, and focus discussions clearly on what is important rather than what is trivial. The atmosphere at these meetings is a mix of excitement and expectation, as well as the pleasure of seeing old friends and colleagues.

The two of us have just got back from the excellent ‘Open Science Conference’ organised by the World Climate Research Programme (WCRP) in Denver, Colorado. More than 1900 scientists from 86 different countries participated, and the speakers included some of the biggest names in climate research and many past and present IPCC authors.

The focus of the conference was on how climate research should be done in order to be of service to society. Hence, much attention was given to how to create useful climate information, or ‘actionable science’. This is supposed to be part of a Global Framework for Climate Services (abbreviated to GFCS to make it a bit more cryptic). But there is a lot of discussion about exactly what form this information should take, and many (not necessarily exclusive) ideas were in the air.

Bruce Hewitson and David Behar both made strong cases that the context in which climate-related information is received is key to its utility. In a panel discussion with various ‘stakeholders’, a good argument was made that better ‘translation’ of climate information is the priority, and that perhaps the current demand is not for more science results. Perhaps the information we already have about climate is good enough for many purposes.

However, communicating that understanding to people who might benefit from it remains a work in progress. Communication is an end-to-end, two-way process, not simply a matter of sending off a message and hoping the recipient understands. There are also ethical concerns linked to the context – what are the consequences of an incorrect forecast? Are there inequities in who benefits and who doesn’t? Scientists, on their own, are not necessarily well-equipped to deal with this.

There is an expectation that climate services should include seasonal-to-decadal forecasts, and that the climate information should be ‘seamless’ across time scales as well as spatial scales. The tools for making such forecasts, however, are far from validated, and there are large gaps in our knowledge about whether (and where) predictability for the next few seasons or decades can be found. Part of this knowledge gap stems from the lack of sufficiently long series of measurements from vital regions; the non-linear complexity of the system also makes it difficult to discern any precursor signals. There are some potential leads though: sea-ice, snow-cover, soil moisture, ocean temperatures and the state of the stratosphere all seem to give some contingent initial-condition predictability beyond that expected for standard weather forecasts. The extent to which this will be useful is still unclear.

One session was dedicated to observational data, where many people stressed the need to sustain current observational platforms and, better yet, to add a degree of redundancy as quality control. For instance, an overlap between an old satellite mission nearing the end of its life and its successor can improve the inter-calibration of the results. Unfortunately, failing to do this can produce big jumps in the reanalyses as data sources come in and out. Kevin Trenberth gave a great overview of what was, and was not, robust across the multiple reanalysis products now available (see our previous post for more details on this).

There were two interesting topics that revealed some tension among participants: the future of climate modelling, and the attribution of extreme events (they are of course connected, but that wasn’t really the issue).

Christian Jakob argued for a return to the ‘foundations’ of climate modelling (which he implied was atmospheric physics) and a concerted effort to develop an internationally funded super-model that would be significantly better resourced than any existing climate model development efforts. This was a contrast to the presentation from Sandrine Bony on the CMIP5 multi-model ensemble which made a point of celebrating the diverse nature of the models and the gains that come from that. Post-talk discussions on these two views were ‘spirited’ (and not just because of the cash bar).

The other topic that exercised people’s conversational abilities was to what extent, and how, extremes might be attributable to climate change. Peter Stott gave a good overview of recent attempts, and clearly favoured the establishment of better tools and institutions to do this on a more operational basis.

Almost all of the speakers have written position papers, so even if you weren’t there, you can get a sense of the discussion. The conference also had Facebook, Twitter and LinkedIn pages for social media access.

The contrast between the conversations at this meeting and what passes for serious discussion in the media and blogosphere was very clear.