A rapid, near-real-time analysis of the UK's record-breaking wet December in 2015 suggests that climate change increased the odds of the exceptionally high rainfall by 50-75%.

Carbon Brief takes a look at the research and quizzes the experts on the science behind attributing extreme weather events to human-caused climate change.

Warmest and wettest

Last week, the Met Office announced that December 2015 was the UK’s wettest on record, seeing more rain than any other month since 1910. It was also the UK’s warmest December over the same period.

A series of storms – first Desmond, then Eva, and finally Frank – dumped 230mm of rain on the UK during December, triggering flooding across much of Scotland, northern England and Northern Ireland.

After some rapid number-crunching, scientists at the University of Oxford and the Royal Netherlands Meteorological Institute (KNMI) have assessed the role climate change played in last month’s extreme weather.

The preliminary results – from three different approaches – indicate the human impact on climate was as large as, or even larger than, the impact of natural fluctuations in the Atlantic and Pacific oceans – even during a strong El Niño event.

Climate change and ocean variability each made the record rainfall totals 50-75% more likely, the researchers say, and doubled the chances of such a warm month. Random variability in weather also contributed to the severe conditions.
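A figure like "50-75% more likely" is shorthand for a probability ratio between two climates. The sketch below shows the arithmetic with hypothetical probabilities (the 0.010 and 0.016 values are illustrative, not the study's actual numbers):

```python
# Illustrative: how a "more likely" percentage relates to event
# probabilities with and without human influence on climate.
# The probabilities below are made-up placeholders.
p_counterfactual = 0.010  # hypothetical chance of the event without human influence
p_factual = 0.016         # hypothetical chance in today's climate

risk_ratio = p_factual / p_counterfactual
percent_more_likely = (risk_ratio - 1) * 100

print(f"risk ratio: {risk_ratio:.2f}")            # 1.60
print(f"{percent_more_likely:.0f}% more likely")  # 60% more likely
```

On this reading, the quoted 50-75% increase corresponds to a risk ratio of 1.5-1.75, and "doubled the chances" to a ratio of about 2.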

You can find more details on the findings, which haven’t yet been peer-reviewed, on the climateprediction.net website.

The researchers have used the three approaches in a number of attribution studies before, including earlier research into Storm Desmond – where they found the exceptional rainfall was 40% more likely because of climate change – and a paper on the recent Brazil drought, where they found that climate change had not made the dry spell more likely.

Q&A

Following the announcement of these results, Carbon Brief spoke to two of the scientists behind the research: Prof Myles Allen, a professor of geosystem science, and Dr Friederike Otto, a senior researcher. Both are at the Environmental Change Institute (ECI) at the University of Oxford – a leading centre for attribution research.

We begin with a question on how scientists choose which extreme weather events to study…

Carbon Brief: How do you decide which extreme events to look at? Is it more down to the resources available to you (time, computing power, etc) or the events themselves (location, severity, whether they’re in the news)?

Myles Allen: The World Weather Attribution project – led by Climate Central in the US, with key partners including ECI, KNMI in the Netherlands, the University of Melbourne, and the Red Cross Red Crescent (RCRC) – tries to do what it says on the tin: world weather attribution. So we aim to select events on the basis of impact, and get excellent input from RCRC on this, because they maintain global databases on the humanitarian and economic impacts of all kinds of natural disasters. That said, clearly we don’t have equal capabilities everywhere, although we are working with partners in vulnerable regions of the world to improve that, so right now there is obviously a bias towards our own back yards: Northwest Europe, Australia and New Zealand.

One thing we don’t do is select events on the basis of whether we think climate change made them more likely to occur. I think, as a group, we have probably published as many null or negative results as we have positive attribution statements, although there is probably a tendency for the positive statements to get more publicity. My personal view is that, in the long term, attribution should be a routine part of any package of climate services, so a quantitative assessment of how various external drivers may be making weather events more or less likely to occur should become just part of the job of the world’s meteorological services. We currently get a lot of qualitative hand waving about how various drivers may have contributed, leaving the public pretty much in the dark (or worse, guided by which papers or websites they read rather than the evidence) about which drivers are most important.

This last December is a case in point: everyone who was prepared to listen probably got the message that both natural ocean variability, including variations in the Atlantic and the El Niño event in the Pacific Ocean, and possibly human influence on climate, contributed to our warm, wet December. But which was more important? Was human influence a tiny effect compared to El Niño, or a substantial one? Our preliminary results, released today [see above], suggest that the role of human influence on climate was as large or larger than the influence of these patterns of ocean variability, but that random and unpredictable atmospheric weather noise played an important role as well. That puts these influences into context, and helps people understand what is important. This kind of quantitative assessment could be routine; I firmly believe it should be routine; and I’m happy to say that a lot of Met Offices, including our own in the UK, are moving fast in this direction.

CB: Similarly, is it harder to attribute a single event – such as Storm Desmond – or a series of events – such as the UK’s wet and warm December?

MA: It depends on the type of event. Very generally speaking, we can normally say things with more confidence about longer timescales and larger spatial scales, simply because the models we use have limited temporal and spatial resolution. But there are exceptions: in some ways what is happening to daily rainfall anomalies in Northwest Europe may be simpler than what is happening to seasonal anomalies, because it is more closely tied to the thermodynamics of a warming atmosphere.

It also depends on the method used for attribution. In Oxford, we rely on physically-based meteorological models. So clearly our confidence in our attribution statements is limited by how realistically these models simulate the processes that contribute to the event of interest. A particularly important issue in Northwest Europe is the representation of atmospheric blocking and the jet stream, because a small shift in jet location or change in blocking frequency can have a big impact on the risks of extreme weather events. Our current model does a relatively good job simulating the statistics of the jet stream compared to other current climate-resolution models, but a higher-resolution model would do better, and this is definitely a direction we would like to move in, resources permitting.

Other groups, for example KNMI, use statistical analysis of observed records, so their confidence is often limited by sample size, and for them, short-duration events are generally easier because the samples are larger.

CB: Are the numbers always positive? Is climate change making anything less likely? Have you ever got a zero result?

MA: Absolutely not; always positive, that is. Climate change is making many events less likely to occur, some of which can still occur by chance, others of which don’t occur, but their non-occurrence is still important economically, because of the value of the damage they don’t do. An example of an event that did occur would be the exceptionally cold UK December of 2010. Both we and the Met Office concluded that it was made less likely by climate change. An example of an event that didn’t occur would be a hypothetical spring-time flood in England in 2001: this was the focus of a study cited in the last IPCC report, which found that such a flood had become less likely because spring floods tend to be triggered by the rapid melting of accumulated snow – a sequence of events made less likely by climate change. There are also plenty of events where we can’t tell whether climate change is having any impact at all, and others where we simply don’t have the tools yet to say for sure either way.

CB: How do you actually do an attribution study? What are the steps?

MA: Our experiments are very simple in principle. We run a global atmospheric model thousands of times driven with sea surface temperatures and atmospheric composition representing the world as it is today, and then repeat with these “boundary conditions” modified to represent a “world that might have been” in the absence of human influence on climate, and compare the statistics of extreme weather events between these two ensembles. We need thousands of runs because we are generally interested in relatively rare events. To detect, say, a doubling of the odds of a one-in-one-hundred-year event, you need to be comparing multi-thousand-member ensembles. And we also need to allow for uncertainty in the pattern of human influence, which means we need to explore even more options.
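The ensemble comparison Allen describes can be sketched in a few lines: count how often an extreme threshold is exceeded in a "world as it is" ensemble versus a "world that might have been" ensemble, then compare the probabilities. The synthetic rainfall distributions below stand in for real model output; the means, spread and threshold are illustrative assumptions, not values from the study.

```python
# Illustrative sketch of the ensemble-comparison step in an attribution
# study. Synthetic Gaussian "rainfall totals" stand in for climate model
# output; all numbers are hypothetical.
import random

random.seed(0)
N = 10_000  # members per ensemble; real studies use thousands of runs

# Hypothetical December rainfall totals (mm): the factual ensemble is
# assumed slightly wetter on average than the counterfactual one.
factual = [random.gauss(160, 35) for _ in range(N)]
counterfactual = [random.gauss(150, 35) for _ in range(N)]

threshold = 230  # mm, the observed record total

p1 = sum(r > threshold for r in factual) / N
p0 = sum(r > threshold for r in counterfactual) / N

print(f"P(exceed | factual)        = {p1:.4f}")
print(f"P(exceed | counterfactual) = {p0:.4f}")
if p0 > 0:
    print(f"risk ratio = {p1 / p0:.2f}")
```

Because the event is rare in both ensembles, the exceedance counts are small relative to N, which is why multi-thousand-member ensembles are needed to estimate the risk ratio with any confidence.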

The only way we can run these ensembles is by enlisting the help of the general public, using spare processing capacity on volunteers’ computers – we also think this is the most environmentally-friendly way of doing this, because it means we don’t need an air-conditioned hangar to house all the necessary processors. We are, of course, deeply grateful to all the volunteers who have given so much computing time to the project over the years. I’m pretty sure, in terms of raw processing throughput, we remain the world’s largest climate modelling project – certainly in terms of number of model-years simulated per month.

CB: There seems to be a broad range in results – some studies say the odds of certain events increase by 25% or 40%, while others find events are, say, seven times more likely – why is this? How high can we expect these numbers to get?

MA: That’s just the way it is: it depends on the event. Human influence is making some events much more likely, others a bit more likely, and still others less likely. It is very important we don’t just focus on the events that have been made much more likely, because a small increase in the risk of very high-impact events could be just as important or more so.

Eventually, we may start to see events that simply could not have occurred at all in the absence of human influence on climate, so I guess for such an event one would have to say it was made infinitely more likely to occur. But for most of the short-duration, localised events that most people think of as weather, that point is a very long way off indeed. And I’m not sure it really matters anyway whether an event has been made 50 times or 1000 times more likely by human influence, since that number would be almost entirely dependent on your estimate of how unlikely it would have been in a world without human influence, which is a bit of a moot point.

CB: Finally, the latest findings from Oxford on recent extreme weather in the UK have been published online ahead of being peer-reviewed for an academic journal – why is this? Is there a danger that findings or conclusions will change once the work is peer-reviewed?

MA: Obviously there is a compromise to be reached between providing numbers to the public when we have them available – particularly, in our case, when the public helped us generate those numbers in the first place – and holding numbers back until they have gone through the peer-review process. We only publish numbers based on peer-reviewed methods and models, and submit our most interesting results for peer review as a matter of course. As I said before, in the end, attribution should be as much part of a comprehensive suite of climate services as, for example, the seasonal forecast. Individual seasonal forecasts aren’t subject to peer review before they are issued, but seasonal forecasting methods certainly are – at least in academically responsible forecasting centres like the Met Office – and very often the most interesting individual forecasts provide case studies for subsequent papers. I think we should see attribution results in much the same way.

Friederike Otto: I think it is important to note that when we publish studies before peer-review, we use methodologies that have been peer-reviewed, and we are trying to use more than one methodology to assess the confidence in our results. Also, the question of whether or not anthropogenic climate change played a role in the extreme event gets asked when the event happens and someone will answer these questions. So, as long as we make clear what we do, what our assumptions are, how we define the event, etc, it would probably not be bad for the public debate around the event if someone like us – in the sense of the attribution community – who can provide scientific evidence gives these answers, even if preliminary. Of course, only in the cases where our tools and methods allow for a robust answer.