April 20, 2013 — andyextance

If you want to plan for the future, or even for the present, knowing that our climate is changing, what’s the best way to do it? That’s a question that David Stainforth from the London School of Economics, Sandra Chapman from the University of Warwick and Nicholas Watkins from the British Antarctic Survey have puzzled over. And while David is co-founder of the climateprediction.net project that borrows spare time on people’s computers to run climate models, he doesn’t feel that models are always the best source of information.

“It’s clear to me that the detailed local information on how climate is changing, and what it will be like in 2050, can’t be had from climate models today,” David told me. “They’re just not that good. And yet I work a lot with the adaptation and impacts community, who are interested in what’s happening ‘here’, on a very local basis.” So together David, Sandra and Nicholas have turned to measured data, devising a simple way to pick the most important local climate changes from it.

Weather stations around the world monitor daily conditions, and combine to create a record containing occasional extremes, lots of ordinary days, and everything in between. Knowing how common these conditions are is important for people who want to prepare for future climate change. “For flood risks, you’re worried about going over certain rainfall amounts in a given time,” David explained. “Managers of overheating buildings are worried about what proportion of the time temperatures pass certain levels.”

Dicey approach

David and his teammates’ new method uses the distribution of different conditions, the number of days on which each occurred at a given place, to reveal local climate changes. Looking specifically at temperature, they treated the proportion of days with a given temperature in the entire record as two different types of probability. The first, the probability distribution function or pdf, is perhaps the most familiar type: the chance of a specific temperature occurring. Taking the roll of a die as a comparison, the pdf is the kind of probability that gives each number a 1 in 6 chance of landing face up.

The second, the cumulative distribution function or cdf, looks at the number of days in a time period where measured conditions were below a certain limit. “For a dice, the probability that it’s below 2 is 1 in 6, that it’s below 3 is 2 in 6, that it’s below 7 is 100%,” David said. “You can also look at a probability limit that temperature stays below 90% of the time, for example.”
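The difference between the two is easy to see by computing both for David’s die example. This is purely an illustrative sketch; the variable names are mine, not the authors’:

```python
# pdf and cdf for a fair six-sided die, mirroring the examples above.
faces = [1, 2, 3, 4, 5, 6]

# pdf: the chance of each specific outcome.
pdf = {face: 1 / 6 for face in faces}

# cdf: the chance the outcome falls below a given value, so "below 2"
# counts only a 1, "below 3" counts a 1 or a 2, and so on.
def cdf(value):
    return sum(p for face, p in pdf.items() if face < value)

print(cdf(2))  # probability a roll lands below 2: 1 in 6
print(cdf(3))  # below 3: 2 in 6
print(cdf(7))  # below 7: certain
```

The same idea carries over to temperatures: swap the six faces for temperature bins, and the cdf at a threshold becomes the fraction of days that stayed below it.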

Combining both types tells the scientists whether climate change is affecting the probability of some temperatures more than others. “One option is that the distribution just shifts, everything gets 1°C or 2°C warmer,” David said. “We are asking if it does simply shift, or does it change shape as it’s moving? The average temperature may shift by one or two degrees, but the hottest days might warm by 4°C and the coldest days might not warm much at all. This method allows us to pick that out of the observations for specific locations.”

David, Sandra and Nicholas applied their ideas to data from the European project E-OBS. Dividing Europe into a grid, it has compiled observations of rainfall, atmospheric pressure, and maximum, minimum and average daily temperature since 1950. “We have this marvellous dataset that allows us, without a great deal of work, to just go away and process it and focus on the maths and the outputs,” David said.

Shape sorting

In a paper published on Monday they first used their method at four locations: Leiden, The Netherlands; Leon, Spain; Florence, Italy; and a point in West Wales, UK. “We looked at distributions in the 1950s and the 2000s, and how they’ve changed,” David said. They looked at daily temperatures during June, July and August in each decade. In particular, they worked out cdfs for those two periods and subtracted the earlier one from the later one. Dividing the result by the pdf of all temperatures for the whole period from 1950-2009 gave what the scientists called a “trend parameter”, revealing changes in the distribution’s shape.
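The recipe above can be sketched in a few lines of NumPy. This is my own rough rendering on synthetic data, not the authors’ code or the E-OBS record; the bin choices and sample sizes are invented for illustration:

```python
import numpy as np

# Synthetic daily summer temperatures for two decades (not E-OBS data).
rng = np.random.default_rng(42)
temps_1950s = rng.normal(22.0, 3.0, size=920)  # ~10 summers of daily readings
temps_2000s = rng.normal(24.0, 3.5, size=920)  # warmer, slightly wider spread

thresholds = np.arange(12.0, 36.0, 0.5)        # temperature thresholds, in degrees C

def empirical_cdf(data, thresholds):
    """Fraction of days with temperature below each threshold."""
    return (data[None, :] < thresholds[:, None]).mean(axis=1)

# Subtract the earlier cdf from the later one.
delta_cdf = empirical_cdf(temps_2000s, thresholds) - empirical_cdf(temps_1950s, thresholds)

# pdf of the combined record, estimated as a histogram density on the same grid.
all_temps = np.concatenate([temps_1950s, temps_2000s])
pdf, _ = np.histogram(all_temps, bins=np.append(thresholds, thresholds[-1] + 0.5),
                      density=True)

# Trend parameter: change in cdf scaled by the overall pdf.  Where the pdf is
# near zero the ratio blows up, so mask sparsely sampled temperatures.
trend = np.where(pdf > 0.01, delta_cdf / np.maximum(pdf, 1e-12), np.nan)
```

With this sign convention, warming shows up as negative values (fewer days below each threshold in the later decade), and a trend parameter that varies with temperature, rather than staying flat, signals a change in the distribution’s shape rather than a simple shift.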

Leiden and Florence were the hottest places. More interestingly, their trend parameters showed that warming affected their distributions most at the limit temperatures stay below 90% of the time. Between the 1950s and the 2000s, that limit increased by 2-3°C. West Wales showed no significant local changes, while Leon warmed 1-2°C across its entire distribution. As well as these four places, the scientists also worked out and mapped trend parameters across Europe. To show that the map reflects real climate changes, they compared it against trend parameters from randomly shuffled E-OBS temperature data. For much of Europe they found a less than 2% chance that the temperature changes of the past 60 years could have occurred at random.
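The shuffling comparison is a form of permutation test, and can be illustrated with a toy version. This simplified sketch is mine: it uses synthetic temperatures and a plain difference of means as the statistic, where the paper shuffles the real E-OBS record and compares trend parameters:

```python
import numpy as np

# Toy permutation test in the spirit of the shuffling check described above.
rng = np.random.default_rng(1)
temps_1950s = rng.normal(22.0, 3.0, size=900)  # synthetic early decade
temps_2000s = rng.normal(24.0, 3.0, size=900)  # synthetic later, warmer decade

observed = temps_2000s.mean() - temps_1950s.mean()

# Shuffle the pooled record many times and ask how often chance alone
# produces a change at least as large as the one observed.
pooled = np.concatenate([temps_1950s, temps_2000s])
null_changes = []
for _ in range(2000):
    shuffled = rng.permutation(pooled)
    null_changes.append(shuffled[900:].mean() - shuffled[:900].mean())

p_value = (np.abs(null_changes) >= observed).mean()
```

A small p_value says the measured change sits far outside what shuffled data produce, which is the sense in which the paper can claim a less than 2% chance of the European changes arising at random.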

David’s team has already looked in more detail at its maps of Europe, and hopes to submit its results for publication in a research journal next week. “This is transferring observations of weather into observations of climate, and of climate change,” he underlined. “Ideally, as a physicist, I’d like to have some explanation. Why does it vary by location? What are the local drivers? And it’s definitely not saying how climate will change in the future. But it does give us a much stronger handle on how the climate has been changing in the past and I think that’s a good starting point.”

Journal reference:

Chapman, S., Stainforth, D., & Watkins, N. (2013). On estimating local long-term climate trends. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 371(1991), 20120287. DOI: 10.1098/rsta.2012.0287
