BOM homogenization errors are so big they can be seen from space

It’s just not cricket. And in so many ways.

Shame to let a perfectly good dataset go to waste… Australian data comes from some of the longest-running stations in the Southern Hemisphere; it could be useful. Instead we get more evidence here that the BOM’s magical and secret homogenization adjustments can take poor data and spread false signals into better data. Homogenization errors are already visible in a site-by-site analysis, but this shows the problems may be so big they affect averages across the whole of Australia, and we can detect them with satellites.

Tom Quirk continues comparing the satellite record of Australia with the BOM surface version. Previously, he (and for the record, Ken Stewart in 2015) showed that some discrepancies are due to the effect of heavy rain or drought. But now he looks further and finds that, not so coincidentally, the largest gaps and most “inexplicable” differences occur in the mid-nineties, the same years the BOM shifted from traditional thermometers in large Stevenson screens to electronic probes. Around the same time, the large screens were often swapped for much smaller ones as well: double jeopardy for data. Oddly, spookily, the BOM makes many adjustments to data during those same years that are vaguely labelled “statistical” adjustments (rather than specifically attributed to site moves or screen changes), and it is exactly these kinds of adjustments that are implicated here. (All together now with the cliché du jour: hullo lies, damned lies and “statistics”?)

The big clue comes from correlations: on a yearly average the two datasets look very similar, but that similarity is artificial. On a monthly scale a few key correlations fall apart.

You might think that changing instruments should be no big deal: the BOM just has to run both types of instrument side by side for a couple of years and analyze and adjust accordingly. But as we’ve heard before, they claim they do that, yet they won’t publish it, and when skeptics like Bill Johnston ask, they admit they’ve deleted the data. Instead of using this simple, obvious approach, the BOM “corrects” the record with data from another statistically selected thermometer which may be hundreds of kilometers away and which may also have been changed, shifted, degraded, watered, cleared, or had a ten-lane highway installed next door.

Tom Quirk implies homogenization is a process that can be improved, but I think it should be thrown away; we need to start from scratch. We need a proper historical, documentary analysis of each and every site first (and a full independent audit of the BOM). There is no point blending bad data with good. False signals are smeared across real data. Homogenization is vandalism.

If the Australian Bureau of Meteorology’s work were a million-dollar scandal involving celebrities breaching international rules and hiding secrets down their pants, they’d be on every news talk show and the problems would’ve been fixed ten years ago. Instead it’s a billion-dollar scandal, international guidelines are blitzed, and meh.

Jo

h/t to both Tom and Barry C for the cricket scandal comparison.

————————————————————————————–

Guest Post by Tom Quirk

Comparison of UAH and BOM temperatures and homogenization, Part II

(Part I: Mystery solved: Rain means satellite and surface temps are different.)

Near-ground temperatures in Australia have been subject to a process called homogenization. This process adjusts temperatures at a given location to take into account nearby temperature measurements, as preparation for area estimates of temperature. Fortunately, the satellite measurements of the lower troposphere (UAH) provide an opportunity to audit the Australia-wide near-surface measurements of the BOM. Figure 1 shows a comparison with a correlation coefficient of 83 +/- 5%, which is very respectable.

However when the comparison is made on a monthly basis the correlation coefficient falls to 68 +/- 2 %. That detail is shown in Figure 2.
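This annual-versus-monthly gap is exactly what a localized data artefact would produce. The toy calculation below (entirely synthetic numbers, not the actual UAH/BOM series) shows the mechanism: a shared year-scale signal keeps the annual-average correlation high, while a step artefact injected into a single month drags the all-months correlation down.

```python
# Synthetic illustration: annual averaging can hide monthly disagreement.
import numpy as np

rng = np.random.default_rng(0)
yearly = rng.normal(0, 0.5, size=(39, 1))          # shared year-scale climate signal
uah = yearly + rng.normal(0, 0.3, size=(39, 12))   # independent monthly noise
bom = yearly + rng.normal(0, 0.3, size=(39, 12))
bom[:, 11] += rng.normal(0, 0.8, size=39)          # hypothetical December-only step artefact

# Correlation of the 39 annual means vs correlation of all 468 monthly values
annual_r = np.corrcoef(uah.mean(axis=1), bom.mean(axis=1))[0, 1]
monthly_r = np.corrcoef(uah.ravel(), bom.ravel())[0, 1]
print(f"annual r = {annual_r:.2f}, monthly r = {monthly_r:.2f}")
```

The annual correlation stays high because averaging twelve months suppresses both the independent noise and the one-month artefact, while the monthly correlation carries the artefact at full strength.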





The range of values for the correlation coefficient is from a maximum of 91% to a minimum of -8%. Curiously, the loss of correlation occurs in the period 1995 to 1998 at the same time as the automatic weather stations were introduced.

This loss of correlation will be examined first on a year-by-year basis and then on a month-by-month basis from 1979 to 2017.

12 monthly measurement correlations

The first test is to look at the 12 monthly measurement correlations year by year, to see if any particular years stand out. Figure 3 shows extremes from a high correlation coefficient of 88% in 1999 to a low of 12% in 1996. The average 12-month correlation coefficient is 64%, to be compared with the correlation coefficient of 83% for the 39-year annual time series.
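The year-by-year test can be sketched as follows (again with synthetic data): correlate the twelve monthly values within each year between the two datasets, and a year whose surface values are corrupted (here crudely, by flipping their sign) stands out with a much lower within-year correlation.

```python
# Synthetic sketch of the per-year correlation test.
import numpy as np

rng = np.random.default_rng(3)
common = rng.normal(0, 0.5, size=(39, 12))          # shared month-to-month weather signal
uah = common + rng.normal(0, 0.2, size=(39, 12))
bom = common + rng.normal(0, 0.2, size=(39, 12))
bom[16] = -bom[16]                                  # crudely corrupt one year's surface values

# Within-year correlation of the 12 monthly values, for each of the 39 years
per_year = np.array([np.corrcoef(uah[y], bom[y])[0, 1] for y in range(39)])
print(f"worst year index = {per_year.argmin()}, r = {per_year.min():.2f}")
```

The corrupted year drops far below the others, which is the kind of signature Figure 3 shows for 1996.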





The temperature anomalies for the two years with the lowest correlation coefficients, 1996 and 1997, are shown in Figure 4. There are very large temperature anomaly differences of between 1 and 2°C.

1979 to 2017 measurement correlations month by month

The second test is to look at the 39-year measurement correlations month by month, to see if there are particular months where the two datasets diverge. This can be seen in Figure 5 (left), which shows most months have a decent correlation coefficient above 70%, peaking at 88% in September. But things come apart in February and December, when correlations fall to 40%. In the ten-year periods 2007 to 2017 and 1979 to 1989, the December correlation falls to -40% (Figure 5, right).
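The per-month calculation behind Figure 5 can be sketched like this (synthetic data again, not the real series): for each calendar month, correlate its 39-year series between the two datasets. An artefact confined to December shows up as a depressed December coefficient.

```python
# Synthetic sketch of the per-calendar-month correlation test.
import numpy as np

rng = np.random.default_rng(1)
yearly = rng.normal(0, 0.5, size=(39, 1))           # shared year-scale signal
uah = yearly + rng.normal(0, 0.3, size=(39, 12))
bom = yearly + rng.normal(0, 0.3, size=(39, 12))
bom[:, 11] += rng.normal(0, 1.2, size=39)           # strong December-only artefact

# For each calendar month, correlate the two 39-year series
per_month = np.array([np.corrcoef(uah[:, m], bom[:, m])[0, 1] for m in range(12)])
print(per_month.round(2))
```

December's coefficient sits well below the other eleven months, mirroring the pattern in Figure 5.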

Scatter plots of low-correlation months also show some significant differences (Figure 6). Note the quite different trend lines for December in Figure 6 (right), which reflect the positive and negative December correlation coefficients shown in Figure 5 (right).





Source of low correlations from ACORN-SAT data

The Australia-wide temperature is constructed using ACORN-SAT temperatures. ACORN-SAT is the official dataset used to report on climate variability and change by the Australian government, CSIRO, and also university researchers. Adjustments are made as step-changes, which are promulgated backwards in time. Temperature measurements are homogenised, that is to say, adjusted by reference to nearby temperature measurements.

The reasons for the temperature adjustments for the period 1979 to 2017 are listed below with the number of changes made for each class of adjustment. Note that there is no supporting observational evidence for the changes when they are described as “statistical” adjustments.

Adjustments by class:

Statistical: 91
Move: 80
Merge: 52
Move/screen: 2
Screen: 2
Site env: 3
AWS: 2
Total: 232

In addition, 65 of the 232 all-year adjustments also carry seasonal adjustments:

Seasonal changes:

Summer (Dec, Jan, Feb): 20 total, of which 3 statistical
Autumn (Mar, Apr, May): 11 total, of which 1 statistical
Winter (Jun, Jul, Aug): 14 total, of which 1 statistical
Spring (Sep, Oct, Nov): 20 total, of which 3 statistical

The years in which adjustments are made are shown in Figure 7. The period 1993 to 1998 shows a peak in adjustments, and this is the period when the UAH – BOM 12-monthly correlations are at their lowest.





The period 1993 to 1998 is when the automatic weather stations (AWS) replaced mercury and alcohol thermometers. Consequently, sites were moved and time series merged.

This would explain the loss of correlation between lower troposphere and near surface temperatures.

The month in which adjustments are made is shown in Figure 8. The changes are made on the first of the month, so the temperature adjustment first appears in the previous month: a 1st January 1995 change is applied to all preceding days, months and years, starting at 31st December 1994.
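The backwards step change described above can be sketched in a few lines (my reading of the description, illustrative only, not BOM code): an adjustment dated 1 January 1995 is applied to every observation strictly before that date, so it first shows up on 31 December 1994.

```python
# Minimal sketch of a backwards-propagated step adjustment.
from datetime import date, timedelta

# A week of daily temperatures straddling the adjustment date (placeholder values)
temps = {date(1994, 12, 28) + timedelta(days=i): 25.0 for i in range(7)}

adjustment_date = date(1995, 1, 1)
delta = -0.5                                 # hypothetical step size in deg C

# Apply the step to every day strictly before the adjustment date
adjusted = {d: (t + delta if d < adjustment_date else t) for d, t in temps.items()}

print(adjusted[date(1994, 12, 31)], adjusted[date(1995, 1, 1)])
```

This is why a 1st-of-the-month adjustment alters the previous month's values, and why the monthly distribution of adjustments matters for December in particular.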





The monthly distribution of adjustments explains the loss of correlation in December (Figure 5). Looking at the years when adjustments were made (Figure 7), there are no statistical adjustments in the period 2008 to 2017, and for those years the December correlation coefficient is similar to the other months (Figure 5, right). But there are 58 statistical adjustments from 1989 to 2006, all of which will reduce the December correlation for the years they touch; for 1979 to 1988 a further 33 statistical adjustments apply, and there the December correlation coefficient falls to -40% (Figure 5, right). The low correlation coefficient for February, by contrast, extends from February to July because of the interaction of rainfall with evaporative cooling, which lowers surface temperatures over a period of months and thus lowers the correlation coefficient for the UAH – BOM comparison.

The years in which seasonal changes are made are shown in Figure 9. There is a peak in adjustments in the period 1993 to 1998, when the automatic weather stations (AWS) replaced mercury and alcohol thermometers.





This would add to the loss of correlation between lower troposphere and near surface temperatures.

Conclusion

There is a clear connection between the loss of correlation between UAH and BOM temperatures and the increasing adjustments seen in the ACORN-SAT temperatures. The sources of the differences are likely to be instrument changes and, particularly, statistically derived temperature step changes.

The analysis shows that the homogenization process applied to the construction of the Australia-wide temperature is probably adding to the flaws in the datasets rather than correcting for them.

It would be useful to see whether improvements are possible by excluding statistically derived shifts and taking a more careful approach to step changes. Further, a comparison with the USA lower-48 near-ground and troposphere temperatures might suggest further improvements.

BACKGROUND:

The UAH trend is slightly higher than the BOM trend, but the difference is not significant (as seen in last week's post on this topic)

BOM annual temperatures are averaged from 1979 to 2017 and normalized to the UAH average, a -0.33 °C adjustment. The temperature increases are:

UAH 0.176 +/- 0.036 °C per 10 years

BOM 0.154 +/- 0.048 °C per 10 years

There is no significant difference in trends at 0.022 +/- 0.030 °C per 10 years.
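The trend-difference test quoted above can be sketched as follows. Because the two records share most of their year-to-year variability, the uncertainty on the trend gap is best estimated from the difference series itself rather than by adding the two trend uncertainties in quadrature. The series here are synthetic placeholders with the quoted trends built in, not the real anomalies.

```python
# Synthetic sketch: OLS trend of the difference series and its standard error.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1979, 2018)
t = (years - years.mean()) / 10.0            # time in decades, centred
uah = 0.176 * t + rng.normal(0, 0.15, t.size)   # quoted trends + placeholder noise
bom = 0.154 * t + rng.normal(0, 0.15, t.size)

def trend_and_se(y, x):
    """Ordinary least squares slope and its standard error."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    sxx = ((x - x.mean()) ** 2).sum()
    se = np.sqrt(resid @ resid / (x.size - 2) / sxx)
    return slope, se

diff_slope, diff_se = trend_and_se(bom - uah, t)
print(f"trend difference = {diff_slope:+.3f} +/- {diff_se:.3f} degC per decade")
```

A difference trend smaller than about twice its standard error, as in the quoted 0.022 +/- 0.030 °C per decade, is consistent with no significant divergence between the two records.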

It should not come as any surprise,

That Met. Offices homogenize,

To let data read high,

So that temps. will comply,

With what governments authorize.

–Ruairi
