A recently published paper by Ludecke et al. in Climate of the Past claims, as its main result, that “the climate dynamics is governed at present by periodic oscillations.”



The authors took 6 long temperature records from central Europe, standardized them (dividing each record’s anomalies by its standard deviation), and averaged them. They then computed annual mean values. That average was subjected to Fourier analysis in order to identify what they call “significant” frequencies. They selected 6 frequencies, all with periods of at least 30 years, with which to model the temperature data. Comparing the model to a 15-year moving average (boxcar filter) of the data gives a correlation coefficient of 0.961. Presto! — “the climate dynamics is governed at present by periodic oscillations.”
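For the curious, the recipe can be sketched in a few lines. This is a hedged reconstruction, not their actual code: the six station records are stood in for by synthetic trend-plus-noise series, and the record length and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 232                                      # roughly the record length in years
years = np.arange(1780, 1780 + n)

# Hypothetical stand-ins for the six station records: a common trend plus noise.
records = [0.005 * (years - years[0]) + rng.normal(0, 1, n) for _ in range(6)]

# Standardize each record (divide its anomalies by its standard deviation), then average.
standardized = [(r - r.mean()) / r.std() for r in records]
m6 = np.mean(standardized, axis=0)

# Fourier-analyze and keep only frequencies with periods of at least 30 years.
coeffs = np.fft.rfft(m6)
freqs = np.fft.rfftfreq(n, d=1.0)            # cycles per year
slow = np.where((freqs > 0) & (1.0 / freqs >= 30.0))[0]

# Retain the six strongest of those components and rebuild the "model".
six = slow[np.argsort(np.abs(coeffs[slow]))[::-1][:6]]
filt = np.zeros_like(coeffs)
filt[0], filt[six] = coeffs[0], coeffs[six]
model = np.fft.irfft(filt, n)
```

The model is, by construction, a sum of six sinusoids plus a constant — nothing more.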

If you want to know a little more about the data itself, the Rabett has some info on that.

First, here’s my short opinion of this paper: Rubbish.

Let me elaborate. All they’ve done is model the data as a low-frequency Fourier series, then compare that to the low-frequency (boxcar-filtered) version of the same data. Of course it gives a good match, especially since the actual trend present in this data is dominated by low-frequency fluctuation. In essence, all they’ve shown is that an arbitrary function can be modelled by a Fourier series. Really. Truly. That’s all.
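The circularity is easy to demonstrate with any trending series at all. A minimal sketch (the seed and the series are arbitrary, not the paper’s data — a random walk just happens to have the red, low-frequency-dominated spectrum that makes the point):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 232
x = np.cumsum(rng.normal(0, 1, n))            # a random walk: trending, red spectrum

# "Model": keep only Fourier components with periods of 30 years or longer.
c = np.fft.rfft(x)
f = np.fft.rfftfreq(n, d=1.0)
lowpass = (f == 0) | ((f > 0) & (1.0 / f >= 30.0))
model = np.fft.irfft(np.where(lowpass, c, 0), n)

# Compare to the 15-year boxcar smooth of the same data (centered, width 15).
smooth = np.convolve(x, np.ones(15) / 15, mode="valid")   # length n - 14
r = np.corrcoef(model[7:-7], smooth)[0, 1]
print(r)   # high -- both sides are low-pass versions of the same series
```

A low-pass Fourier fit compared against a low-pass smooth of the same data is all but guaranteed a high correlation whenever the data carry a trend.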

They base their confidence in the non-random nature of their fit on this:



The Pearson correlation of the smoothed record SM6 with the reconstruction RM6 (black and red curves in Fig. 6) has a value of r = 0.961. In order to ascertain the statistical confidence level of this accordance, we assumed a null hypothesis and evaluated it by Monte Carlo simulations based on random surrogate records of the same length and the same Hurst exponent (α = 0.58) as M6 generated by a standard method (Turcotte, 1997) (the surrogate records hereafter SU, and the boxcar-smoothed SU over 15 yr hereafter SSU). As the null hypothesis we assumed that the accordance of the reconstruction RM6 with SM6 is caused by chance. We applied 10 000 surrogate records SU. Each of the records was analyzed following the same procedure as for M6. Next, for each surrogate SU the reconstruction was generated that used — again following the procedure as for M6 — six frequencies with the strongest power densities among the first eight frequencies of the DFT without zero padding. Finally, the Pearson correlation of this reconstruction with SSU was evaluated. As a result, among 10 000 SU we found one surrogate record with the maximal r = 0.960, 9 records with r ≥ 0.95, and 53 records with r ≥ 0.94. Therefore, the null hypothesis could be rejected with a confidence level of > 99.9%.
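For what it’s worth, their Monte Carlo test can be sketched as follows. The surrogate generator here is a generic Fourier-filtering method with a power-law spectrum (β = 2H − 1 ≈ 0.16 for H ≈ 0.58) — the exact method of Turcotte (1997) is an assumption on my part, and only 1,000 surrogates are drawn to keep it quick:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 232

def surrogate(n):
    # Fourier-filtering noise with power-law spectrum S(f) ~ f**(-beta);
    # beta = 2H - 1, so H ~ 0.58 gives beta ~ 0.16 (nearly white noise).
    beta = 0.16
    f = np.fft.rfftfreq(n)
    amp = np.concatenate(([0.0], f[1:] ** (-beta / 2)))
    spec = amp * np.exp(2j * np.pi * rng.random(len(f)))
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()

def paper_r(x):
    # Reconstruct from the 6 strongest of the first 8 DFT frequencies,
    # then correlate with the 15-yr boxcar smooth, per the quoted procedure.
    c = np.fft.rfft(x)
    k = 1 + np.argsort(np.abs(c[1:9]))[::-1][:6]   # 6 strongest of k = 1..8
    filt = np.zeros_like(c)
    filt[0], filt[k] = c[0], c[k]
    model = np.fft.irfft(filt, len(x))
    smooth = np.convolve(x, np.ones(15) / 15, mode="valid")
    return np.corrcoef(model[7:-7], smooth)[0, 1]

rs = np.array([paper_r(surrogate(n)) for _ in range(1000)])
print(rs.max())   # the distribution of r under THEIR null: trend-free by construction
```

Notice what the null hypothesis actually is: surrogates that are essentially trend-free noise. That choice of null is the whole problem, as I’m about to show.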



But this totally ignores the fact that there is a trend in the data, and that the trend is low-frequency, so a low-frequency Fourier series will match the actual data far better than it will match a random time series. Really. Truly. They are actually so far removed from reality that they don’t even know what they’re doing.

Allow me to illustrate.

We don’t need to average 6 European temperature records to reproduce their result. Let’s just use one: Hohenpeissenberg, Germany. Here’s the data, annual averages from 1781 through 2011, together with a very slow smooth just to show the very long-term pattern:

Let’s model the Hohenpeissenberg data using Fourier frequencies corresponding to periods at least 30 years long. I don’t need 6 frequencies, I can get by with just 5, and I too can compare that model to a 15-year moving average:

As the graph indicates, I too can get a bitchin’ good correlation coefficient between the multi-frequency model and the moving averages. In fact, at 0.9617 it’s just a teensy-weensy bit better than the 0.961 they reported for their 6-station average.

Let’s try something else. Let’s take the very long-term pattern in the Hohenpeissenberg data — the red line in the very first graph — and add to it plain old random noise with the same standard deviation as the residuals of the Hohenpeissenberg data from that very long-term pattern. This gives us artificial data consisting of a very slow — and not periodic — signal, plus random noise. We’ll see what a Fourier series does when we do not leave out the trend.

We’ll model that artificial data using frequencies corresponding to periods at least 30 years long. I’ll only need 4 frequencies to do the job. Let’s compare that to the 15-year moving average, and compute the correlation coefficient between them. Here it is:
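The experiment is easy to reproduce in spirit. In this sketch the very long-term pattern is stood in for by an arbitrary smooth, non-periodic curve, and the noise level is an assumption rather than the actual Hohenpeissenberg residual standard deviation:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 231                                       # 1781 through 2011
t = np.linspace(0.0, 1.0, n)

# A smooth, slow, decidedly non-periodic "signal" plus plain old white noise.
signal = 3.0 * t**2 - 2.0 * t
x = signal + rng.normal(0.0, 0.3, n)

# Keep the 4 strongest Fourier components with periods of at least 30 years.
c = np.fft.rfft(x)
f = np.fft.rfftfreq(n, d=1.0)
slow = np.where((f > 0) & (1.0 / f >= 30.0))[0]
four = slow[np.argsort(np.abs(c[slow]))[::-1][:4]]
filt = np.zeros_like(c)
filt[0], filt[four] = c[0], c[four]
model = np.fft.irfft(filt, n)

# Correlate the 4-frequency "model" with the 15-year moving average.
smooth = np.convolve(x, np.ones(15) / 15, mode="valid")
r = np.corrcoef(model[7:-7], smooth)[0, 1]
print(r)   # very high -- yet the underlying signal contains no periodicity at all
```

The signal fed in here has no periodic component whatsoever, and the Fourier model matches its smooth beautifully anyway.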

Well well … this time the correlation coefficient is just a hair over a whopping 0.98. I didn’t have to generate 10,000 artificial data sets to surpass their correlation coefficient. Just 1.

Applying the logic of Ludecke et al., “Presto! — the random noise is governed at present by periodic oscillations.”