NOTE: for a brief, non-technical summary of this post, see the UPDATE at the end.

Real data are the combination of signal and noise. By noise I don’t just mean measurement error. I mean the stochastic part of the process. That includes naturally occurring noise in the system itself — those ubiquitous wiggles up and down and up and down and down and up, that never cease but never really get anywhere. They’re not part of the trend, they’re noise. If you want to know what the trend is then you have to account for the noise.

If you claim that “ice cover is stabilising,” then you better be talking about the trend. You damn well better not be basing that conclusion on the effect of those ubiquitous wiggles that never cease but never get anywhere.



Here, for instance, is some data for Arctic sea ice area according to Cryosphere Today (for convenience I’ve transformed them to 0.02-year averages — roughly weekly — which won’t have any notable effect on the analysis which follows):

I’ve also shown a lowess smooth to indicate the trend. In addition to the trend, the data show noise, with lots of up-and-down variation. It’s not just point-to-point jitter; the noise is persistent, which reflects the fact that it shows autocorrelation. But it’s still noise: it still shows nonstop wiggles that don’t get anywhere and have nothing to do with the trend, or with whether or not “ice cover is stabilising.”
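The smoothing step can be sketched in a few lines. This is not the actual analysis code: it's a minimal numpy sketch that uses a Gaussian kernel smooth as a simple stand-in for lowess, and synthetic data in place of the Cryosphere Today series.

```python
import numpy as np

def kernel_smooth(t, y, width):
    """Gaussian kernel smooth: local weighted mean at each time point."""
    out = np.empty_like(y)
    for i, tc in enumerate(t):
        w = np.exp(-0.5 * ((t - tc) / width) ** 2)
        out[i] = np.sum(w * y) / np.sum(w)
    return out

rng = np.random.default_rng(0)
t = np.arange(1979.0, 2013.0, 0.02)                   # ~weekly steps (0.02 yr)
y = -0.05 * (t - t[0]) + rng.normal(0, 0.3, t.size)   # toy declining anomaly

trend = kernel_smooth(t, y, width=2.0)                # a "slow" smooth
resid = y - trend                                     # the noise we'll study
print(round(resid.std(), 2))
```

A "faster" smooth is the same call with a smaller `width`; the residuals from either version are what feed the noise estimation below.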

We can even see some of those more persistent wiggles if we smooth the data with a “faster” lowess smooth (shorter time constant) like this:

One might be tempted to think that those wiggles represent “cyclic variation” with period around 5.4 years. One would be mistaken.

We can use the residuals from the first smooth (in the first graph) to estimate the parameters of the noise — in particular, its size and its autocorrelation. We could, for that matter, use the residuals from the second, “faster” smooth, which would likely underestimate the impact of the noise, but we’ll do it both ways anyway.
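Estimating those two parameters is simple. Here is a sketch, with a synthetic AR(1) series of known lag-1 autocorrelation standing in for the actual residuals, so we can check that the estimator recovers the right value.

```python
import numpy as np

def noise_params(resid):
    """Estimate standard deviation and lag-1 autocorrelation of residuals."""
    r = resid - resid.mean()
    sd = r.std(ddof=1)
    phi = np.dot(r[:-1], r[1:]) / np.dot(r, r)   # lag-1 autocorrelation
    return sd, phi

# synthetic AR(1) check series with known phi = 0.6
rng = np.random.default_rng(1)
phi_true, n = 0.6, 20000
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for i in range(1, n):
    x[i] = phi_true * x[i - 1] + e[i]

sd, phi = noise_params(x)
print(round(phi, 2))
```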

Then we’ll create some purely artificial noise, which we know (by design) has no trend at all, certainly no change in its trend. Nothing. Nada. Zip. We’ll create AR(1) noise with the same lag-1 autocorrelation, and the same standard deviation, as the residuals from the smoothed sea ice area anomaly data — two versions, one for the “slow” smooth and one for the “fast” smooth. If we apply various methods of estimating the trend to the artificial data, we can get an idea how uncertain their estimates are — because any nonzero trend (or change in trend) indicated for the artificial data isn’t real. It’s just response to the noise.
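Generating matched AR(1) noise is straightforward. A sketch, taking the target marginal standard deviation and lag-1 autocorrelation as inputs (the innovation standard deviation is scaled so the marginal standard deviation comes out right):

```python
import numpy as np

def ar1_noise(n, sd, phi, rng):
    """AR(1) noise with marginal std dev `sd` and lag-1 autocorrelation `phi`."""
    # innovation sd chosen so the marginal std dev equals sd
    eps = rng.normal(0, sd * np.sqrt(1 - phi ** 2), n)
    x = np.empty(n)
    x[0] = rng.normal(0, sd)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + eps[i]
    return x

rng = np.random.default_rng(2)
x = ar1_noise(50000, sd=0.25, phi=0.8, rng=rng)
print(round(x.std(), 2), round(np.corrcoef(x[:-1], x[1:])[0, 1], 2))
```

The parameter values here are placeholders; in the analysis proper they would come from the residuals of the slow and fast smooths.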

How shall we estimate the trend? Here’s one way — and not a bad one at all. We’ll apply a Gaussian filter/derivative filter to the data. The derivative filter just computes the ratio of the data differences to the time differences, simulating “differentiation” and estimating the rate of change. Those are bound to show a lot of wild point-to-point fluctuation, so the Gaussian filter will smooth that out and give a sensible estimate.

We do need to be aware, however, that a derivative filter is a high-pass filter and a Gaussian smooth is a low-pass filter, so the two in tandem form a bandpass filter. Fluctuations near the center of the passband will look strong whether they’re meaningful or not, and unless we’re aware of this we’ll be tempted to conclude that there’s “cyclic variation” when there isn’t.
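That bandpass behavior is easy to verify numerically. The combined impulse response of differencing followed by Gaussian smoothing is (approximately) the derivative of a Gaussian; its amplitude response vanishes at zero frequency, peaks near 1/(2*pi*sigma) cycles per year, and falls off again at high frequency. For a 1-year timescale the passband center corresponds to a period of several years.

```python
import numpy as np

sigma, dt = 1.0, 0.02                 # 1-year Gaussian, ~weekly sampling
t = np.arange(-10.0, 10.0, dt)
# impulse response of derivative filter + Gaussian smooth: derivative of a Gaussian
kernel = -t / sigma**2 * np.exp(-t**2 / (2 * sigma**2))

H = np.abs(np.fft.rfft(kernel))       # amplitude response
f = np.fft.rfftfreq(t.size, d=dt)     # frequency, in cycles per year
f_peak = f[np.argmax(H)]
print(f_peak)                         # passband center, near 1/(2*pi*sigma)
```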

We also need to choose a timescale for the Gaussian filter. Let’s use a 1-year filter. And, although Gaussian filters can be applied right up to the edges of the data’s time span, there are “edge effects” which degrade precision — so we’ll chop off two timescales from the beginning and end just to keep things “clean.”
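Putting the recipe together (raw difference ratios, Gaussian smoothing with a 1-year timescale, trimming two timescales from each end), here is a sketch, checked on a pure linear trend, where the estimated rate should simply equal the slope:

```python
import numpy as np

def rate_estimate(t, y, sigma=1.0):
    """Rate of change: point-to-point difference ratios, smoothed by a Gaussian
    of timescale sigma, with two timescales trimmed from each end."""
    tm = 0.5 * (t[:-1] + t[1:])                  # midpoint of each time step
    dydt = np.diff(y) / np.diff(t)               # raw derivative estimates
    rate = np.empty_like(tm)
    for i, tc in enumerate(tm):
        w = np.exp(-0.5 * ((tm - tc) / sigma) ** 2)
        rate[i] = np.sum(w * dydt) / np.sum(w)   # Gaussian-weighted average
    keep = (tm >= tm[0] + 2 * sigma) & (tm <= tm[-1] - 2 * sigma)
    return tm[keep], rate[keep]

# sanity check: for a pure linear trend the estimated rate is the slope
t = np.arange(0.0, 30.0, 0.02)
y = -0.05 * t
tt, r = rate_estimate(t, y)
print(round(r.mean(), 3))
```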

If we apply that rate-of-change estimation method to the actual sea ice area anomaly data from Cryosphere Today, we get this:

One might be tempted to think that all those fluctuations, those smooth ascents and descents, are real signs of change in the trend of sea ice area. But it would be downright foolish not to account for the fact that there will be fluctuations in the estimated rate, simply due to noise — those ubiquitous fluctuations that never get anywhere and have nothing to do with whether or not “ice cover is stabilising.”

The mean rate over the entire time span is a loss of about 50,000 km^2/yr. We’re most interested in how much variation the rate shows, so let’s subtract the mean rate to show just the variations from the mean. (This makes the graph look the same, but the zero point is different.)

We can estimate the size of the irrelevant noise fluctuations by applying the same rate-estimation method to the artificial data. For the “slow” residuals, the very first artificial data set (the only one I bothered to create) gave this:

It’s probably more informative to compare the variation in rate from the artificial, trendless data directly to the variation in rate from the real data:

Result: The variations in rate-of-change of sea ice area anomaly data are no bigger, no faster, no sharper, no more or less smooth, in fact not really distinguishable from the variations in rate-of-change of random noise with similar size and autocorrelation.
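One can reproduce this result in miniature: generate trendless AR(1) noise and apply the derivative-of-Gaussian rate estimator, and the estimated rate wiggles substantially even though the true rate is exactly zero. The noise parameters below are illustrative placeholders, not the values estimated from the actual residuals.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, sigma = 0.02, 1.0
sd, phi = 0.3, 0.9            # illustrative noise parameters
n = 1700                      # about 34 years of ~weekly values

# trendless AR(1) noise: no trend, no change in trend, by construction
x = np.empty(n)
x[0] = rng.normal(0, sd)
eps = rng.normal(0, sd * np.sqrt(1 - phi ** 2), n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + eps[i]

# rate of change via convolution with the derivative of a normalized Gaussian
tk = np.arange(-4 * sigma, 4 * sigma + dt / 2, dt)
g = np.exp(-0.5 * (tk / sigma) ** 2) / np.sqrt(2 * np.pi * sigma ** 2)
dg = (tk / sigma ** 2) * g * dt
rate = np.convolve(x, dg, mode='valid')

print(round(rate.std(), 3))   # size of the rate wiggles from pure noise
```

The printed standard deviation is the size of the “variations in rate” that noise alone produces; any real variation smaller than this is indistinguishable from noise.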

We can do the same with artificial noise generated using the parameters estimated from the residuals to the “fast” smooth. The very first artificial data set (the only one I bothered to create) gave this:

Result: The variations in rate-of-change of sea ice area anomaly data are no bigger, no faster, no sharper, no more or less smooth, in fact not really distinguishable from the variations in rate-of-change of random noise with similar size and autocorrelation.

Conclusion: There is no evidence that the variations in rate-of-change of sea ice area anomaly data estimated using this method are anything but the result of random noise. The mean rate-of-change is meaningful; sea ice area has certainly declined. But the variations mean nothing, because they’re no bigger than the uncertainty in those estimates.

If you didn’t bother to estimate, or even consider the existence of, uncertainty in your estimated rate of change, then you’d probably reach false, even downright ridiculous, conclusions. Like “ice cover is stabilising.”

You’d be mistaken. If you also declared “cyclic variation” which isn’t real, and said in no uncertain terms that the scientists who actually study sea ice need to get their act together because they’re missing the really important stuff, then you’d be thought a fool. Because of your hubris.

Why does the given method show such large unrealistic fluctuations? The Gaussian+derivative filter is like applying this filter to the original data:

Note that it reaches its maximum and minimum one year before and one year after the target time, because that’s the timescale of the Gaussian filter. Therefore it’s not exactly equal to, but is roughly akin to, estimating the rate of change at time t by taking the average around time t+1 years, subtracting the average around time t−1 years, then dividing by 2.
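That claim about the extrema is easy to confirm: the derivative of a Gaussian with timescale sigma has its extreme weights exactly one sigma either side of the target time.

```python
import numpy as np

sigma = 1.0                               # 1-year Gaussian timescale
t = np.linspace(-4.0, 4.0, 8001)
kernel = -t / sigma**2 * np.exp(-t**2 / (2 * sigma**2))  # derivative of Gaussian

# the effective weights peak one timescale either side of the target time
t_extreme = abs(t[np.argmax(np.abs(kernel))])
print(t_extreme)                          # approximately equal to sigma
```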

That’s a valid way to estimate the rate of change. But when there’s noise, especially the kind of autocorrelated noise shown by sea ice area, it’s not a very precise way. One can hardly expect such an estimate, based as it is on such a short timescale, not to show large uncertainty due to random noise — uncertainty as large or larger than the rate-of-change itself.

There’s a lot more to Arctic sea ice area data, and sea ice extent data, and sea ice volume data. There are better ways to compute anomaly which account for some of the recent changes, and better enable us to estimate the properties of the noise as well as see just how unstable the Arctic ice pack is. But that will be the topic of another post.

UPDATE

I get the impression this post has enough math to sail right “over the head” of some readers. So, here’s the brief not-too-technical summary.

Bottom line: take random noise with the same characteristics (size and autocorrelation) as the noise in real sea ice area data. Apply the same rate-estimation method used by Greg Goodman. What you get is wiggles just like he got — same size, same time scale. It’s because the noise — all by itself — will create them.

Conclusion #1: the wiggles he found are no evidence at all of any changes in the rate of sea ice loss.

Conclusion #2: Greg Goodman not only didn’t estimate the uncertainty in his analysis, he ignored its very existence.