There’s a mini-debate with important economic implications going on over the use of a statistical technique called the Hodrick-Prescott filter. Tim Duy has a round-up and some thoughts of his own. But I thought I should weigh in, partly because I encountered this very same issue way back when, in my work on Japan (pdf), and partly because what may look like a debate about statistical technique is actually a crucial debate about the nature of our ongoing economic disaster.

So, the HP filter is a technique that is supposed to extract underlying trends from data in which there is a lot of short-term fluctuation around the trend. To do this, it “smooths” the data — roughly speaking, it takes a weighted average over a number of years. This smoothed measure is then supposed to represent the underlying trend.
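To make the mechanics concrete, here’s a minimal sketch of the filter in Python — my own illustration, not anything from the debate itself. The HP trend solves a penalized least-squares problem: it trades off fitting the data against smoothness of the trend, with the trade-off controlled by a parameter λ (conventionally 1600 for quarterly data).

```python
import numpy as np

def hp_filter(y, lamb=1600.0):
    """Split a series into an HP trend and a cycle.

    The trend minimizes sum((y - trend)**2) plus lamb times the sum of
    squared second differences of the trend.  The first-order condition
    is the linear system (I + lamb * K'K) trend = y, where K is the
    second-difference operator.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    # K has shape (T-2, T); row t computes trend[t] - 2*trend[t+1] + trend[t+2]
    K = np.diff(np.eye(T), n=2, axis=0)
    trend = np.linalg.solve(np.eye(T) + lamb * K.T @ K, y)
    return trend, y - trend
```

On a perfectly linear series the smoothness penalty is zero, so the filter returns the series itself as the trend and a zero cycle; raising λ forces the trend closer to a straight line, lowering it lets the trend chase the data.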

When applied to business cycles, the HP filter yields a smoothed measure of real GDP, which is then taken to represent the economy’s underlying potential; gaps between actual output and this smoothed measure are read as temporary, unsustainable deviations from potential.

And what’s happening now, as Tim Duy points out, is that some people — including Fed officials — are using this kind of filter to argue that the US economy is already operating near potential, so that there is no reason to pursue expansionary monetary and fiscal policy.

What’s wrong with this view? The answer is that a statistical technique is only appropriate if the underlying assumptions behind that technique reflect economic reality — and that’s almost surely not the case here.

The use of the HP filter presumes that deviations from potential output are relatively short-lived and tend to be corrected fairly quickly. This is arguably true in normal times, although even then, I would argue, the main reason for convergence back to potential is that the Fed gets us there rather than some “natural” process.

But what happens in the aftermath of a major financial shock? The Fed finds itself up against the zero lower bound; it is reluctant to pursue unconventional policies on a sufficient scale; fiscal policy also gets sidetracked. And so the economy remains below potential for a long time.

Yet the methodology of using the HP filter basically assumes that such things don’t happen. Instead, any protracted slump gets interpreted as a decline in potential output! Here’s the chart I made way back in 1998 for the 1930s:

Yep: the HP filter “decided” that the US economy was back at potential by 1935. Why? Because by construction the filter folds any sustained slump into its estimate of the economy’s potential, so it automatically read the Great Depression as a protracted decline in potential output. Strange to say, however, it turned out that there was in fact a huge amount of excess capacity in America, needing only an increase in demand to be put back into operation.
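This failure mode is easy to reproduce on made-up data. Here’s an illustration with simulated numbers — not the 1930s series — in which a stylized log-GDP path suffers a persistent 8 percent slump. The filter reclassifies most of the slump as a fall in trend, so the measured output gap closes long before the true one does.

```python
import numpy as np

def hp_trend(y, lamb=1600.0):
    # HP trend via the normal equations (I + lamb * K'K) trend = y,
    # where K is the second-difference operator.
    T = len(y)
    K = np.diff(np.eye(T), n=2, axis=0)
    return np.linalg.solve(np.eye(T) + lamb * K.T @ K, y)

# Stylized quarterly "log GDP": steady 0.5% growth per quarter, then a
# shock at t=60 opens an 8% gap that closes only very slowly.
T = 120
potential = 0.005 * np.arange(T)
gap = np.zeros(T)
gap[60:] = -0.08 * np.exp(-0.02 * np.arange(T - 60))  # slow recovery
y = potential + gap

trend = hp_trend(y)
measured_gap = y - trend  # what the filter "sees" as the output gap

# The filter attributes most of the persistent slump to "potential":
# by the end of the sample the measured gap is near zero even though
# the true gap is still sizable.
print(f"true gap at end of sample:        {gap[-1]:.3f}")
print(f"HP-measured gap at end of sample: {measured_gap[-1]:.3f}")
```

The design point is the one in the text: the λ=1600 penalty only lets the filter treat deviations lasting a few years as “cycle,” so anything more persistent — a depression, a liquidity-trap slump — gets booked as a decline in potential by assumption.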

It seems totally obvious to me that people who are now using HP filters to argue that we’re already at full employment are making exactly the same mistake. They have, in effect, assumed their answer without realizing it — by using a statistical technique that only works if prolonged slumps below potential GDP can’t happen.

As always, statistical techniques are only as good as the economic assumptions behind them. And in this case the assumptions are just wrong.