In 1999, Paul Krugman, winner of the 2008 Nobel Prize in Economics, published The Return of Depression Economics, a sort of catalog of the catastrophic failures in the industrialized economies of Asia and Latin America throughout the 1990s. Between 1991 and 1998, severe financial crises struck Japan, Russia, Brazil, Mexico, Thailand, and Indonesia. Although the trigger for crisis was slightly different in each instance, the commonality was that when things went wrong, they began to spiral out of control very quickly.

Upon reading Krugman's work, one is hardly surprised by the current economic crisis. But if this crisis was so foreseeable, why did so few people foresee it?

Let me divert your attention momentarily to something more important: baseball. Each year, baseball teams bid against one another for free agents — players like CC Sabathia, Mark Teixeira, and A. J. Burnett, the trio of stars whom the (apparently recession-proof) New York Yankees have signed for more than $400 million. One might assume that with that sort of money at stake, teams would do their homework. After studying the free-agent bidding process for many years, however, I have concluded that this money is not generally well spent. Teams tend to discount risk, particularly the risk of injury. And they tend to place too much emphasis on recent performance as opposed to a player's longer track record, invariably overpaying for a player who had a career year.

These contracts are one manifestation of a phenomenon known as recency bias — the tendency to place too much weight on recent events. Think of the gambler who doubles her bet at the blackjack table because she's won her past couple of hands — her odds haven't changed any, but her perception of them has.

The invisible hand of the market is supposed to uncover such inefficiencies. If home buyers are obviously overpaying for properties, for instance, a well-informed investor like a large hedge fund ought to be shorting mortgage stocks. If the American economy is weakening, possibly into recession, a venture-capital firm ought to be liquidating its assets. A few prescient investors, certainly, did exactly these sorts of things. But for the most part, the economy was caught with its invisible hand in the cookie jar. Could recency bias have been the culprit?

"Most economists are so caught up in the theory du jour that they fail to study economic history," Maggie Mahar, author of Bull! A History of the Boom and Bust, 1982-2004, wrote in an e-mail to me. "The most successful investors I have ever known were steeped in market history. History didn't mean 'what's happened in the past 15 years since I have been on Wall Street' — it meant what has happened since World War II."

I decided to set up a rudimentary statistical experiment. Suppose it's January 2008 and you're an investor — or an economist — trying to forecast the probability of a major downturn in the United States economy. We'll define such a downturn as occurring any time real GDP falls at an annualized rate of 4 percent in any one quarter, which is about equal to the decline experienced in the fourth quarter of 2008. Between 1947 and 2007, quarterly GDP fell by this amount on eleven separate occasions.

About the simplest economic model one can build is to assume that fluctuations in GDP occur randomly and follow a normal bell-curve distribution. From there, it is reasonably straightforward to calculate the percentage chance of a "crash" — a 4 percent decline in GDP — in any given quarter.

If you had done this calculation in January 2008, using data from the past sixty years — roughly since the end of World War II — you would have estimated the chance of a crash in the upcoming quarter to be 3.17 percent, implying about one crash for every eight years of economic activity. Not exactly an everyday occurrence, but certainly something you would need to be prepared for.

But suppose instead that your time horizon is shorter. You decide to look only at data from the past twenty years — or essentially since Alan Greenspan took over as chairman of the Federal Reserve in 1987. With that time horizon, the risk of a crash appears to be very low indeed. In fact, you would assess the probability to be just 0.04 percent — meaning one crash for every 624 years. This deceptively simple choice of assumptions, then, turns out to make a huge amount of difference. If you'd looked at data from the mid-eighties onward, instead of from World War II onward, you'd have underestimated the risk of a crash by a factor of about 80.
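The mechanics of this crash-risk calculation can be sketched in a few lines of Python. The sketch below is mine, not the article's actual code, and the mean and standard-deviation parameters are illustrative assumptions chosen to roughly reproduce the two windows described above (the article does not report its fitted values); the point is only to show how the same normal-distribution model yields wildly different tail probabilities depending on the estimation window.

```python
import math

def normal_cdf(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def crash_probability(mean_growth, sd_growth, threshold=-4.0):
    """P(quarterly annualized real GDP growth <= threshold)
    under a normal model with the given mean and std. dev. (in percent)."""
    z = (threshold - mean_growth) / sd_growth
    return normal_cdf(z)

# Illustrative parameters, NOT actual GDP estimates: the post-war sample
# has higher average growth but much higher volatility; the calmer
# Greenspan-era sample pushes a -4 percent quarter far into the tail.
p_postwar = crash_probability(mean_growth=3.4, sd_growth=4.0)   # ~1947-2007
p_recent  = crash_probability(mean_growth=3.0, sd_growth=2.1)   # ~1987-2007

print(f"60-year window: {p_postwar:.2%} per quarter, "
      f"about one crash every {1 / (4 * p_postwar):.0f} years")
print(f"20-year window: {p_recent:.3%} per quarter, "
      f"about one crash every {1 / (4 * p_recent):.0f} years")
```

With these made-up but plausible parameters, the first probability lands near 3 percent per quarter and the second near four-hundredths of a percent, an order-of-magnitude gap of the sort the article describes: shrinking the window does not change the economy, only the model's memory of it.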

Moreover, if you were to conduct this "crash-risk" calculation retroactively and graph it over time, you'd identify a couple of inflection points at which the potential impact of recency bias increases. Using a twenty-year time horizon, your perceived risk of a crash would fall dramatically as of about 1995, once you started to "forget" about the oil crisis of the mid-1970s — this happens to coincide with the beginnings of the dot-com bubble. Then a few years later, as of 2001 or 2002, the risk of a crash would fall almost to zero, as the economic turmoil of the early 1980s begins to be written off by investors. This happens to coincide with the start of the housing bubble.

It doesn't necessarily take an unforeseeable combination of events, then, to precipitate a market crash like the one we're now experiencing; a short memory span would suffice.

Still, market crashes do not always precipitate economic catastrophe. The crash of 1987 was hardly felt by the broader economy, and the dot-com bust was weathered relatively well. The difference today might have been that the crash was triggered by a collapse of housing prices, something that had ramifications far deeper down the wealth pyramid. (As of 2004, the top 10 percent of American income-earners held about 78 percent of the value of stocks but just 34 percent of home equity.)

Some economists certainly understood that the housing bubble was occurring. But there were few recent examples to draw from in terms of gauging its impact on the broader economy. (The last precipitous decline in American housing prices came during World War I.) What the profession might have done, then, was to overestimate the strength of the American economy's immune system.

There are two seemingly contradictory conclusions to this parable. First, economists and investors ought to be wary of unknown unknowns — what Nassim Nicholas Taleb calls "black swans." The feedback mechanisms of the economy are sufficiently complex that no amount of foresight or sound fiscal policy will be able to entirely stave off the possibility of a crash, no matter how much stability the economy seems to have achieved. And second, they also need to remember just how long the long run is.

In fact, perhaps our recency bias has led us to conclude that the situation today is worse than it actually is. Housing prices are significantly off their peaks, for instance, but have still increased by roughly 20 percent since January 2000, after adjustment for inflation. And we remain wealthier now than we were at almost any other point in the past (per capita disposable income is about 18 percent higher than it was a decade ago). If the good times are never as good as they seem, neither, perhaps, are the bad ones so bad.

Nate Silver runs the political-prediction Web site FiveThirtyEight.com and is an analyst and writer for Baseball Prospectus.
