The wonderful thing about forecasts is that they all sound very profound



It’s that time of year again. Time for you to make your predictions for 2013.

You’re kidding, right? You’re asking an economist for predictions?



Just my little joke. But surely you’re not a proper economist if you can’t make a few predictions. Isn’t that the whole point of the economics profession – to make dozens of mutually contradictory forecasts with impunity?

Well, the impunity is a topic worth discussing. But the economics profession could do with a few more disagreements, I think. In 1995, FT columnist John Kay examined the record of British economic forecasters from 1987 to 1994. He discovered that they all tended to say much the same thing. The only dissenter was reality: economic growth often fell outside the range of all 34 forecasts.



So economists are terrible forecasters. What else is new?

It isn’t just economists who are terrible forecasters. Take the quantitative analysts responsible for Goldman Sachs’s notorious “25 standard deviation” episode – presumably physicists or mathematicians.



25 standard deviation?

At the beginning of the financial crisis, the chief financial officer of Goldman Sachs explained that the firm was seeing “25 standard deviation moves, several days in a row” – a statement that, translated into English, means “according to our models, what we’re seeing is very unlucky”.



How unlucky?

Oh, the sort of bad luck you see once every 28, 900, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000 years, given certain assumptions about what Goldman might have meant. For reference the universe is about 14,000,000,000 years old. The alternative to the “very unlucky” hypothesis, of course, is that the quantitative models didn’t produce very good forecasts.
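Under the common reading of that quote – a one-day move 25 standard deviations from the mean, assuming normally distributed returns – the arithmetic behind numbers of that size is easy to sketch. The figure you get depends on your assumptions (one-tailed or two-tailed, how many trading days in a year), so the sketch below, with illustrative names, will not necessarily reproduce the column’s exact number:

```python
import math

# Illustrative sketch: tail probability of a one-day return 25 standard
# deviations below the mean, assuming returns are normally distributed.
sigmas = 25.0

# One-tailed Gaussian tail: P(Z < -25) = erfc(25 / sqrt(2)) / 2,
# on the order of 1e-138 -- still comfortably representable as a double.
p_one_day = math.erfc(sigmas / math.sqrt(2)) / 2

# Expected wait for one such day, assuming 252 independent trading days
# per year (a common convention; other choices shift the answer a little).
trading_days_per_year = 252
years_between_events = 1 / (p_one_day * trading_days_per_year)

print(f"P(25-sigma day) ~ {p_one_day:.1e}")                  # ~3e-138
print(f"expected wait   ~ {years_between_events:.1e} years")  # ~1e135
print(f"age of universe ~ 1.4e10 years")
```

Whichever convention you pick, the expected wait exceeds the age of the universe by well over a hundred orders of magnitude – which is the point: the models, not the luck, are the more plausible suspect.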



Well, that’s a forecast so bad that I can’t believe an economist wasn’t involved somewhere.

You may be right. But I can give you another example: the 300-odd experts recruited by Philip Tetlock, the psychologist, for his epic study of forecasting in political science. Prof Tetlock’s conclusions are wide-ranging and painstaking, but if I can be forgiven an excessively brief summary, he finds that all sorts of people with plausible claims to expertise – diplomats, political advisers, journalists and academics – produce lame forecasts of political and economic events.



Nate Silver seems to be able to forecast just fine.

Well, yes, notwithstanding the politically motivated “Nate Silver can’t add up” school of criticism, Mr Silver, and other statisticians such as Drew Linzer and Sam Wang, successfully forecast the outcome of the US elections in some detail. Contrasted against a background of bloviation, it was impressive. But if psephology is Exhibit A in the Museum of Successful Social Science Forecasts, let’s reflect on how modest our ambitions must have become: US elections are frequently repeated, with behaviour that shows considerable historical persistence, and an astonishing amount of detailed quantitative data are available. The elections take place at a fixed date, according to well-understood rules, and with a narrowly defined space of possible outcomes. It’s easy to see that forecasting a win for Barack Obama, while better than forecasting a win for Mitt Romney, is not quite as hard as successfully predicting if and when Greece will leave the eurozone.



You’re pretty quick with the excuses.

No excuses. We just can’t see into the future. I don’t think that’s a surprise, nor an embarrassment. The question is why there’s such a hunger for social science predictions, when the practice is so transparently pointless.



It’s a test of expertise.

If so, then monkeys are as expert as professors of political economy. I wouldn’t want to be quite so cynical. I think forecasting in a complex world is a poor test of expertise because luck is the overwhelming success factor.



So why do we love predictions?

No idea. Here’s one guess: saying “the UK economy will recover strongly in 2012” or “President Assad will be out of office by June” seems to compress a vast amount of expertise and analysis into a few words.



But the words are probably meaningless.

Yes. But it’s Christmas. Actually studying the situation in detail is far too much like hard work. The wonderful thing about a forecast is that both the forecaster and his audience feel that something profound has been expressed. And nobody will remember the forecast anyway.

Also published at ft.com.