I’m putting together a syllabus for one of the courses I’ll be teaching this spring – Econ 348, The Great Recession – and in the course of searching for various views about inflation, ended up looking at a dispute from a while back involving some of the usual suspects. Brad DeLong poked fun at John Cochrane for having declared, back in 2009, that “the danger now is inflation.” Cochrane angrily denied that this was of any significance – he only said that it was a danger, he didn’t necessarily predict that it would happen within any particular time frame. (He thereby provided a demonstration of another key fact about our economic debate: nobody ever admits that they were wrong about anything, and nobody changes views in the light of evidence.) Noah Smith, characteristically, tried to find some extenuating circumstances. Etc., etc.

So anyway, retreading this old ground, I found myself thinking about Bayes’s theorem.

It seems to me that Cochrane’s position – he only said it was a danger, not that it would happen at any particular time, so it signifies nothing if it doesn’t happen even after four years have passed – is just untenable in its strong form. If saying that something is a danger carries no implications for the likelihood that it will actually occur, what is the point of saying it? You might as well stand up there and say “Nice day for weather” or sing “Mary had a little lamb.”

No, clearly talking about the danger of inflation was some kind of statement about probabilities – in particular, a statement that the probability of inflation is, according to the speaker’s model of the world, higher than it is in other people’s models of the world. And that means that actual events do or at least should matter – they may not prove that one model is wrong and another is right, but they should certainly affect your assessment of which model is more likely to be right.

In short, it’s a Bayesian thing.

Now, language is often vague here. But let’s do a sort of finger exercise. Imagine John, a finance professor, and Paul, an economist/columnist. (George and Ringo wisely stayed out of the whole thing.) In 2009, John says “the danger now is inflation,” while Paul says “there is little danger of inflation.” So, let’s try to assign probabilities to those statements. I don’t think it’s unfair to imagine that John was giving an 80 percent probability to serious inflation over the next four years, while Paul was giving it only a 20 percent probability.

You, as an outside observer, have no way to judge these guys, so ex ante you give each of them a 50 percent probability of having the right model.

Four years pass, and inflation fails to materialize.

You can think of this as a lottery in which there are two urns – a John urn and a Paul urn – containing black balls representing inflation and white balls representing no inflation, with the John urn containing 80 percent black balls but the Paul urn only 20 percent. (What’s a finance professor urn? A lot, when you take consulting fees into account.) First, a coin is tossed to determine which urn will be used, then a ball is drawn. You know that the ball is white; you now have to estimate the odds that it was drawn from John’s urn.

And the answer is that there’s only a 20 percent chance. Ex ante you considered it equally likely that John and Paul might be right; ex post those odds have shifted to 4 to 1 in Paul’s favor.
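The urn calculation above is just Bayes’s theorem with equal priors, and it’s short enough to sketch in a few lines of Python (the 80/20 likelihoods are the illustrative numbers assumed earlier, not anything either economist actually stated):

```python
# Two models ("urns"), equally likely ex ante.
p_john_prior = 0.5
p_paul_prior = 0.5

# Probability of drawing a white ball (no inflation) under each model:
# John's urn is 80% black (inflation), Paul's is 20% black.
p_white_given_john = 0.2
p_white_given_paul = 0.8

# Bayes's theorem: P(John's model | white ball drawn)
evidence = p_john_prior * p_white_given_john + p_paul_prior * p_white_given_paul
p_john_posterior = p_john_prior * p_white_given_john / evidence

print(p_john_posterior)  # 0.2 -- i.e. odds of 4 to 1 in Paul's favor
```

Changing the priors or the likelihoods changes the posterior, but the qualitative lesson survives: any model that assigned inflation a probability well above its rivals’ should lose credibility when inflation fails to appear.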

The point is that using hedged language doesn’t insulate you from consequences if things don’t turn out the way you were clearly suggesting they would; nor does the true observation that sometimes even the right model makes a wrong prediction get you off the hook. If your model led you to believe that inflation was a “great danger” in 2009, the fact that this danger never came to pass should substantially reduce your belief in that model – and should substantially reduce your credibility if you refuse to revise your beliefs.