Jonathan Falk points to this book excerpt by Michael Lewis, who writes:

A lot of what people did and said when they “predicted” things, Morey now realized, was phony: pretending to know things rather than actually knowing things. There were a great many interesting questions in the world to which the only honest answer was, “It’s impossible to know for sure.” “What will the price of oil be in ten years?” was such a question. That didn’t mean you gave up trying to find an answer; you just couched that answer in probabilistic terms. . . . People who didn’t know Daryl Morey assumed that because he had set out to intellectualize basketball he must also be a know-it-all. In his approach to the world he was exactly the opposite. He had a diffidence about him—an understanding of how hard it is to know anything for sure. The closest he came to certainty was in his approach to making decisions. He never simply went with his first thought. He suggested a new definition of the nerd: a person who knows his own mind well enough to mistrust it.

I recommend reading Lewis’s entire article. There were a bunch of good stories, and some other bits got my attention too.

Like this:

It seemed to him that a big part of a consultant’s job was to feign total certainty about uncertain things. In a job interview with McKinsey, they told him that he was not certain enough in his opinions. “And I said it was because I wasn’t certain. And they said, ‘We’re billing clients five hundred grand a year, so you have to be sure of what you are saying.’” The consulting firm that eventually hired him was forever asking him to exhibit confidence when, in his view, confidence was a sign of fraudulence.

That was before there were TED talks. But same idea: be confident, be strong, fake it till you make it, and, if anybody goes back and checks your predictions, just change the subject.

And this:

A lot of what the Houston Rockets did sounds simple and obvious now: In spirit, it is the same approach taken by algorithmic Wall Street traders, U.S. presidential campaign managers, and every company trying to use what you do on the Internet to predict what you might buy or look at. There was nothing simple or obvious about it in 2006.

It’s funny, though, because these ideas were no secret back in 2006. Bill James became famous in the mid-80s, and by the end of that decade we were using hierarchical models to predict elections. It is true that there wasn’t much general interest in quantitative prediction, and it’s funny how long it all took. When it comes to polling analysis, we wrote our first paper on Mister P back in 1997, but it only really started to hit the public consciousness twenty years later. Or, for an even more accessible example, people have been complaining for decades about football coaches being too conservative on fourth down—and there it’s not hard to get some numbers!—but it’s only in the past few years that the default strategy has changed.

Why did it take so long? One reason is that the best way to improve predictions is not through better modeling but through more information. Lewis writes:

The Rockets began to gather their own original data by measuring things on a basketball court that had previously gone unmeasured. Instead of knowing the number of rebounds a player had, for instance, they began to count the number of genuine opportunities for rebounds he’d had and, of those, how many he had snagged. They tracked the scoring in the game when a given player was on the court, compared to when he was on the bench. Points and rebounds and steals per game were not very useful; but points and rebounds and steals per minute had value.
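The point about per-minute rates is just arithmetic, but it's easy to see with a toy computation. The numbers below are made up for illustration, not actual Rockets data:

```python
# Toy illustration (hypothetical numbers): per-game totals conflate
# production with playing time; per-minute rates separate the two.

players = {
    # name: (points_per_game, minutes_per_game)
    "starter": (20.0, 36.0),
    "bench":   (10.0, 15.0),
}

for name, (ppg, mpg) in players.items():
    per_min = ppg / mpg
    print(f"{name}: {ppg:.1f} pts/game, {per_min:.3f} pts/minute")

# The bench player scores half as many points per game but more per
# minute (0.667 vs 0.556) -- the per-game number hides that entirely.
```

The same adjustment applies to rebounds and steals: divide by opportunity (minutes played, or rebound chances) rather than by games.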

This doesn’t explain the fourth-down-in-football problem. There, the standard explanation is asymmetric incentives: even after correcting for the probabilities, the reputational risk to the coach of going for it, failing to get the first down, and then giving up a score is greater than the gain from going for it, getting the first down, and continuing on to score. The idea is that nobody ever got fired for going by the book. But I don’t know how much I believe that story: coaches do want to win, right?
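The fourth-down tradeoff itself is a one-line expected-value comparison once you have the probabilities. A minimal sketch with invented numbers (the conversion probability and expected-points figures below are hypothetical placeholders, not estimates from real play-by-play data):

```python
# Hedged sketch with made-up numbers: compare expected points from
# going for it on fourth down versus punting.

p_convert = 0.55      # hypothetical chance of converting 4th-and-short
ep_if_convert = 2.5   # expected points if the drive continues
ep_if_fail = -1.5     # opponent takes over in good field position
ep_punt = -0.5        # opponent takes over, but pinned further back

ep_go = p_convert * ep_if_convert + (1 - p_convert) * ep_if_fail
print(f"go for it: {ep_go:+.2f} expected points, punt: {ep_punt:+.2f}")
# go for it: +0.70 expected points, punt: -0.50

# With these numbers going for it wins on expectation, yet coaches
# historically punted. The asymmetric-incentives story says the
# visible cost of a failed attempt outweighs the expected-points gain
# in the coach's own payoff, even if not in the team's.
```

Whether the incentive story holds up is a separate question, but the arithmetic side really is this simple, which is what makes the decades of conservatism puzzling.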

Lewis’s article also features a bunch of interesting ideas on the work done by the people who evaluate prospective draft picks to avoid getting fooled by misleading information—or, perhaps I should say, by inappropriate inferences, as the problem wasn’t so much with the information as with what they were doing with it.