Prediction is a big, big business these days, and even those of us who aren’t explicitly in the prediction business probably do all we can to make sense of the future. For example:

Does your company do marketing research? (If it’s a business of any size and sophistication, the answer is probably yes.)

Do you track the financial pages?

Do you keep abreast of the latest innovations in your industry (or any industry, for that matter)?

Have you factored in economic considerations when trying to decide whether or not to buy a house?

If you have an IRA, have you factored in where you think the damned economy is going in making fund decisions?

If you don’t have an IRA, is it possible that your view of the market was so dire that you decided to dump all your money into savings (or hide it under the mattress)?

Have you ever moved to a particular city in part because it had a better job market than another city?

Did you make (or are you now making, if you’re still in school) curriculum/major/grad school decisions based on your expectations of what the job market was/is going to be like?

If you’ve been lucky enough to have a choice of job offers, did you spend some time evaluating the prospects of the competing businesses (and your prospects in them) before accepting an offer?

If so – and most of you probably answered yes to at least one of these questions – then that’s all part of what I’m calling the prediction business. In some cases we’re talking about companies that are directly about predicting, and in others we’re personally making decisions based on our ability to predict – an ability that often hinges on data produced by companies in the prediction business. In all cases, the more we know about the future (or, put more precisely, the better our information on the factors that will shape future events and the more accomplished our faculties for evaluating that information), the more likely we are to make decisions that succeed in the present and the future, and we all want that.

So, how good are we at predicting? How much of what we think we know is accurate, and how reliable are our techniques for predicting? Perhaps not as good as we’d hope.

So Close, Yet So Far Away

Consider a recent BBC story on efforts to detect terrorists, which was forwarded along by my colleague Whythawk. It starts out with an intriguing premise: what if you had a method that was 90% effective at telling whether or not someone was a terrorist? Not bad, right? But then the analysis takes a nasty left turn.

You’re in the Houses of Parliament demonstrating the device to MPs when you receive urgent information from MI5 that a potential attacker is in the building. Security teams seal every exit and all 3,000 people inside are rounded up to be tested. The first 30 pass. Then, dramatically, a man in a mac fails. Police pounce, guns point. How sure are you that this person is a terrorist?

A. 90%

B. 10%

C. 0.3%

The answer is C – about 0.3%.

Huh?

The article goes on to explain the math:

If 3,000 people are tested, and the test is 90% accurate, it is also 10% wrong. So it will probably identify 301 terrorists – about 300 by mistake and 1 correctly. You won’t know from the test which is the real terrorist. So the chance that our man in the mac is the real thing is 1 in 301.
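The arithmetic can be checked in a few lines of Python, using the article's numbers: 3,000 people, 1 real terrorist, and a test that is 90% accurate both ways.

```python
# Posterior probability that a flagged person is the real terrorist,
# given a 90%-accurate test applied to 3,000 people containing 1 terrorist.
population = 3000
terrorists = 1
accuracy = 0.90

true_positives = terrorists * accuracy                        # ~0.9 correct flags
false_positives = (population - terrorists) * (1 - accuracy)  # ~299.9 wrong flags

posterior = true_positives / (true_positives + false_positives)
print(f"P(terrorist | flagged) = {posterior:.4f}")  # ~0.0030, i.e. about 0.3%
```

The test's accuracy (90%) answers "how often is the test right?"; the question we actually care about – "given a positive result, how likely is this person a terrorist?" – is dominated by the 2,999 innocent people, roughly 300 of whom get flagged by mistake.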

My guess is that very few readers guessed C – I know I didn’t – and the fact that most of us aren’t in the terrorist hunting business is no solace. The problem is that unless we’re serious math types, we probably rely, at least occasionally, on techniques that are actually less effective than we think they are.

Our Pathological Need to Know

One of the hottest business books out there right now is Nassim Nicholas Taleb’s The Black Swan. Taleb, who is equal parts philosopher, math whiz and trading savant, wreaks havoc with the world of financial analysis, and in light of our current economic condition and the factors that helped us get here, you can imagine how a book of this nature might strike a nerve.

Taleb’s central thesis is that a small number of unexpected events – the black swans – explains much of importance that goes on in the world. “We need to understand just how much we will never understand” is the book’s refrain. “The world we live in,” he likes to say, “is vastly different from the world we think we live in.”

…

When it comes to finance, collective wisdom has shown itself to be close to astrology – based on nothing. But according to Taleb, unpredictable events – 9/11, the dotcom bubble, the current financial implosion – are much more common than we think.

He spends a lot of time, for obvious reasons, on finance, but the sum total of Taleb’s thesis is much broader: our need to know blinds us and leads us to rely on tools that can’t be trusted.

The Butterfly Effect

Toward the end of the book we discover that Taleb was a disciple of Benoit Mandelbrot, the father of fractal geometry and the man who introduced him to the principle of sensitivity to initial conditions – better known as the “butterfly effect.” Stated simply, this principle says that even very small changes in a system’s starting state can lead to huge changes in the results, and the implications for most kinds of research and modeling are enormous. To wit, the popular assertion that a butterfly flapping its wings here today can lead to a hurricane next year in China.
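The principle is easy to see in code. The logistic map is a standard toy example of a chaotic system (my illustration, not one from the book): two starting points that differ by one part in a million end up in completely different places.

```python
# Two trajectories of the chaotic logistic map, started a millionth apart.
def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.400000, 0.400001  # initial conditions differing by 0.000001
max_gap = 0.0
for step in range(50):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The gap roughly doubles each step until it is as large as the system itself.
print(f"largest divergence over 50 steps: {max_gap:.3f}")
```

A measurement error of one millionth – far better precision than most real-world data offers – is enough to make the long-run forecast worthless.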

Much research – did I say “much”? How about nearly all – assumes that we can control for non-relevant factors. A variety of sampling methods (randomization perhaps being the most popular) are used to ensure that the only difference between test groups is the factor being tested, but Mandelbrot’s work calls that assumption into question. We may assume that we have controlled for external factors, but we cannot demonstrate it as fact. (I’m mostly beating up research on humans and human systems here – research in the hard sciences is far more precise.)
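As a sketch of what randomization actually buys you (a hypothetical example, not from any study cited here): a random split balances the groups only in expectation – in any single experiment, an unmeasured trait can still differ between groups by chance.

```python
import random

def randomize(subjects, seed=None):
    """Randomly split subjects into treatment and control groups."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Split 100 hypothetical subjects into two groups of 50.
treatment, control = randomize(range(100), seed=42)

# The sizes are balanced by construction, but nothing guarantees that any
# particular confounder is balanced in this one draw – only on average.
print(len(treatment), len(control))  # 50 50
```

Randomization is the right tool; the point is simply that it delivers a statistical guarantee, not a certainty about the single experiment in front of you.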

It is far, far harder to predict than we might suspect, and this goes for those in the business of selling prediction, as well.

How Can We Improve Our Chances of Getting it Right?

So, if we can’t know or predict anything, what can we do? Pack it up and go home?

Not exactly. I’m not here to suggest that the task is hopeless, that it is impossible to know or predict anything. A few strategies can help those who’d like to nudge their confidence levels up a bit, though. Taleb offers some very useful advice – and again, read the book. In addition, here are a few more ideas to think about.

First, there’s value in diversifying your sources. If you rely on one tool, one model, one expert, one information source, well, that’s like going to Vegas and putting your life savings on 32. We’re not talking about predicting anymore, we’re talking about praying for blind luck.
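The statistical case for diversifying can be shown with a toy simulation (the numbers are hypothetical): averaging several independent, equally noisy estimates gets you much closer to the truth, on average, than trusting any single one.

```python
import random

rng = random.Random(0)
true_value = 100.0   # the quantity we're trying to predict
noise = 20.0         # each individual source is quite noisy
trials = 10_000

single_err = 0.0
pooled_err = 0.0
for _ in range(trials):
    # Ten independent sources, each unbiased but noisy.
    estimates = [true_value + rng.gauss(0, noise) for _ in range(10)]
    single_err += abs(estimates[0] - true_value)
    pooled_err += abs(sum(estimates) / len(estimates) - true_value)

print(f"avg error, one source:  {single_err / trials:.1f}")
print(f"avg error, ten pooled:  {pooled_err / trials:.1f}")
```

The caveat, in the spirit of the rest of this piece: the math only works if the sources really are independent. Ten analysts all reading the same model are one source wearing ten hats.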

Second, there’s value in diversifying the type of source. We’re a culture with a rage for quantification – we believe that numbers don’t lie and that the only way to measure and evaluate is with statistics. To be sure, stats can tell us a lot, but the knowledge they give you is often a mile wide and an inch deep (and this assumes you’ve managed to construct a reliable quant instrument – note the observation above about these sorts of assumptions). The most effective research programs in my world (marketing) also rely on qualitative methods – focus groups, interviews, observation, case histories, etc. The value of using multiple techniques is twofold. First, you get a much richer picture of the reality surrounding your research question. Second, if something is a little off, the more independent tools you’re using, the better your chance of catching the error.

Finally, there’s no substitute for a critical mind. Never accept any claim or data point at face value, and be as rigorous in your assessment of methodology as you are of results. And especially, go in fear of people who are married to one method. All too often, as Taleb demonstrates, these people are ideologues who value the beauty and symmetry of their theories above the messiness of reality.

So we probably don’t know as much as we think we do, but if we approach the task of learning and predicting critically, we have a lot better shot.