So You Think You're Smarter Than A CIA Agent


The morning I met Elaine Rich, she was sitting at the kitchen table of her small town home in suburban Maryland trying to estimate refugee flows in Syria.

It wasn't the only question she was considering; there were others:

Will North Korea launch a new multistage missile before May 10, 2014?

Will Russian armed forces enter Kharkiv, Ukraine, by May 10?

Rich's answers to these questions would eventually be evaluated by the intelligence community, but she didn't feel much pressure because this wasn't her full-time gig.

"I'm just a pharmacist," she said. "Nobody cares about me, nobody knows my name, I don't have a professional reputation at stake. And it's this anonymity which actually gives me freedom to make true forecasts."

Rich does make true forecasts; she is curiously good at predicting future world events.

Better Than The Pros

For the past three years, Rich and 3,000 other average people have been quietly making probability estimates about everything from Venezuelan gas subsidies to North Korean politics as part of the Good Judgment Project, an experiment put together by three well-known psychologists and some people inside the intelligence community.

According to one report, the predictions made by the Good Judgment Project are often better even than those of intelligence analysts with access to classified information, and many of the people involved in the project have been astonished by its accuracy.


When Rich, who is in her 60s, first heard about the experiment, she didn't think she would be especially good at predicting world events. She didn't know a lot about international affairs, and she hadn't taken much math in school.

But she signed up, got a little training in how to estimate probabilities from the people running the program, and then was given access to a website that listed dozens of carefully worded questions on events of interest to the intelligence community, along with a place for her to enter her numerical estimate of their likelihood.

"The first two years I did this, all you do is choose numbers," she told me. "You don't have to say anything about what you're thinking, you don't have to justify your numbers. You just choose numbers and then see how your numbers work out."

Rich's numbers worked out incredibly well.

She's in the top 1 percent of the 3,000 forecasters now involved in the experiment, which means she has been classified as a superforecaster, someone who is extremely accurate when predicting stuff like:

Will there be a significant attack on Israeli territory before May 10, 2014?

The Superforecasters

In fact, she's so good she's been put on a special team with other superforecasters whose predictions are reportedly 30 percent better than those of intelligence officers with access to actual classified information.

Rich and her teammates are that good even though all the information they use to make their predictions is available to anyone with access to the Internet.

When I asked if she goes to obscure Internet sources, she shook her head no.

"Usually I just do a Google search," she said.


And that raises this question:

How is it possible that a group of average citizens doing Google searches in their suburban town homes can outpredict members of the United States intelligence community with access to classified information?

How can that be?

Lessons From A Dead Ox

"Everyone has been surprised by these outcomes," said Philip Tetlock, one of the three psychologists who came up with the idea for the Good Judgment Project. The other two are Barbara Mellers and Don Moore.

For most of his professional career, Tetlock studied the problems associated with expert decision making. His book Expert Political Judgment is considered a classic, and almost everyone in the business of thinking about judgment speaks of it with unqualified awe.

All of his studies brought Tetlock to at least two important conclusions.

First, if you want people to get better at making predictions, you need to keep score of how accurate their predictions turn out to be, so they have concrete feedback.
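The article doesn't say how the project scored its forecasters, but a standard way to give probability forecasts the kind of concrete feedback Tetlock describes is a squared-error rule such as the Brier score, sketched here as an illustration:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and the 0/1 outcome.

    Lower is better: a perfect forecast scores 0.0, the worst possible
    forecast scores 1.0.
    """
    return (forecast - outcome) ** 2

# A forecaster said there was an 80 percent chance an event would
# happen, and it did: a small penalty for being confident and right.
print(round(brier_score(0.8, 1), 2))  # 0.04

# The same 80 percent forecast when the event did not happen draws
# a much larger penalty for being confident and wrong.
print(round(brier_score(0.8, 0), 2))  # 0.64
```

Averaging a forecaster's scores over many questions turns "how good are my numbers?" into a single track record, which is the feedback loop Tetlock says improves prediction.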

Second, if you take a large crowd of different people with access to different information and pool their predictions, you will be in much better shape than if you rely on a single very smart person, or even a small group of very smart people.

"The wisdom of crowds is a very important part of this project, and it's an important driver of accuracy," Tetlock said.

The wisdom of crowds is a concept first discovered by the British statistician Francis Galton in 1906.

Galton was at a fair where about 800 people had tried to guess the weight of a dead ox in a competition. After the prize was awarded, Galton collected all the guesses so he could figure out how far off the mark the average guess was.

It turned out that most of the guesses were really bad — way too high or way too low. But when Galton averaged them together, he was shocked:

The dead ox weighed 1,198 pounds. The crowd's average: 1,197.

Finding The True Signal

"There's a lot of noise, a lot of statistical random variation," Tetlock said. "But it's random variation around a signal, a true signal, and when you add all of the random variation on each side of the true signal together, you get closer to the true signal."

In other words, there are errors on every side of the mark, but there is a truth at the center that people are responding to, and if you average a large number of predictions together, the errors will end up canceling each other out, and you are left with a more accurate guess.
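This cancellation is easy to see in a small simulation. The sketch below invents 800 noisy guesses around the ox's true weight (the error spread is an assumption for illustration, not data from Galton's account):

```python
import random
import statistics

random.seed(42)

TRUE_WEIGHT = 1198  # pounds, the ox's actual weight

# Simulate 800 fairgoers: each guess is the true weight plus an
# individual error, some far too high, some far too low.
guesses = [TRUE_WEIGHT + random.gauss(0, 150) for _ in range(800)]

# Individually, the guesses are quite bad on average...
typical_error = statistics.mean(abs(g - TRUE_WEIGHT) for g in guesses)

# ...but errors on either side of the true weight largely cancel
# when the guesses are averaged together.
crowd_average = statistics.mean(guesses)

print(f"Typical individual error: {typical_error:.0f} lb")
print(f"Crowd average: {crowd_average:.0f} lb (true weight: {TRUE_WEIGHT} lb)")
```

The crowd's average lands within a few pounds of the true weight even though a typical individual guess misses by more than a hundred, which is the same pattern Galton saw at the fair.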

That is the wisdom of the crowd.

The point of the Good Judgment Project was to figure out if what was true for the dead ox is true for world events as well.

It is.


In fact, Tetlock and his team have even engineered ways to significantly improve the wisdom of the crowd — all of which greatly surprised Jason Matheny, one of the people in the intelligence community who got the experiment started.

"They've shown that you can significantly improve the accuracy of geopolitical forecasts, compared to methods that had been the state of the art before this project started," he said.

What's so challenging about all of this is the idea that you can get very accurate predictions about geopolitical events without access to secret information, and that classified information doesn't automatically give you an edge over a smart group of average citizens doing Google searches from their kitchen tables.

How Will It Be Used?

Matheny doesn't think there's any risk that the project will replace intelligence services as they exist.

"I think it's a complement to methods rather than a substitute," he said.

Matheny said that though Good Judgment predictions have been extremely accurate on the questions they've asked so far, it's not clear that this process will work in every situation.

"There are likely to be other types of questions for which open source information isn't likely to be enough," he added.

In a couple of weeks, the Good Judgment Project will start recruiting more forecasters for its experiment, and Elaine Rich, the suburban Maryland pharmacist, thinks more people like her should give it a shot.

"Health care people are not likely to be involved in international forecasting," she said. "But I have a feeling that many of them would be good at it."