The desire to know what's next is a powerful urge. History and myth are rife with examples of people trying to predict the future — medieval astrologers reading the stars for clues, ancient Greeks coming to hear Apollo speak through the oracle at Delphi, Romans performing divination rites by interpreting sheep entrails. And today, whole industries exist that aren't too different: Psychics advertise two-dollar-per-minute hotlines, newspapers print horoscopes, palm readers set up booths at street festivals.

Taking stock of the new year means contemplating a blank slate. It means staring out into the unknown, more acutely aware than usual that we don't know what will happen next. As we look ahead, we brace ourselves for the fact that every day of the next year will bring news. People we know will get engaged. There will be elections, military battles, and natural disasters. The world will change and our lives will change, and when it's over we'll look back and wonder if we could have seen any of it coming.


Dubious as it may seem in those contexts, the dream of prediction also attracts a very different breed of prognosticators: one armed not with sheep guts, but with the tools of math and science. Instead of tea leaves or Tarot cards, they wield hard drives filled with data. They conjure up systems, not fantasies. By starting with information about what has already happened — immense quantities of it — and finding inventive ways to interpret it, experts in fields from public health to national security are building increasingly sophisticated predictive models, taking advantage of new technology and new ideas about how the world is organized to push the frontiers of what we can predict.

The idea that you can systematize forecasting isn't entirely new, especially in the realm of business and finance — an economics-oriented industry group called the International Institute of Forecasters is hosting its 32nd annual meeting in Boston next June. But recent years have seen an explosion of interest and creativity in the realm of data-driven soothsaying, and some in the field predict — well, they think — that they are on the cusp of something big.


"We are at a different place in analysis than we were before. We know how to find trends, we know how to handle data in [new] ways," said Thomas Wallsten, a psychologist at the University of Maryland at College Park who is helping to develop a cutting-edge model for prediction that relies on crowdsourcing. He added, "We've become much more systematic."

There are prediction techniques of all sorts in development. Some are designed to predict the spread of disease; some to determine fluctuations in the popularity of tourist destinations. Some are geared towards specific problems, like predicting the results of an election or determining the amount of electricity that a town will use during the winter. Other tools are more generic, and are flexible enough to be used by, say, both retailers looking to predict popular products and global policy analysts trying to envision future conflict.

Meanwhile, as our ability to mine vast amounts of information improves, the effort to invent the next generation of prediction tools has been fueled by an explosion of personal data, which offer the tantalizing prospect of much more fine-grained predictions through the analysis of details about people's lives.

"We're finally in a position where people volunteer information about their specific activities, often their location, who they're with, what they're doing, how they're feeling about what they're doing, what they're talking about," said Johan Bollen, a professor at the School of Informatics and Computing at Indiana University Bloomington who developed a way to predict the ups and downs of the stock market based on Twitter activity. "We've never had data like that before, at least not at that level of granularity." Bollen added: "Right now it's a gold rush."


One thing the latest prediction techniques still aren't necessarily good at, however, is the thing we want most: telling us exactly what we're in for in the year 2012 and beyond. That's because, by and large, the models used to make predictions tend to be very specialized, or proprietary, or simply untested. But as researchers continue to refine them, that may begin to change — and this, in turn, promises to raise new questions about just how much we want to know about what lies ahead.

If you want to know what's going to happen next, it might seem natural to ask an expert — but when it comes to accurate predictions, it turns out that one thing you should stay away from is expert opinion. That was the conclusion reached by University of Pennsylvania psychologist Philip Tetlock, who over the course of 20 years tracked the predictions of 284 experts who had made careers of "commenting or offering advice on political and economic trends." His findings were startling: The academics, analysts, and journalists in his sample weren't significantly better at predicting events in their fields than nonexperts, and most of them would have been beaten by a "dart-throwing chimpanzee."


Tetlock's findings, which he collected in a 2005 book called "Expert Political Judgment: How Good Is It? How Can We Know?", were disturbing because they seemed to imply that prediction was impossible. But Tetlock's study did not cause him to give up on forecasting entirely — it just convinced him that individual experts were never going to be very good at it. There could be other ways to predict the future, he believed — ones that relied on formulas instead of opinions, and which could be tested, tweaked, and improved rather than merely trusted.

The basic idea behind this kind of prediction is the same one that propels all of science: You create a hypothesis based on your understanding of whatever you're trying to study, test it to see if it fits with reality, and then make adjustments if it doesn't. Science essentially offers predictions: how high a ball will bounce if you drop it off the table; what happens if you mix two chemicals together. This kind of certainty has long been elusive in the fuzzier realms of politics and culture, but an increasing amount of data about how we live — and an ever-improving ability to process it — has changed the ways we can apply that basic insight. Criminologists are crunching vast amounts of crime data to predict where in a given city murders and robberies are likely to take place. Terrorism researchers mine data on attacks for patterns, and turn them into clues about where future attacks are likely to take place.


Geopolitics might seem like a nearly impossible place to apply the scientific method — it seems too complex and involves too many competing forces — but it's actually proven to be a productive testing ground for ambitious prediction techniques. One influential practitioner is Bruce Bueno de Mesquita, a political scientist at New York University and the author of "The Predictioneer's Game," who built a mathematical model that predicts what leaders around the world will do when they find themselves under domestic and foreign pressure. Bueno de Mesquita's technique, which has been used to make thousands of predictions for the CIA, takes into account a wide range of facts about a given situation — what do the participants want? how influential are they? — to figure out the most likely outcome. Using insights from a branch of math known as game theory, Bueno de Mesquita built a prototype of his model in 1979 that successfully predicted that a politician named Charan Singh would be chosen as the next prime minister of India. Singh was an improbable candidate, and Bueno de Mesquita, who was trained as a specialist on India, was as surprised by the prediction as any other serious expert would have been at the time — and yet the model turned out to be correct.


While he doesn't often publish his predictions anymore — he does most of his prognosticating as a private consultant for businesses and government agencies — Bueno de Mesquita still trains students to use his model, which has been refined over the past 30 years even as it's come under fire from critics for placing too much faith in people's tendency to make rational decisions. During the height of the Arab Spring, he said, his students predicted that Egypt would be ruled by a coalition composed of the military and the Muslim Brotherhood. More recently, they predicted that if Kim Jong Il were to die, he would be succeeded by Kim Jong Un, who would take steps to mildly loosen the government's hold over the economy. Bueno de Mesquita said his model also predicts that Syria will liberalize slightly — "not become a democracy but loosen the reins maybe 20 percent compared to where they were before the uprising" — and that Libya will be "no better off or worse" than it was under the Gadhafi regime.

In a sense, what techniques like Bueno de Mesquita's try to do is take the element of human judgment out of prediction: The idea is that if you can ignore opinions and find a more systematic way to interpret the facts, you stand a better chance of being right. But what if the data itself consists not of facts but people's opinions? The idea of crowdsourcing knowledge has gained significant attention in recent years, now that it's possible to quickly reach huge numbers of people through the Internet. The premise is simple enough: If you ask enough people, their collective wisdom will average out to something closer to true than any one of them could have provided on his or her own.
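
The averaging premise is easy to simulate. Here is a minimal sketch, with entirely invented numbers, assuming each person's guess is the true answer plus independent random noise:

```python
import random

random.seed(0)

true_value = 70.0  # the quantity the crowd is guessing (hypothetical)

# Each forecaster's guess is the truth plus independent noise.
guesses = [true_value + random.gauss(0, 15) for _ in range(1000)]

crowd_average = sum(guesses) / len(guesses)

# A typical individual is off by a wide margin...
typical_error = sum(abs(g - true_value) for g in guesses) / len(guesses)

# ...but the individual errors tend to cancel, so the crowd's average
# lands much closer to the truth than a typical guess does.
crowd_error = abs(crowd_average - true_value)
print(round(typical_error, 1), round(crowd_error, 1))
```

The cancellation only works if the errors are independent; if everyone shares the same bias, averaging simply reproduces it.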

In its simplest version, crowdsourcing is essentially a high-tech poll. But a number of researchers are now trying to turn it into something more precise: a system that can detect the best predictors and give weight to their opinions. Earlier this year, a government group within the Office of the Director of National Intelligence known as the Intelligence Advanced Research Projects Activity, or IARPA, launched a competition in which five teams of computer scientists, psychologists, statisticians, and other specialists at universities and private firms around the country try to build the best approach to crowdsourcing the future. The contest will run a total of four years; at the end, a spokesperson for IARPA said, the hope is to have "at least one new method that substantially improves forecast accuracy and that can be deployed to the intelligence community."

A variation on crowdsourcing that doesn't require anyone's active participation is just scouring the information that people voluntarily post online. Suddenly, there is data being generated by millions of people every second: On Twitter, a portion of the population is submitting for the record what they're thinking, feeling, and doing at all times. By slurping up all that data and categorizing tweets according to the mood they seemed to indicate, Johan Bollen and his colleagues were able to build a model based on a correlation between the level of anxiety expressed on Twitter and the performance of the stock market. According to their analysis, an uptick in messages associated with anxious feelings was usually followed three days later by a decline in the market.
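
Bollen's actual model is more elaborate (and now proprietary), but the core statistical move — checking whether today's mood lines up with the market's move a few days later — can be sketched with made-up numbers:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical daily series: the share of tweets scored as "anxious,"
# and the market's daily return in percent.
anxiety = [0.10, 0.12, 0.30, 0.11, 0.09, 0.28, 0.10, 0.13, 0.31, 0.12]
returns = [0.4, 0.2, 0.1, 0.3, 0.5, -0.9, 0.2, 0.3, -0.7, 0.1, 0.4, -0.8, 0.2]

lag = 3  # compare anxiety on day t with the return on day t + lag
r = pearson(anxiety, returns[lag:lag + len(anxiety)])
print(round(r, 2))  # strongly negative: anxiety spikes precede declines
```

In this toy data the correlation comes out strongly negative, mirroring the pattern Bollen's team reported; with real data, the hard part is showing the relationship holds out of sample rather than by coincidence.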

Another variation on crowdsourcing — something that draws its inspiration from stock markets — is a prediction market, a virtual marketplace that treats ideas about the future as commodities. Participants bid on something that may or may not happen, and they can earn money — sometimes real, sometimes imaginary — if their predictions prove right. As the market becomes more active, the price of an item presumably becomes a better reflection of the likelihood it will happen. One early attempt to harness prediction markets ended badly: After the Sept. 11 attacks, the Pentagon set up a market to predict the next big terrorist strike, which it canceled after the program provoked outrage in Congress. But the use of prediction markets has proliferated in the years since, and there are a number of companies that specialize in designing them. There are also several popular prediction markets online — most notably Intrade, which was founded in 1999 and hosts scores of markets in categories from entertainment to politics to technology.
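
The article doesn't describe Intrade's internal mechanics, but one standard design for an automated prediction market — the logarithmic market scoring rule associated with Robin Hanson's work — shows how trading turns into a probability. The quantities below are invented for illustration:

```python
import math

def lmsr_price(q_yes, q_no, b=100.0):
    """Instantaneous price of the YES share under a logarithmic market
    scoring rule; it behaves like a probability between 0 and 1."""
    ey, en = math.exp(q_yes / b), math.exp(q_no / b)
    return ey / (ey + en)

def lmsr_cost(q_yes, q_no, b=100.0):
    """Market maker's cost function; a trade's price is the change in cost."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

# With no shares outstanding, the market is agnostic: price 0.50.
q_yes = q_no = 0.0
print(round(lmsr_price(q_yes, q_no), 2))  # 0.5

# A trader who believes the event is likely buys 120 YES shares;
# what they pay is the change in the cost function...
cost = lmsr_cost(q_yes + 120, q_no) - lmsr_cost(q_yes, q_no)
q_yes += 120

# ...and the quoted probability rises to reflect the new information.
print(round(lmsr_price(q_yes, q_no), 2))  # 0.77
```

The parameter `b` controls how much money it takes to move the price; a larger `b` makes the market deeper and harder to swing with a single trade.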

Unlike many predictive systems, the Intrade markets generate clear answers about specific events: As of this writing, Intrade was giving Mitt Romney a 75.6 percent chance of becoming the Republican presidential nominee. The chance that the Supreme Court will throw out the individual mandate of Obama's health care plan before the end of 2012: 50 percent. The likelihood that scientists will observe the Higgs boson particle by that date: 57 percent.

What is the future of prediction? Given the intellectual horsepower dedicated to the problem, it's tempting to forecast that we'll continue to get better at it. Over time, our technology is growing more advanced, our capacity to make sense of the numbers is expanding, and our testing of past predictions is becoming more rigorous.

Even if someone finally nails down an incredibly accurate forecasting model, however, it's possible that that tool will stay well out of reach for most people. A prediction system that works is a supremely valuable thing, and its makers don't necessarily have an incentive to share. Bollen and his colleagues, for instance, have commercialized their Twitter model into a form they can sell to hedge funds; over the summer, they entered into an agreement with a British hedge fund that has been using the model as the basis for its investment strategy. The fact is, some of the most sophisticated prediction mechanisms are used in the realm of finance, where they are, predictably enough, closely guarded.

As for the experts on prediction — let's just say they're restrained in their forecasts. University of Massachusetts Amherst professor emeritus P. Geoffrey Allen, who is the chair of the organizing committee for the 32nd Annual International Symposium on Forecasting, said that while methods are growing more sophisticated, forecasting remains "an exercise in humility."

But the real question, when it comes to predicting the future of forecasting, may not be whether we can or can't forecast accurately — it's whether we want to. Robin Hanson, an economist at George Mason University and a pioneer of prediction market design, thinks that what's holding back our ability to predict is not technology or a lack of ingenuity. He believes companies and governments already have much of what they need to be a lot better at predicting the future, and that the reason they're not taking more advantage of it is that in many cases, having accurate predictions in hand makes managers, CEOs, and government officials accountable in a way that lots of them don't want to be.

That's because knowing the future can be a scary thing: It means genuinely answering for the costs of our decisions, confronting the likelihood of failure, seeing that arrows point down as often as they point up. When we're offered a look into the crystal ball, it may in fact be human nature to turn away.

"We're two-faced," Hanson said. "We like to talk as though we wanted better forecasts, but often we have other agendas. When the opportunity to know the future presents itself — as, increasingly, it will — we may end up discovering that we'd rather stay in the dark."

Leon Neyfakh is the staff writer for Ideas. E-mail lneyfakh@globe.com.