In the last three years, America's military and intelligence agencies have spent more than $125 million on computer models that are supposed to forecast political unrest. It's the latest episode in Washington's four-decade dalliance with future-spotting programs. But if any of these algorithms saw the upheaval in Egypt coming, the spooks and the generals are keeping the predictions very quiet.

Instead, the head of the CIA is getting hauled in front of Congress, making calls about Egypt's future based on what he read in the press, and getting proven wrong hours later. Meanwhile, an array of Pentagon-backed social scientists, software engineers and computer modelers are working to assemble forecasting tools that can reliably pick up on geopolitical trends worldwide. It remains a distant goal.

"All of our models are bad, some are less bad than others," says Mark Abdollahian, a political scientist and executive at Sentia Group, which has built dozens of predictive models for government agencies.

"We do better than human estimates, but not by much," Abdollahian adds. "But think of this like Las Vegas. In blackjack, if you can do four percent better than the average, you're making real money."

Over the past three years, the Office of the Secretary of Defense has handed out $90 million to more than 50 research labs to assemble some basic tools, theories and processes that might one day produce a more reliable prediction system. None are expected to result in the digital equivalent of crystal balls any time soon.

In the near term, Pentagon insiders say, the most promising forecasting effort comes out of Lockheed Martin's Advanced Technology Laboratories in Cherry Hill, New Jersey. And even the results from this Darpa-funded Integrated Crisis Early Warning System (ICEWS) have been imperfect, at best. ICEWS modelers were able to forecast four of 16 rebellions, political upheavals and incidents of ethnic violence to the quarter in which they occurred. Nine of the 16 events were predicted within the year, according to a 2010 journal article [.pdf] from Sean O'Brien, ICEWS' program manager at Darpa.

Darpa spent $38 million on the program, and is now working with Lockheed and the United States Pacific Command to make the model a more permanent component of the military's planning process. There are no plans, at the moment, to use ICEWS for forecasting in the Middle East.

ICEWS is only the latest in a long, long series of prediction programs to come out of the Pentagon's way-out research shop. Back in the early 1980s, products from a Darpa crisis-warning system program allegedly filled President Reagan's daily intelligence briefing, with uncertain results. In the late '80s, analyst Bruce Bueno de Mesquita began his modeling work. According to The New York Times Magazine, Bueno de Mesquita picked Iranian leader Ayatollah Khomeini's successor five years ahead of time, and forecast Pakistani President Pervez Musharraf's ouster – to the month.

One former CIA analyst claims that Bueno de Mesquita was "accurate 90 percent of the time." It's an assertion that no one – inside the government or out – has independently verified. Perhaps someone at the CIA really is relying on the model, and it really is that good. That hasn't stopped the agency from swinging and missing for decades on Middle East intelligence estimates.

In 2002, the military's National Defense University began tapping Abdollahian and his "Senturion predictive political simulation model" to forecast unfolding events in Iraq. According to Abdollahian, the model accurately predicted that Bush administration favorite Ahmed Chalabi would prove to be a lousy ally, and that both Sunni and Shi'ite insurgencies would grow to seriously challenge U.S. forces.

Both Abdollahian and Bueno de Mesquita take a similar approach to the prediction game. They interview lots and lots of experts about the key players in a given field. Then they program software agents to replicate the behavior of those players. Finally, they let the agents loose, to see what they'll do next. The method is useful, but limited. For every new situation, the modelers have to interview new experts, and program new agents.
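In spirit, that expert-elicitation approach boils down to giving each stakeholder a position, a level of influence and a degree of commitment, then simulating how they pull on one another. The sketch below is a toy illustration of that idea only — the actor names, numbers and the simple "drift toward the weighted mean" rule are invented for this example, not taken from Sentia's or Bueno de Mesquita's actual models.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    position: float   # stance on the issue, on a 0-100 scale
    influence: float  # capability / resources
    salience: float   # how much the actor cares about this issue

def forecast(actors, rounds=10, pull=0.2):
    """Each round, every actor drifts toward the influence-weighted
    mean position -- a crude stand-in for expert-coded bargaining."""
    for _ in range(rounds):
        weights = [a.influence * a.salience for a in actors]
        mean = sum(a.position * w for a, w in zip(actors, weights)) / sum(weights)
        for a in actors:
            a.position += pull * (mean - a.position)
    return mean

# Hypothetical stakeholders, coded from (imaginary) expert interviews.
actors = [
    Actor("regime",     20, influence=0.9, salience=0.8),
    Actor("opposition", 80, influence=0.5, salience=1.0),
    Actor("military",   40, influence=0.8, salience=0.6),
]
outcome = forecast(actors)  # predicted policy outcome on the 0-100 scale
```

The expensive part, as the modelers note, isn't the simulation loop — it's the interviews needed to assign those numbers for every new crisis.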

A second approach is to look at the big social, economic and demographic forces at work in a region – the average age, the degree of political freedom, the gross domestic product per capita – and predict accordingly. This "macro-structural" approach can be helpful in figuring out long-term trends, and forecasting general levels of instability; O'Brien relied on it heavily, when he worked for the Army. For spotting specific events, however, it's not enough.
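A macro-structural forecast is, at bottom, a risk score built from a handful of country-level indicators. Here's a deliberately simplified sketch of the idea; the coefficients and thresholds are made up for illustration and aren't drawn from any published model.

```python
import math

def instability_risk(median_age, polity_score, gdp_per_capita):
    """Toy macro-structural index: younger populations, less political
    freedom (polity_score runs -10 autocracy .. +10 democracy) and
    lower income all push the score up. Coefficients are invented."""
    z = (2.0 * (25 - median_age) / 25
         + 1.5 * (10 - polity_score) / 20
         + 1.0 * (5000 - gdp_per_capita) / 5000)
    return 1 / (1 + math.exp(-z))  # squash to a 0-1 "risk" value

# A hypothetical young, autocratic, poor country vs. an older, free, rich one.
country_a = instability_risk(median_age=20, polity_score=-8, gdp_per_capita=2000)
country_b = instability_risk(median_age=40, polity_score=9, gdp_per_capita=40000)
```

A score like this can flag which countries are brittle over a decade — but, as the article notes, it says nothing about which week the streets will fill.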

The third method is to read the news. Or rather, to have algorithms read it. There are plenty of programs now in place that can parse media reports, tease out who is doing what to whom, and then put it all into a database. Grab enough of this so-called "event data" about the past and present, the modelers say, and you can make calls about the future. Essentially, that's the promise of Recorded Future, the web-scouring startup backed by the investment arms of Google and the CIA.
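The "who did what to whom" extraction step can be pictured as keyword matching against actor and verb dictionaries. Real event coders rely on dictionaries with many thousands of entries and grammatical parsing; the following is a bare-bones stand-in with invented codes, just to show the shape of the output.

```python
# Tiny stand-in dictionaries; production coders use vastly larger ones.
ACTORS = {"police": "GOV", "protesters": "OPP", "army": "MIL"}
VERBS = {"arrest": "COERCE", "fire on": "ATTACK", "negotiate": "CONSULT"}

def code_event(sentence):
    """Extract a (source, event, target) triple by keyword matching,
    ordering actors by where they appear in the sentence."""
    s = sentence.lower()
    actors = sorted((s.find(w), code) for w, code in ACTORS.items() if w in s)
    event = next((code for verb, code in VERBS.items() if verb in s), None)
    if len(actors) >= 2 and event:
        return (actors[0][1], event, actors[1][1])
    return None

code_event("Police fire on protesters in the capital")
# -> ("GOV", "ATTACK", "OPP")
```

Pile up millions of such triples over time, and patterns in the stream — who is escalating against whom, and how fast — become the raw material for forecasts.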

But, of course, news reports are notoriously spotty, especially from a conflict zone. It's one of the reasons why physicist Sean Gourley's much heralded, tidy-looking equation to explain the chaos of war failed to impress in military circles. Relying on media accounts, it was unable to forecast the outcome of the 2007 military surge in Iraq.

ICEWS is an attempt to combine all three approaches, and ground predictions in social science theory, not just best guesses. In a preliminary test, the program was fed event data about Pacific nations from 2004 and 2005. Then the software was asked to predict when and where insurrections, international crises and domestic unrest would occur. Correctly calling nine of 16 events within the year they happened was considered hot stuff in the modeling world.

But it doesn't even meet the threshold that O'Brien, the Darpa program manager and long-time military social scientist, set for strong models. If "we cannot correctly predict over 90% of the cases with which our model is concerned," he writes, "then we have little basis to assert our understanding of a phenomenon, never mind our ability to explain it."
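The gap between the test results and that bar is easy to quantify from the figures above:

```python
total = 16
quarter_rate = 4 / total   # events called to the correct quarter: 25%
year_rate = 9 / total      # events called within the correct year: ~56%
threshold = 0.90           # O'Brien's bar for a model worth trusting

print(f"{quarter_rate:.0%} to the quarter, {year_rate:.0%} to the year, "
      f"vs. a {threshold:.0%} bar")
```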

Photo: AJE/Flickr
