Political science

When the House passed the Flake amendment to cut NSF funding for political science, The New York Times (and most other newspapers) did not find the event interesting enough to merit valuable newspaper space. Why, then, does the editorial page seem so eager to debunk political science as a “science”? We political scientists have barely recovered from the inferiority complexes we allegedly suffer over our apparent inability to overcome “physics envy,” and now we hear that “political scientists are not real scientists because they can’t predict the future.”

One would almost be tempted to think that the message conveyed in these pieces suits the editorial page editors just fine. Indeed, Stevens explicitly writes that policy makers could glean more astute insights from reading The New York Times than from reading academic journals. If this were the purpose of placing the op-ed, then the editorial board has been fooled by what can charitably be described as Stevens’ selective reading of the prediction literature, especially Tetlock’s book. Here is how Stevens summarizes this research:

Research aimed at political prediction is doomed to fail. At least if the idea is to predict more accurately than a dart-throwing chimp.

But Tetlock did not evaluate the predictive ability of political science research; he evaluated “experts” whom he “exhorted [..] to inchoate private hunches into precise public predictions” (p. 216). As Henry points out, some of these experts have political science PhDs, but they are mostly not political science academics. Moreover, Tetlock’s purpose was not to evaluate the quality of research but the quality of the expert opinion that guides public debate and government advice.

Two points are worth emphasizing. The first is that the media, and especially editorial page editors, make matters worse by ignoring the track records of pundits and, indeed, by rewarding the very pundits whose personal qualities make them least likely to predict successfully. Here is how Tetlock summarizes the implications of his research for the media:

The sanguine view is that as long as those selling expertise compete vigorously for the attention of discriminating buyers (the mass media), market mechanisms will assure quality control. Pundits who make it into newspaper opinion pages or onto television and radio must have good track records; otherwise, they would have been weeded out.

Skeptics, however, warn that the mass media dictate the voices we hear and are less interested in reasoned debate than in catering to popular prejudices. As a result, fame could be negatively, not positively, correlated with long-run accuracy.

Until recently, no one knew who was right, because no one was keeping score. But the results of a 20-year research project now suggest that the skeptics are closer to the truth.

I describe the project in detail in my book Expert Political Judgment: How Good Is It? How Can We Know? The basic idea was to solicit thousands of predictions from hundreds of experts about the fates of dozens of countries, and then score the predictions for accuracy. We find that the media not only fail to weed out bad ideas, but that they often favor bad ideas, especially when the truth is too messy to be packaged neatly.

The second point is that simple quantitative models generally predict better than experts do, regardless of the experts’ education. This is not because the models are especially accurate, or because experts know nothing, but because people are terrible at translating their knowledge into probabilistic assessments of what will happen. This is why a simple model correctly predicts the outcomes of 75% of Supreme Court cases, while constitutional law experts (professors) get only 59% right. Since predictive success is not, as Stevens would have it, the gold standard for social science, this has not yet led to calls to do away with constitutional law experts or to allocate their research funds at random.
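The arithmetic behind this kind of gap is easy to simulate. The sketch below uses entirely synthetic data (it is not the actual Supreme Court study, and the features and noise rates are assumptions chosen for illustration): a fixed decision rule that always follows one observable cue outperforms a simulated “expert” who knows the same cue but second-guesses it at random when converting knowledge into predictions.

```python
import random

random.seed(0)

# Hypothetical setup: each case has one observable feature (say, the
# direction of the lower-court ruling), and the true outcome follows
# that feature 75% of the time.
def make_cases(n=10_000):
    cases = []
    for _ in range(n):
        feature = random.choice([0, 1])
        outcome = feature if random.random() < 0.75 else 1 - feature
        cases.append((feature, outcome))
    return cases

def simple_model(feature):
    # A fixed decision rule: always follow the feature.
    return feature

def noisy_expert(feature):
    # A simulated expert who sees the same feature but overrides it
    # 30% of the time, overweighting case-specific hunches.
    return feature if random.random() < 0.70 else 1 - feature

cases = make_cases()
model_acc = sum(simple_model(f) == y for f, y in cases) / len(cases)
expert_acc = sum(noisy_expert(f) == y for f, y in cases) / len(cases)
print(f"simple rule accuracy:    {model_acc:.2f}")   # ~0.75 by construction
print(f"noisy 'expert' accuracy: {expert_acc:.2f}")  # ~0.60 in expectation
```

The point of the toy example is that the expert loses accuracy not from ignorance (the expert sees the same information as the model) but purely from inconsistency in applying it, which is exactly the translation problem described above.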