Other influential prediction gurus—like Nate Silver at FiveThirtyEight and Nate Cohn of The New York Times’s Upshot—also predicted Clinton wins, though in less absolutist terms (and without promises to eat insects). Nonetheless, watching Cohn’s forecaster on The New York Times front page during the crucial hours of election night was the intellectual equivalent of suffering severe whiplash. At the beginning of the night, based on pre-election polls, the Upshot’s forecast gave Clinton an 85 percent chance of victory (her chance of losing, the website helpfully explained, was “about the same probability that an N.F.L. kicker misses a 37-yard field goal”). Within a few hours, as actual results came in, the forecast flipped to a 95 percent chance of victory for Trump.
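How can a probability swing that violently? A live forecast updates its pre-election prior as surprising returns arrive, and under Bayes’ rule a handful of strongly diagnostic surprises compounds quickly. The sketch below is a toy illustration of that mechanism, not The Upshot’s actual model; the likelihood ratio and the count of surprising results are invented purely for the example.

```python
# Toy Bayesian update showing how a live election forecast can flip
# from 85 percent one way to roughly 95+ percent the other as returns arrive.
# NOT The Upshot's model: the 3x likelihood ratio and the five "surprises"
# are assumptions chosen only to illustrate the arithmetic.

prior_clinton = 0.85        # pre-election forecast probability
likelihood_ratio = 3.0      # assumed: each surprise is 3x likelier under a Trump win
n_surprises = 5             # assumed: five Trump-leaning results observed

# Work in odds form: posterior odds = prior odds / cumulative likelihood ratio.
odds = prior_clinton / (1 - prior_clinton)   # 0.85 -> odds of about 5.7 to 1
odds /= likelihood_ratio ** n_surprises      # five 3x surprises divide odds by 243

posterior_clinton = odds / (1 + odds)
print(f"P(Clinton | returns) ~ {posterior_clinton:.3f}")  # ~0.023, i.e. ~98% Trump
```

The point is only that a few results that are much likelier under one outcome than the other are enough to overwhelm a confident prior—which is what viewers watched happen in real time.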

Devoted readers of The Upshot were left utterly flummoxed: How could the probability of a Clinton win go, in just a couple of hours, from the chance of an N.F.L. kicker making a routine field goal to less than that of drawing an ace from the top of a shuffled deck of cards? To readers unschooled in the technical science of statistics, the turnabout gave the impression of funny business. Combined with the disappointment many felt, this has made for an emotional backlash against pollsters.
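For readers who want the arithmetic behind the two analogies (a back-of-the-envelope check, not anything published by The Upshot): N.F.L. kickers make a 37-yard field goal roughly 85 percent of the time, while the top card of a well-shuffled deck is an ace with probability 4/52, about 7.7 percent.

```python
# Back-of-the-envelope comparison of the probabilities in the two analogies.
p_field_goal = 0.85      # rough make rate for a 37-yard N.F.L. field goal
p_ace = 4 / 52           # an ace on top of a well-shuffled 52-card deck
p_clinton_late = 0.05    # the flipped forecast: 95 percent Trump

print(f"field goal:           {p_field_goal:.3f}")   # 0.850 -- the early forecast
print(f"top-card ace:         {p_ace:.3f}")          # 0.077
print(f"late Clinton chance:  {p_clinton_late:.3f}") # 0.050, below the ace draw
```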

And already the robust questioning of the various polling and forecasting methodologies has begun. The American Association for Public Opinion Research has put together a committee to study what went wrong. Wonkish news outlets like Politico have run full articles calling out the failure of a predictive political science to materialize and offering potential reasons why. Most of the questioning has been over whether the forecasting models themselves were the right ones. Were the right variables incorporated? Should there be less reliance on historical factors and more on demographic ones? Should economics figure into a model? Should the “panel model” used by the University of Southern California and the Los Angeles Times, which surveys the same respondents repeatedly over time (see the sketch below), be adopted instead?* After all, they did better than the other forecasters.
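The “panel” distinction is easy to see in miniature. In the hypothetical sketch below, a cross-sectional design draws a fresh sample every wave, so week-to-week movement mixes real opinion change with sampling noise, while a panel design re-interviews the same respondents, so movement reflects changed minds. The electorate size, sample sizes, and vote shares are all invented for illustration.

```python
import random

random.seed(0)

# Hypothetical electorate of 10,000 voters with fixed preferences (True = Clinton).
population = [random.random() < 0.48 for _ in range(10_000)]

def cross_section_wave():
    # Fresh 500-person sample each wave: estimates jitter from sampling noise alone.
    sample = random.sample(population, 500)
    return sum(sample) / len(sample)

panel_ids = random.sample(range(10_000), 500)  # recruit once, re-interview every wave

def panel_wave():
    # Same 500 respondents each wave: any movement would reflect changed minds.
    return sum(population[i] for i in panel_ids) / len(panel_ids)

for wave in range(3):
    print(f"wave {wave}: cross-section {cross_section_wave():.3f}, "
          f"panel {panel_wave():.3f}")
```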

Inevitably, higher education and the mainstream of political science will follow the same line of self-criticism. And no one would be surprised if the American Political Science Association (an association of more than 13,000 professional, academic political scientists) invited Nate Cohn, Nate Silver, or Sam Wang to a panel discussion next summer to argue about how to build a model and to hash out technical differences.

Yet when this happens, the larger philosophical questions—about whether the study of politics is indeed a science—will go unasked, and Americans will have missed a massive opportunity for self-correction in academia, the media, and society at large. To be clear, the problem is by no means mass surveys and polling (though these can always be improved). Polling, used properly, can be an extremely helpful tool for capturing snapshots of widespread beliefs and practices within society. The problem comes with forecasting—the attempt to report predictions as supposedly scientific or quasi-scientific findings akin to work in the natural sciences.