More importantly, if you look only at last-week polls and take the error for each election from 1943 to 2017, the mean stays at 2.1 percent. Actually, that’s not quite true—in this century it dropped to 2.0 percent. Polling remains pretty OK. “It is not what we quite expected when we started,” Jennings says.
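The mean error here is just the average gap, in percentage points, between each final-week poll and the eventual result. A minimal sketch of that arithmetic, with poll numbers invented purely for illustration:

```python
# Mean absolute polling error: the average gap between each
# last-week poll's estimate and the actual vote share, in points.
# All figures below are hypothetical.

def mean_absolute_error(polls, result):
    """polls: last-week estimates of a candidate's vote share (%).
    result: the candidate's actual vote share (%)."""
    return sum(abs(p - result) for p in polls) / len(polls)

last_week_polls = [46.0, 48.5, 44.0, 47.5]  # hypothetical estimates
actual_share = 46.5                         # hypothetical outcome

print(mean_absolute_error(last_week_polls, actual_share))  # 1.5
```

An error of 1.5 points for these made-up polls would sit comfortably inside the 2-point historical average the researchers describe.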

In 2016 in the US, Jennings says, “the actual national opinion polls weren’t extraordinarily wrong. They were in line with the sorts of errors we see historically.” It’s just that people kind of expected them to be less wrong. “Historically, technically advanced societies think these methods are perfect,” he says, “when of course they have error built in.”

Sure, some polls are just lousy—go check the archives at the Dewey Presidential Library for more on that. Really, though, all surprises tend to stand out. When polls calmly and stably barrel toward a foregone conclusion, no one remembers. “There weren’t a lot of complaints in 2008. There weren’t a lot of complaints in 2012,” says Peter Brown, assistant director of the Quinnipiac University Poll. But 2016 was a little different. “There were more polls than in the recent past that did not perform up to their previous results in elections like ’08 and ’12.”

Also, according to AAPOR’s review of 2016, national polls actually reflected the outcome of the presidential race pretty well—Hillary Clinton did, after all, win the popular vote. State polls, working with smaller samples, showed more uncertainty and underestimated Trump support—and had to contend with a lot of people changing their minds in the last week of the campaign. Polls that year also didn’t correct for samples that overrepresented college graduates, who were more likely to support Clinton.
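The standard fix for that kind of skew is weighting: scale each group’s responses by the ratio of its share of the electorate to its share of the sample. A rough sketch of that adjustment, with every figure invented for illustration:

```python
# Weighting sketch: correct a sample that over-represents college
# graduates. All shares and support numbers here are made up.

def weighted_estimate(sample_share, population_share, support):
    """Each dict maps a group to its share or its candidate support.
    A group's weight is population share / sample share; the return
    value is the reweighted overall support estimate."""
    total = 0.0
    for group in sample_share:
        weight = population_share[group] / sample_share[group]
        total += sample_share[group] * weight * support[group]
    return total

sample_share = {"college": 0.50, "no_college": 0.50}      # who answered
population_share = {"college": 0.35, "no_college": 0.65}  # electorate
support = {"college": 0.55, "no_college": 0.40}           # candidate support

print(weighted_estimate(sample_share, population_share, support))
```

With these made-up numbers, the unweighted sample would put the candidate at 47.5 percent; weighting the overrepresented graduates down pulls the estimate to about 45 percent.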

In a similarly methodological vein, though, Jennings and Wlezien’s work has its own limitations. In a culture where civilians like you and me watch polls obsessively, their focus on the last week before election day might not be the right lens. That’s especially important if it’s true, as some observers hypothesize, that pollsters “herd” in the final days, wanting to make sure their data is in line with their colleagues’ and competitors’.

“It’s a narrow and limited way to look at how good political polls are,” says Jon Cohen, chief research officer at SurveyMonkey. Cohen says he has a lot of respect for the researchers’ work, but that “these authors are telling a story that is in some ways orthogonal to how people experienced the election, not just because of polls that came out a week or 48 hours before Election Day but because of what the polls led them to believe over the entire course of the campaign.”

Generally, pollsters agree that response rates remain a real problem. Online polling and so-called interactive voice response polling, in which a bot interviews you over the phone, might not be as good as random-digit-dial phone polls were a half-century ago. At the turn of the century, the paper notes, perhaps a third of the people a pollster contacted would actually respond. Now it’s fewer than one in 10. That means surveys are less representative, less random, and more likely to miss trends. “Does the universe of voters with cells differ from the universe of voters who don’t have cells?” asks Brown. “If it was the same universe, you wouldn’t need to call cell phones.”

Internet polling has similar issues. If you preselect a sample to poll via internet, as some pollsters do, that’s by definition not random. That doesn’t mean it can’t be accurate, but as a method it requires some new statistical thinking. “Pollsters are constantly struggling with issues around changing electorates and changing technology,” Jennings says. “Not many of them are complacent. But it’s some reassurance that things aren’t getting worse.”

Meanwhile, if more of us are going to watch polls, it would be nice if pollsters started working on ways to better express the uncertainty around their numbers. (Cohen says that’s why SurveyMonkey issued multiple looks at the special election in Alabama last year, based in part on different turnout scenarios.) “Ultimately it would be nice if we could assess polls on their methodologies and inputs and not just on the output,” Cohen says. “But that’s the long game.” And it’s worth keeping in mind when you start clicking on those midterm election polling results this spring.
