The relative lack of change among public pollsters doesn’t mean that the pre-election polls in Virginia or elsewhere are doomed to miss by as much as they did in 2016. By most accounts, the 2016 polling error took a perfect storm: Just about everything that could break Mr. Trump’s way appears to have done so. Next time, it could be the Democrats who beat turnout expectations and sway undecided voters. There’s also no guarantee that the stark educational divide of the 2016 presidential election will be as prominent without Mr. Trump on the ballot, or as important with midterm voters, who tend to be more educated.

But the lack of change hints at a bleak possibility: a mismatch between the scale of the challenge facing the survey research industry and the capacity of many individual public pollsters to respond.

Education Is a Mystery

It might seem obvious that the 2016 election would drive public pollsters to adopt big changes. But many individual public pollsters were reasonably satisfied with their results, even though the industry as a whole seemed to get it wrong.

It’s not completely unreasonable. Last year’s polling error was an odd one: It was distributed across the battleground states almost perfectly to maximize its electoral consequences. There were large errors in a small number of states where Mrs. Clinton had a big lead, like Wisconsin and Michigan. But elsewhere the errors were more typical, or even nonexistent. Virginia is one such state: Mrs. Clinton led by five or six points in all the final polling averages; in the end, she won by 5.3 points.

For pollsters who didn’t take any surveys in the Midwest, most or all of their results probably fell within the margin of error. Their results might have leaned toward Mrs. Clinton, like everyone else’s, but that could be explained away: There is a lot of evidence that undecided voters broke toward Mr. Trump. And most public pollsters didn’t conduct enough polls, late enough in the race, to be sure of just how well or poorly they really did.
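As a reminder of the scale involved, the standard textbook margin of error for a single candidate’s share, under the simple-random-sample assumption, is

```latex
\mathrm{MOE} \;=\; 1.96\,\sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}}
\;\approx\; 1.96\,\sqrt{\frac{0.5 \times 0.5}{1000}}
\;\approx\; 3.1\ \text{points for } n = 1000.
```

The margin on the gap between two candidates is roughly twice that, so a three- or four-point miss on the lead is easy to wave off as noise.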

Some pollsters did take enough surveys late enough in the race to merit a re-examination, but they didn’t think education was decisive. Patrick Murray of Monmouth University, for instance, found that weighting by education would have explained only one percentage point of bias in his surveys.
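For readers unfamiliar with the mechanics, here is a minimal sketch of what “weighting by education” means in practice. The respondents and benchmark shares are entirely hypothetical, and real pollsters weight on many variables at once against benchmarks like Census figures; this only illustrates the basic post-stratification arithmetic.

```python
# A minimal, illustrative sketch of post-stratification weighting by
# education. All respondents and benchmark shares here are hypothetical.

# Each tuple is (education group, vote choice).
sample = [
    ("college", "Clinton"), ("college", "Clinton"), ("college", "Clinton"),
    ("college", "Clinton"), ("college", "Clinton"), ("college", "Trump"),
    ("no_college", "Clinton"), ("no_college", "Trump"),
    ("no_college", "Trump"), ("no_college", "Trump"),
]

# Assumed electorate composition (hypothetical; a real pollster would use
# benchmarks such as the Census Bureau's Current Population Survey).
population_share = {"college": 0.40, "no_college": 0.60}

# Education shares in the raw sample: college graduates are
# overrepresented here (60% of the sample vs. 40% of the electorate).
n = len(sample)
sample_share = {
    group: sum(1 for educ, _ in sample if educ == group) / n
    for group in population_share
}

# Post-stratification weight for each group: population share divided by
# sample share, so underrepresented groups count for more.
weight = {g: population_share[g] / sample_share[g] for g in population_share}

def support(candidate):
    """Weighted share of respondents backing `candidate`."""
    total = sum(weight[educ] for educ, _ in sample)
    backing = sum(weight[educ] for educ, vote in sample if vote == candidate)
    return backing / total

# Unweighted, Clinton leads 60-40 in this toy sample; weighting the
# non-college group up flips it to roughly 48-52.
print(f"Clinton: {support('Clinton'):.1%}")  # -> 48.3%
print(f"Trump:   {support('Trump'):.1%}")    # -> 51.7%
```

In this deliberately skewed toy sample, correcting the education mix moves the topline by several points; Mr. Murray’s finding was that the equivalent correction in his actual surveys moved them by only about one.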

But perhaps the bigger issue is that education is a mystery: It’s hard for pollsters to even know if they’re getting it wrong, let alone to fix it.