But Internet use correlates inversely with both age and the likelihood of voting, which makes this a more severe problem for predicting elections. While all but 3 percent of those ages 18 to 29 use the Internet, they made up just 13 percent of the 2014 electorate, according to the exit poll conducted by Edison Research. Some 40 percent of those 65 and older do not use the Internet, but they made up 22 percent of voters.

A much bigger issue is that we simply have not yet figured out how to draw a representative sample of Internet users. Statisticians make a primary distinction between two types of samples. Probability samples give everyone in the population a known chance of being included. This is what allows us to use mathematical theorems to generalize confidently from our sample to the larger population, to calculate the odds that our sample accurately reflects the public and to quantify a margin of error.
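To make the contrast concrete, here is a minimal sketch of the textbook 95 percent margin-of-error calculation that a probability sample supports (the function name and the illustrative sample size are ours, not from any particular poll):

```python
import math

# Standard margin of error for a simple random (probability) sample.
# This is the calculation that nonprobability online panels cannot
# legitimately make, because no one's chance of inclusion is known.
def margin_of_error(p, n, z=1.96):
    """p: observed proportion, n: sample size, z: 1.96 for 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person probability sample split 50/50 carries roughly a
# plus-or-minus 3 point margin of error.
print(round(margin_of_error(0.5, 1000) * 100, 1))  # prints 3.1
```

The formula depends only on the sample size and the observed proportion, which is exactly why it breaks down when the probability of selection is unknown.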

Almost all online election polling is done with nonprobability samples. These are largely unproven methodologically, and as a task force of the American Association for Public Opinion Research has pointed out, it is impossible to calculate a margin of error on such surveys. What they have going for them is that they are very inexpensive, and this has attracted a number of new survey firms to the game. We saw a lot more of them in the 2014 midterm congressional elections, and in Israel and Britain, where they were heavily relied on. We will see still more of them in 2016.

The other big problem with election polling, though not a new one, is that survey respondents overstate their likelihood of voting. It is not uncommon for 60 percent to report that they definitely plan to vote in an election in which only 40 percent will actually turn out. Pollsters have to guess, in effect, who will actually vote, and organizations construct “likely voter” scales from respondents’ answers to perhaps half a dozen questions, including how interested they are in the election, how much they care who wins, their past voting history and their reported likelihood of voting in this particular election. Unfortunately, research shows there is no magic-bullet question, or set of questions, that correctly predicts who will vote, leaving different polling organizations with different models of who will turn out.
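The kind of cutoff-style scale described above can be sketched in a few lines. Everything here is hypothetical (the question names, the equal weights, the cutoff rule); real pollsters' models differ, which is precisely the point:

```python
# A minimal sketch of a cutoff-style "likely voter" scale.
# The questions, weights, and cutoff rule are illustrative assumptions,
# not any polling organization's actual model.
def likely_voters(respondents, expected_turnout):
    """Score each respondent on engagement questions (1 point each),
    then keep the top fraction matching the expected turnout rate."""
    def score(r):
        return (r["interest"]          # highly interested in the election
                + r["cares_who_wins"]  # says they care who wins
                + r["voted_last_time"] # voted in the previous election
                + r["certain_to_vote"])# says they will definitely vote
    ranked = sorted(respondents, key=score, reverse=True)
    cutoff = round(len(ranked) * expected_turnout)
    return ranked[:cutoff]

sample = [
    {"interest": 1, "cares_who_wins": 1, "voted_last_time": 1, "certain_to_vote": 1},
    {"interest": 0, "cares_who_wins": 1, "voted_last_time": 0, "certain_to_vote": 1},
    {"interest": 0, "cares_who_wins": 0, "voted_last_time": 0, "certain_to_vote": 1},
    {"interest": 1, "cares_who_wins": 1, "voted_last_time": 1, "certain_to_vote": 0},
]
# Expecting 50 percent turnout keeps only the two highest scorers.
print(len(likely_voters(sample, 0.5)))  # prints 2
```

Note that the model's output depends directly on the expected turnout rate fed into it, which is the weakness the next paragraphs describe: that rate is itself a guess.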

This has become a bigger problem lately. Scott Keeter, a former colleague of mine who is now the director of survey research at Pew, told me that “as coverage has shrunk and nonresponse has grown, forecasting who will turn out has become more difficult, especially in sub-presidential elections. So accuracy in polling slowly shifts from science to art.”

The problem here, of course, is that actual turnout is unknown until the election is over. Overestimating turnout is likely one of the reasons the 2014 polling underestimated Republican strength. Turnout in that midterm election was the lowest since World War II; fewer than 40 percent of eligible voters cast ballots. Since Democrats are on average less well educated and less affluent than Republicans, and therefore less likely to vote, a low-turnout electorate skews Republican: the occasional voters who stayed home were disproportionately Democratic. And of course we don’t know what to expect for the general election in 2016.

So what’s the solution for election polling? There isn’t one. Our old paradigm has broken down, and we haven’t figured out how to replace it. Political polling has gotten less accurate as a result, and it’s not going to be fixed in time for 2016. We’ll have to go through a period of experimentation to see what works, and how to better hit a moving target.