Polling may never have been less reliable, or more influential, than it is now.

Illustration by Matt Chase

“I am who I am,” Donald J. Trump said in August, on the eve of this season’s first G.O.P. Presidential debate, and what he meant by that was this: “I don’t have a pollster.” The word “pollster,” when it was coined, was meant as a slur, like “huckster.” That’s the way Trump uses it. Other candidates have pollsters: “They pay these guys two hundred thousand dollars a month to tell them, ‘Don’t say this, don’t say that.’ ” Trump has none: “No one tells me what to say.”

Every election is a morality play. The Candidate tries to speak to the People but is thwarted by Negative Campaigning, vilified by a Biased Media, and haunted by a War Record. I am who I am, the Candidate says, and my Opponents are flunkies. Trump makes this claim with unrivalled swagger, but citing his campaign’s lack of a pollster as proof of his character is disingenuous. The Path to Office is long. To reach the Land of Caucuses and Primaries, the Candidate must first cross the Sea of Polls. Trump is a creature of that sea.

Lately, the Sea of Polls is deeper than ever before, and darker. From the late nineteen-nineties to 2012, twelve hundred polling organizations conducted nearly thirty-seven thousand polls by making more than three billion phone calls. Most Americans refused to speak to them. This skewed results. Mitt Romney’s pollsters believed, even on the morning of the election, that Romney would win. A 2013 study—a poll—found that three out of four Americans suspect polls of bias. Presumably, there was far greater distrust among the people who refused to take the survey.

The modern public-opinion poll has been around since the Great Depression, when the response rate—the number of people who take a survey as a percentage of those who were asked—was more than ninety per cent. The participation rate—the number of people who take a survey as a percentage of the population—is far lower. Election pollsters sample only a minuscule portion of the electorate, not uncommonly something on the order of a couple of thousand people out of the more than two hundred million Americans who are eligible to vote. The promise of this work is that the sample is exquisitely representative. But the lower the response rate, the harder and more expensive it becomes to realize that promise, which requires both calling many more people and trying to correct for “non-response bias” by giving greater weight to the answers of people from demographic groups that are less likely to respond. Pollster.com’s Mark Blumenthal has recalled how, in the nineteen-eighties, when the response rate at the firm where he was working had fallen to about sixty per cent, people in his office said, “What will happen when it’s only twenty? We won’t be able to be in business!” A typical response rate is now in the single digits.

Meanwhile, polls are wielding greater influence over American elections than ever. In May, Fox News announced that, in order to participate in its first prime-time debate, hosted jointly with Facebook, Republican candidates had to “place in the top ten of an average of the five most recent national polls.” Where the candidates stood on the debate stage would also be determined by their polling numbers. (Ranking in the polls had earlier been used to exclude third-party candidates.) Scott Keeter, Pew’s director of survey research, is among the many public-opinion experts who found Fox News’s decision insupportable. “I just don’t think polling is really up to the task of deciding the field for the headliner debate,” Keeter told me. Bill McInturff doesn’t think so, either. McInturff is a co-founder of Public Opinion Strategies, the leading Republican polling organization; with its Democratic counterpart, Hart Research Associates, he conducts the NBC News/Wall Street Journal poll. “I didn’t think my job was to design polling so that Fox could pick people for a debate,” McInturff told me. Really, it’s not possible to design a poll to do that.

Even if more people could be persuaded to answer the phone, polling would still be teetering on the edge of disaster. More than forty per cent of America’s adults no longer have landlines, and the 1991 Telephone Consumer Protection Act bans autodialling to cell phones. (The law applies both to public-opinion polling, a billion-dollar-a-year industry, and to market research, a twenty-billion-dollar-a-year industry.) This summer, Gallup, Inc., agreed to pay twelve million dollars to settle a class-action lawsuit filed on behalf of everyone in the United States who, between 2009 and 2013, received an unbidden cell-phone call from the company seeking an opinion about politics. (Gallup denies any wrongdoing.) In June, the F.C.C. issued a ruling reaffirming and strengthening the prohibition on random autodialling to cell phones. During congressional hearings, Greg Walden, a Republican from Oregon, who is the chair of the House Subcommittee on Communications and Technology, asked the F.C.C.’s chairman, Tom Wheeler, if the ruling meant that pollsters would go “the way of blacksmiths.” “Well,” Wheeler replied, “they have been, right?”

Internet pollsters have not replaced them. Using methods designed for knocking on doors to measure public opinion on the Internet is like trying to shoe a horse with your operating system. Internet pollsters can’t call you; they have to wait for you to come to them. Not everyone uses the Internet, and, at the moment, the people who do, and who complete online surveys, are younger and leftier than people who don’t, while people who have landlines, and who answer the phone, are older and more conservative than people who don’t. Some pollsters, both here and around the world, rely on a combination of telephone and Internet polling; the trick is to figure out just the right mix. So far, it isn’t working. In Israel this March, polls failed to predict Benjamin Netanyahu’s victory. In May in the U.K., every major national poll failed to forecast the Conservative Party’s win.

“It’s a little crazy to me that people are still using the same tools that were used in the nineteen-thirties,” Dan Wagner told me when I asked him about the future of polling. Wagner was the chief analytics officer on the 2012 Obama campaign and is the C.E.O. of Civis Analytics, a data-science technology and advisory firm. Companies like Civis have been collecting information about you and people like you in order to measure public opinion and, among other things, forecast elections by building predictive models and running simulations to determine what issues you and people like you care about, what kind of candidate you’d give money to, whether you’re likely to turn out on Election Day, and how you’ll vote. They might call you, but they don’t need to.

Still, data science can’t solve the biggest problem with polling, because that problem is neither methodological nor technological. It’s political. Pollsters rose to prominence by claiming that measuring public opinion is good for democracy. But what if it’s bad?

A “poll” used to mean the top of your head. Ophelia says of Polonius, “His beard as white as snow: All flaxen was his poll.” When voting involved assembling (all in favor of Smith stand here, all in favor of Jones over there), counting votes required counting heads; that is, counting polls. Eventually, a “poll” came to mean the count itself. By the nineteenth century, to vote was to go “to the polls,” where, more and more, voting was done on paper. Ballots were often printed in newspapers: you’d cut one out and bring it with you. With the turn to the secret ballot, beginning in the eighteen-eighties, the government began supplying the ballots, but newspapers kept printing them; they’d use them to conduct their own polls, called “straw polls.” Before the election, you’d cut out your ballot and mail it to the newspaper, which would make a prediction. Political parties conducted straw polls, too. That’s one of the ways the political machine worked.