But in truth, France wasn’t a departure. The polls in the United States and Britain generally worked well. As a new report this past week from the American Association for Public Opinion Research pointed out, national surveys in the U.S. campaign “were generally correct and accurate by historical standards.” Although polling faces real challenges, nobody has repealed the laws of statistics: When polling is done well, it continues to produce reliable results.

Here, nationwide polls accurately predicted Hillary Clinton’s margin of victory in the popular vote. She won by 2.1 percentage points, while the average polling margin on Election Day was Clinton by 3.2 points. In Britain, our poll for a think tank showed a narrow initial preference to “remain” in the European Union across likely voters but an advantage for “leave” — the winning result — after voters listened to arguments on both sides.

Yes, polling in recent years has had to grapple with major challenges, from low response rates to non-response bias, in which some groups choose not to participate (there is evidence of this to some extent among Donald Trump voters). But none of these problems means that the basic science behind survey research has failed or that we can no longer produce high-quality, accurate data. The problem is that too many people are misusing and abusing polls — in three ways in particular.

First, many people treat polls as predictions instead of snapshots in time based on a set of assumptions about who will turn out to vote. Ron Fournier, the publisher of Crain’s Detroit Business, for instance, has argued that Nate Silver got the election wrong because he awarded Trump only a 34 percent chance of winning. But a 34 percent chance is not a forecast of defeat; roughly one-in-three events happen all the time. Pollsters make judgments about the composition of the electorate based on historical experience and levels of interest in the current election to pull a list of voters to interview. But if those assumptions are wrong, then the polls will be wrong on Election Day. The polls in the Midwest that predicted a Clinton victory generally did not anticipate that, in key industrial states, more rural and exurban white working-class voters would vote than in past presidential contests.

The tendency of elites to underestimate working-class anger is a real and global problem. The United States and most other major democracies are grappling with intense and historic levels of public grievance related to slow growth; income inequality; and resentments over trade, technology and immigration. That has made voter turnout among specific blocs less predictable worldwide. But that’s not a problem with survey research methodology. Rather, it puts a bigger premium on listening to voters and picking up on who is particularly angry or energized.

Second, the rising cost of collecting high-quality data — because of declining response rates and the increased use of cellphones — has led many researchers to cut corners. Rather than spend more to address such problems, some organizations skimp on practices such as call-backs (to people who didn’t answer) or cluster sampling (to make sure small geographic areas are represented proportionately). They may also use cheap and sometimes unreliable data-collection methods, such as opt-in online panels or push-button polling (interactive voice response), that systematically exclude respondents who primarily use mobile devices.

Indeed, according to “Shattered,” the new book by Jonathan Allen and Amie Parnes, the Clinton campaign relied heavily on “analytics” surveys rather than “old school polling” to track the candidate’s standing because the former were cheaper. Analytics surveys are used to gather data for building voter targeting models. They tend to have large sample sizes but skimp on common practices that make traditional polls more accurate. The book quotes a Clinton pollster acknowledging as much on election night: “Our analytics models were just really off. Time to go back to traditional polling.”

Third, good polling requires good listening. Powerful new techniques in big data modeling make it possible to segment and target voters in ways that were undreamed-of a decade ago. Yet voting is an inherently human activity that defies being completely reduced to formulas. The best polling has always been accompanied by directly listening to people, face to face, in their own words.

Many campaigns and media organizations miss opportunities or succumb to polling errors because they do not invest in simply listening to voters. Focus groups are invaluable, as are other ways of listening, such as conducting in-depth interviews, reading online discussion boards or even systematically monitoring conversations on social media.

Open-ended listening can reveal the need to reword survey questions; for example, our recent focus groups suggest that “globalization” is all but meaningless to many voters. Open listening can cast doubt on things that may have become conventional wisdom in a campaign; for instance, we have worked on many races where the “front-runner” was actually quite weak, but that was more evident in focus groups than in standard survey measures of favorability or job performance.

Direct listening can also show that not all polling numbers are created equal: While we did not poll for last year’s Clinton campaign, we conducted many focus groups across the country in which it was clear that voters were willing to overlook or tolerate concerns about Trump, while they could not do the same with Clinton (e.g., “I just don’t trust her”). Direct listening revealed that low favorability ratings meant different things for the two candidates. These are qualitative tactics that many media polls and campaigns skip or skimp on, partly because of the cost.