In the run-up to the 2016 US presidential election, a scuffle broke out over election forecasts. While FiveThirtyEight predicted a one-in-three chance of a Trump win, other forecasts suggested a near-certain Clinton victory. “[T]he most popular and widely quoted website out there, fivethirtyeight.com, has something tragically wrong with its presidential prediction model,” wrote Evan Cohen at Huffington Post. “If you want to put your faith in the numbers, you can relax. She’s got this,” wrote Ryan Grim on November 5, in a screed about FiveThirtyEight’s methods.

The widespread failure at predicting the outcome of the election has prompted a lot of analysis. Among the explanations is a simple one: polls are hard. The only way to know for sure how a bunch of people will vote is to hold the actual election. Everything else will involve extrapolating from a sample of people to the whole population, and that’s an imperfect process. A paper in Nature Human Behaviour this week finds some evidence for a method that might improve accuracy: instead of asking people how they plan to vote, ask them how their friends and family will.

You don't have to lie for your friends

The standard version of election polling involves asking a simple question: which candidate do you plan to vote for? But a range of other questions can be asked. One option is to ask people to express their voting intention as a probability: what is the percent chance that you would vote for a particular candidate?
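As a hypothetical sketch (the question wording, the responses, and this aggregation rule are assumptions for illustration, not details from the paper), probabilistic answers can be turned into an expected vote share by averaging them:

```python
# Hypothetical sketch: aggregating probabilistic voting intentions.
# Each value is a respondent's answer to "what is the percent chance
# you would vote for candidate A?" expressed as a fraction.
responses = [0.9, 0.1, 0.7, 0.5, 0.8, 0.2]  # made-up survey answers

# If each answer is treated as an honest probability, the expected
# vote share for candidate A is simply the mean of the responses.
expected_share = sum(responses) / len(responses)
print(f"Expected vote share for A: {expected_share:.0%}")
```

The appeal of this format is that it lets uncertain voters contribute partial information instead of forcing a binary answer.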

Another option is to ask people who they think will win the election. An analysis of 217 surveys from the 1930s onward found that these “voter expectation” questions were more accurate at predicting election results than a range of other methods. Mirta Galesic and her colleagues suggest a possible reason for that in this week’s new paper—people may know how their social circles will vote and base their predictions on that knowledge.

Why would this be better than just asking people how they, personally, plan to vote? People might be more honest about the intentions of others, Galesic and colleagues suggest. Those polled can also give information about people who weren’t contacted, which can help to reduce some of the problems that come with generalizing from a small sample to a big population.

So, Galesic and her colleagues looked at the usefulness of social circles in the 2016 US presidential election and the 2017 French presidential election. They included two new questions in large surveys, asking people how many of their social contacts they thought would vote and how many would vote for a certain candidate.
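The two social-circle questions above can be combined into turnout and vote-share estimates. The sketch below is a simplified, hypothetical aggregation (equal weighting across respondents; the paper's exact procedure is not reproduced here), with made-up numbers:

```python
# Hypothetical sketch: estimating turnout and vote share from
# social-circle responses. Each respondent reports how many of their
# (say) 10 closest contacts will vote, and how many of those will
# vote for candidate A. All numbers below are invented.
responses = [
    {"contacts": 10, "will_vote": 8, "for_a": 5},
    {"contacts": 10, "will_vote": 6, "for_a": 2},
    {"contacts": 10, "will_vote": 9, "for_a": 6},
]

total_contacts = sum(r["contacts"] for r in responses)
total_voters = sum(r["will_vote"] for r in responses)
votes_for_a = sum(r["for_a"] for r in responses)

# Turnout: fraction of reported contacts expected to vote.
turnout = total_voters / total_contacts
# Vote share: candidate A's share among expected voters.
share_a = votes_for_a / total_voters

print(f"Estimated turnout: {turnout:.0%}")           # 23/30 ≈ 77%
print(f"Estimated vote share for A: {share_a:.0%}")  # 13/23 ≈ 57%
```

Because each respondent reports on several contacts, a poll of n people effectively samples many more individuals than n, which is part of the method's claimed advantage.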

In the US, they included these questions in two different polls, which allowed them to compare the results to a variety of other questions, such as respondents' own voting intentions and who they expected to win. In the French election, they compared the standard own-intention question to a question about social circles' intentions.

More data is better data

On almost every measure, in both France and the US, the social circle questions came closer to predicting each candidate's actual vote share, and they also came closer to actual voter turnout rates. They beat even the voter-expectation question, which has a track record of accuracy stretching back to the 1930s.

Social circle questions also come with an additional advantage over voter-expectation questions. Asking people who they think will win the election doesn't allow much drilling down to the state level. Asking people who their friends are going to vote for makes it possible to predict each state individually. In the US, where the outcome isn't determined by the popular vote, that state-level resolution is essential for predicting the election outcome.

Here, the social circle question results weren't as clear-cut a win. Aggregated data from thousands of statewide polls correctly predicted 90 percent of all states' election results, whereas the social circle questions only predicted around 70 percent correctly. But when it came to swing states in particular, social circle questions did better than aggregated polls.

It’s a robust paper and a “very interesting set of results,” said Michael Traugott, a political scientist who wasn’t involved with this work. Andreas Graefe, who has researched the accuracy of voter-expectation questions, agreed that the results were a “useful contribution.”

But both Traugott and Graefe caution against getting too excited about this method on the basis of just two elections. “It’s the first set of results to show this, and it would be good to have replications,” said Traugott. Graefe added that results from two elections are a good start, but “the relative accuracy of methods varies strongly across elections.” What worked well in these two elections might not translate so well to different circumstances.

Traugott also pointed to the very small per-state sample size in these surveys, with sometimes as few as 27 people per state. This makes it difficult to assess how well these results will apply at the state level in future elections.

It's also important to think beyond accuracy, Graefe remarked. Despite the demonstrated accuracy of voter-expectation polls and the improvements that come from combining multiple methods, media outlets still tend to focus on own-intention polls. Why? Graefe doesn't know, but he suggests that the appeal may lie in their relative volatility. Voter-expectation polls have an element of boring stability, and unchanging polls don't make for compelling news. And we should also be thinking about how people understand poll results, he added: "How can we communicate forecasts in a way that people understand them better?"

Nature Human Behaviour, 2018. DOI: 10.1038/s41562-018-0302-y (About DOIs).