Percentage of two-party vote

UW-Madison doctoral candidate Brad Jones created a model to estimate the true levels of support in the polls for Gov. Scott Walker and challenger Mary Burke in the 2014 Wisconsin gubernatorial race. Results are based on 25,000 simulations. The shaded areas show the middle 90 percent of the simulated values, reflecting the uncertainty around the estimate. See below for the methodology.

The model accounts only for support for Burke and Walker, not for third-party candidates or poll respondents who say they are undecided.

Polls in the model

| Polling organization | Dates polled | Sample | MoE | Burke | Walker | Margin |
|---|---|---|---|---|---|---|
| Public Policy Polling | 10/28-30/2014 | 1,814 LV | ±2.3 | 47 | 48 | Walker +1 |
| YouGov | 10/25-31/2014 | 1,494 LV | ±3.4 | 43 | 45 | Walker +2 |
| Marquette | 10/23-26/2014 | 1,164 LV | ±3 | 43 | 50 | Walker +7 |
| YouGov | 10/16-23/2014 | 3,308 LV | ±3 | 45 | 46 | Walker +1 |
| Pulse Opinion Research | 10/20-21/2014 | 973 LV | ±3 | 49 | 48 | Burke +1 |
| St. Norbert | 10/18-21/2014 | 551 LV | ±4.4 | 46 | 47 | Walker +1 |
| Marquette | 10/9-12/2014 | 803 RV | ±3.5 | 47 | 47 | Tied |
| Gravis Marketing | 10/3-4/2014 | 837 RV | ±3 | 46 | 50 | Walker +4 |
| YouGov | 9/20-10/1/2014 | 1,444 LV | ±3 | 49 | 48 | Burke +1 |
| Marquette | 9/25-28/2014 | 585 LV | ±4.1 | 45 | 50 | Walker +5 |
| Gravis Marketing | 9/22-23/2014 | 908 RV | ±3 | 50 | 45 | Burke +5 |
| Pulse Opinion Research | 9/15-16/2014 | 750 LV | ±4 | 46 | 48 | Walker +2 |
| Marquette | 9/11-14/2014 | 589 LV | ±4.1 | 46 | 49 | Walker +3 |
| We Ask America | 9/3/2014 | 1,170 LV | ±3 | 48 | 44 | Burke +4 |
| YouGov | 8/18-9/2/2014 | 1,473 LV | ±4 | 45 | 49 | Walker +4 |
| Marquette | 8/21-24/2014 | 609 LV | ±4.1 | 49 | 47 | Burke +2 |
| Pulse Opinion Research | 8/13-14/2014 | 750 LV | ±4 | 47 | 48 | Walker +1 |
| Gravis Marketing | 7/31-8/2/2014 | 1,346 LV | ±3 | 47 | 47 | Tied |
| Marquette | 7/17-20/2014 | 549 LV | ±4.3 | 47 | 46 | Burke +1 |
| YouGov | 7/5-24/2014 | 1,968 P | — | 46 | 47 | Walker +1 |
| Marquette | 5/15-18/2014 | 586 LV | — | 45 | 48 | Walker +3 |
| PPP | 4/17-20/2014 | 1,144 RV | ±2.9 | 45 | 48 | Walker +3 |
| Magellan Strategies | 4/14-15/2014 | 851 LV | ±3.36 | 47 | 47 | Tied |
| St. Norbert | 3/24-4/3/2014 | 401 AR | ±5 | 40 | 55 | Walker +15 |
| Marquette | 3/20-23/2014 | 556 LV | — | 44 | 49 | Walker +5 |
| Gravis Marketing | 3/17/2014 | 988 RV | ±4 | 44 | 49 | Walker +5 |
| Pulse Opinion Research | 3/10-11/2014 | 500 LV | ±4.5 | 45 | 45 | Tied |
| Marquette | 1/20-23/2014 | 569 LV | — | 40 | 52 | Walker +12 |
| Marquette | 10/21-24/2013 | 800 RV | ±3.5 | 45 | 47 | Walker +2 |
| PPP | 9/13-16/2013 | 1,180 RV | ±2.9 | 42 | 48 | Walker +6 |

MoE — margin of error; LV — likely voters; RV — registered voters; P — panelists; AR — adult residents

Pulse Opinion Research produces polls for Rasmussen Reports.

Methodology

Model-based poll averaging

Model-based poll averaging provides a principled way to combine the results of many polls over the course of an election. Individual polls are subject to many sources of error, and the model allows us to pick out the signal from the noise.

Sources of error

Sampling error: Given their relatively small sample sizes, poll results are often reported along with their "margins of error" (e.g., plus or minus 2 percentage points). The margin of error reflects the amount of variation we would expect across many similarly conducted polls under ideal conditions.1 The margin of error of a poll decreases with the square root of its sample size. This means there are diminishing returns to increasing the sample size of an opinion poll, and given the costs of administering polls, statewide election polls usually aim for between 400 and 800 respondents.
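The square-root relationship is easy to check with a few lines of arithmetic. The sketch below computes the ideal-conditions margin of error for a 50/50 proportion at several sample sizes, using the standard formula for a 95 percent confidence interval; it is a generic illustration, not part of the model described here.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion,
    in percentage points, under ideal simple-random-sampling
    conditions (100% response, perfect sampling frame)."""
    return z * math.sqrt(p * (1 - p) / n) * 100

for n in (400, 800, 1600, 3200):
    print(n, round(margin_of_error(n), 1))
# → 400 4.9
#   800 3.5
#   1600 2.5
#   3200 1.7
```

Doubling the sample from 400 to 800 shaves about 1.4 points off the margin of error, while doubling again from 1,600 to 3,200 gains less than a point, which is the diminishing return described above.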

House effects: A relatively small number of survey firms conduct the vast majority of the opinion polling in the United States. These survey houses all run their operations differently, and those differences have implications for the accuracy and reliability of the results they collect.

Non-response error: Response rates to polls of all sorts have declined dramatically since the inception of modern public opinion polling. It isn't uncommon for telephone polls to have response rates below 10 percent. Surprisingly, these abysmal response rates don't seem to bias the results of the polls terribly2, but they do violate many of the assumptions we typically make about the relationship between a poll and its theoretical margin of error.

Sampling frame error: Sampling frame error refers to the gap between the people we can actually reach and interview and the people we would like to interview. In elections, this becomes especially problematic. If we want to accurately predict the results of an election, we are most interested in the opinions of voters. Many survey organizations try to "screen out" unlikely voters from their samples by asking respondents a series of questions about their voter registration and intentions to turn out on election day. Others draw their samples from lists of registered voters. To the extent that a survey systematically misses electorally important segments of the voting population, it will produce biased results for the prediction.3

How the model works

The model that we are using to generate predictions about the gubernatorial election uses the aggregate poll results from a variety of organizations to back out the true level of support in the population. The model assumes that the true level of support in the Wisconsin electorate is a slowly moving random walk (i.e., opinion does not move dramatically from one day to the next).
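A slowly moving random walk of this kind is straightforward to simulate. The sketch below is a toy version only: the starting value and the size of the daily shock (daily_sd) are illustrative choices, not estimates from the actual model.

```python
import random

random.seed(1)  # reproducible illustration

def simulate_true_support(days, start=0.47, daily_sd=0.0025):
    """Simulate latent two-party support for one candidate as a
    Gaussian random walk: each day's value is yesterday's value
    plus a small random shock."""
    path = [start]
    for _ in range(days - 1):
        path.append(path[-1] + random.gauss(0, daily_sd))
    return path

path = simulate_true_support(180)  # roughly six months of campaign
```

Because each daily shock is tiny, the simulated opinion series drifts gradually rather than jumping, which is exactly the smoothness assumption the model encodes.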

The model also assumes that house effects are stable throughout the election. Using data from past elections (the performance of various polling organizations compared against the election results in the 2012 presidential election, the 2012 recall election and the 2010 gubernatorial race), we are able to generate estimates of any biases that are systematically associated with particular polling organizations.
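In code, applying a house-effect correction amounts to subtracting each firm's estimated lean before averaging. Everything below is hypothetical: the firm names, house-effect values and polls are invented for illustration, and a simple sample-size-weighted average stands in for the full model.

```python
# Hypothetical house effects: percentage points by which each firm's
# published Walker margin historically leaned (positive = pro-Walker).
house_effect = {"Firm A": 1.5, "Firm B": -0.8, "Firm C": 0.0}

# Hypothetical polls: (firm, published Walker margin, sample size)
polls = [
    ("Firm A", 4.0, 800),
    ("Firm B", -1.0, 1200),
    ("Firm C", 2.0, 600),
]

def adjusted_average(polls, house_effect):
    """Debias each poll by its firm's house effect, then take a
    sample-size-weighted average of the adjusted margins."""
    num = sum((margin - house_effect[firm]) * n for firm, margin, n in polls)
    den = sum(n for _, _, n in polls)
    return num / den

print(round(adjusted_average(polls, house_effect), 2))  # → 1.14
```

Note how Firm A's +4 poll contributes only +2.5 after the correction: the averaging step sees the debiased margins, not the published ones.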

Making a final prediction

Our model produces estimates of the true level of support for the candidates for every day of the election campaign. The data tells us how far opinion moves in an average day, and using that quantity, we can extrapolate forward and predict the range in which we expect opinion to fall on election day. As more polls are added closer to election day, we can narrow the prediction window.
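The narrowing prediction window follows from random-walk variance accumulating day by day. The sketch below builds a 90 percent election-day interval from a hypothetical current estimate; all of the numbers are illustrative, not outputs of the actual model.

```python
import math

def forecast_interval(today_est, today_sd, daily_sd, days_ahead, z=1.645):
    """90% interval for election-day support under a random walk.
    Variance grows linearly in the days remaining, so the interval
    half-width grows with the square root of days_ahead."""
    sd = math.sqrt(today_sd ** 2 + days_ahead * daily_sd ** 2)
    return (today_est - z * sd, today_est + z * sd)

# Hypothetical: a candidate estimated at 48% today, 30 days out.
lo, hi = forecast_interval(0.48, 0.01, 0.0025, days_ahead=30)
```

Running the same function with days_ahead=5 gives a visibly tighter interval, which is why the prediction window narrows as more polls arrive closer to election day.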

1 Ideal conditions means 100 percent response and a sampling frame that perfectly overlaps the population of interest. These conditions never obtain in the real world, and margin of error estimates should be taken with a large grain of salt.

2 See for example, Keeter, S., Miller, C., Kohut, A., Groves, R. M., & Presser, S. (2000). Consequences of reducing nonresponse in a national telephone survey. Public Opinion Quarterly, 64(2), 125-148.

3 In 2008, for example, many polling organizations did not include cell phones in their sampling frames. As younger voters were more likely to only be reachable by cell phone and these younger voters were more likely to be Obama supporters, many polls that did not include cell phones underestimated the amount of support for Obama in the electorate.