Exit polls

An exit poll is a poll taken of voters as they leave their polling places, asking them who they voted for. Unlike pre-election polls, which are meant to predict the vote count, exit polls attempt to match the true vote by asking people who actually did vote. A sufficiently large, random sample of voters (for example, every 5th voter to walk out) is interviewed. In theory, this should make exit polls accurate representations of the vote, but it isn't always that simple.
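The "every 5th voter" approach is known as systematic sampling. As a rough illustration of the idea (not Edison Research's actual procedure, and with purely hypothetical numbers), the selection logic looks like this:

```python
# Illustrative sketch of systematic exit-poll sampling (every k-th voter).
# This is NOT any pollster's actual procedure, just the general concept.

def systematic_sample(voters, k, start=0):
    """Select every k-th voter from the stream, beginning at index `start`."""
    return voters[start::k]

# Hypothetical stream of 100 voters leaving a precinct, identified by number.
voters = list(range(1, 101))
interviewed = systematic_sample(voters, k=5)

print(len(interviewed))   # 20 of 100 voters approached
```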





In the US, exit polls are conducted by the National Election Pool (NEP), a coalition of several major media outlets: ABC News, the Associated Press, CBS News, CNN, Fox News, and NBC News. They contract with Edison Research, which actually conducts the poll. News organizations use the exit polls to add to their election coverage, projecting how races will go and deciding whether to call them.





Joe Lenski, the executive vice president of Edison, explained the process of exit polling in a Washington Post interview. They start by picking a series of sample precincts and sending people there to interview voters. The response rate from voters is about 40-50%, fairly good compared to other types of polling. Interview results are reported three times per day: late morning, afternoon, and at poll closing. The results are weighted by the demographics recorded and by turnout in the various precincts. Once the official election results come out, exit polls are forced ("adjusted") to match them.
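Edison's exact weighting method isn't public, but the general idea of demographic weighting can be sketched as follows. All group names and numbers here are hypothetical, chosen only to show the mechanics:

```python
# Hedged sketch of demographic weighting for exit-poll responses.
# The categories and target shares below are invented, not Edison's.

def demographic_weights(sample_counts, population_shares):
    """Weight each group so its weighted share matches its known population share."""
    total = sum(sample_counts.values())
    return {g: population_shares[g] / (sample_counts[g] / total)
            for g in sample_counts}

# Suppose women were 60% of respondents but only 52% of actual turnout.
sample_counts = {"women": 600, "men": 400}
population_shares = {"women": 0.52, "men": 0.48}

weights = demographic_weights(sample_counts, population_shares)
# Each woman's response now counts slightly less than 1, each man's more than 1.
print(weights)
```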





Provided that the interview is a representative sample of the electorate, the exit polls should match how people voted fairly well. Ensuring a representative sample begins with the precinct selection, picking ones that properly represent the voting population. Certain demographics are also weighted differently to account for selection bias, such as some groups being more likely to respond than others. All of this leads to weighting results as more data comes in, as described above. Then there's the adjustment after polls close to match the official results.





Adjusting exit polls based on official results is criticized by people like Richard Charnin, who believes that it's wrong to assume the official results are correct. Simply mimicking the official results, in Charnin's view, can mask errors in the electoral process that the exit polls might have revealed. Lenski, however, argues that the purpose of exit polls isn't just predictions, but having accurate demographics of who voted, and thus, it makes sense to adjust them based on official counts. Since the exit polls include demographic data, adjusting them can reveal the demographic breakdowns that the official counts don't record.









The adjustment process itself is also unclear. Lenski didn't reveal how it was done in his interview, saying only that the exit polls are adjusted to meet official results. On its surface, Lenski's argument makes sense, but adjustments can lead to counterintuitive results. In the 2004 election, 122.3 million people voted overall, and the adjusted exit poll showed that 43% of the electorate (52.6 million people) had voted for George W. Bush in 2000. But Bush only had 50.5 million votes total in 2000, and that's not counting his supporters who died or didn't return to vote for him. Forcing the exit poll to match the official 2004 vote created an impossible result. John Kerry was, in fact, leading Bush in the unadjusted exit poll. The conclusion Charnin draws is that to match fraudulent official results, the exit poll had to be adjusted in a way that defied reality.

Richard Charnin and Doug Hatlem have also observed that throughout the night, the number of respondents to an exit poll sometimes grows until it matches the official vote. This would indicate that Edison either invents respondents it didn't interview to force the exit polls to meet official results, or has data coming in after polls close that always matches the official vote exactly.
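The arithmetic behind Charnin's impossibility claim is easy to check directly from the figures above:

```python
# Check of the 2004 adjusted-exit-poll arithmetic described above.
total_2004_votes = 122.3e6   # total 2004 turnout
bush2000_share   = 0.43      # adjusted exit poll: share claiming a Bush vote in 2000
bush2000_actual  = 50.5e6    # Bush's actual 2000 vote total

implied_bush2000_voters = total_2004_votes * bush2000_share
print(round(implied_bush2000_voters / 1e6, 1))   # 52.6 million

# More people (after adjustment) claimed to be returning Bush 2000 voters
# than voted for Bush in 2000 at all -- before even accounting for deaths
# and no-shows among his 2000 supporters.
excess = implied_bush2000_voters - bush2000_actual
print(round(excess / 1e6, 1))                    # 2.1 million impossible voters
```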





Most people, no matter what their political affiliation is, should be suspicious of at least 2.1 million Bush voters appearing out of thin air in the exit poll. In order for the official results to make sense, the NEP had to alter demographics to the point of impossibility. One single example of an inconsistency, however, doesn't prove that election fraud is systemic, or even happening at all. But there are many other suspicious examples of discrepancies between exit polls and official results.







Past examples of discrepancies

These examples aren't meant to prove beyond a doubt that election fraud occurred. The hope is that anyone who looks at them will realize there's more than meets the eye in our elections. Enough strange occurrences in an election at least merit investigation, and when they consistently happen alongside exit poll discrepancies, those discrepancies deserve a closer look too.

US presidential election (2000)

Ukrainian presidential election (2004)

Ukraine held a presidential election in 2004 to choose a successor to outgoing president Leonid Kuchma. No candidate managed to break 50% in the first round of voting, so the top two candidates, Prime Minister Victor Yanukovych and opposition leader Victor Yushchenko, faced each other in a run-off election.









Exit polls coming out of the run-off gave Yushchenko an 11% lead, but according to official results, Yanukovych won by 3%. Belief that the election was rigged resulted in the Orange Revolution, a series of protests and demonstrations in Ukraine. Several irregularities were recorded, with international election observers for the OSCE saying that the election "did not meet international standards". Between the first and second rounds, turnout in regions supporting Yushchenko stayed the same or went down, while turnout in regions supporting Yanukovych dramatically increased, sometimes to greater than 100%. Observers alleged that supporters of Yanukovych voted absentee multiple times. Other observed issues included ballot stuffing, voter intimidation, and massive new voter registrations right before the election.





International reaction was swift and against Yanukovych. The EU condemned the results as fraudulent, as did US President Bush and Secretary of State Colin Powell. All of them refused to accept the election's legitimacy and called for an investigation. The Ukrainian Supreme Court invalidated the election results, and a re-run was held. This time, Yushchenko won with 52% of the vote, beating Yanukovych by 7.81%. International observers said that the re-run was fairer than the original run-off election.





While it took place outside the US, the 2004 Ukrainian election shows discrepancies between exit polls and official results reflecting real election fraud. International observers found several instances of fraud and refused to accept Yanukovych's apparent victory over Yushchenko, which was a complete reversal of what the exit polls projected. Pressure from the EU and US, combined with protests in Ukraine, led to the results being overturned and a fairer election being overseen. That election had Yushchenko winning, just as the exit polls projected the first time.





US presidential election (2004)

President George W. Bush was also an incumbent seeking reelection in 2004. He ran against John Kerry and, with the electoral college math, won by a narrow margin of 286 to 251. The closeness of the race meant that a single swing state flipping could have changed the winner, and one swing state in particular had election controversy surrounding it: Ohio. Ohio, won by Bush, was worth 20 electoral votes, and if Kerry had won it instead, the final count would have been 271 to 266 for Kerry.









Liberal commentators, such as Thom Hartmann of The Big Picture RT, claimed that a GOP-tied company hijacked the vote tabulation for Ohio. Exit polls from Ohio gave Kerry a lead over Bush, and for much of election night, the Ohio vote matched the exit polls. At 11:14 PM, the vote counting servers went down and were rerouted to a backup system in Chattanooga, Tennessee. The company handling the backup, SmarTech, did business for the Republican Party, and was alleged to have rigged the vote for Bush. After the tabulation came back online, Kerry's lead had been lost to Bush, a clear disparity with the exit polls.

Looking at hosting records for the reporting site confirms that SmarTech became the results provider on election night. The website is normally hosted by OARNet, an Ohio-based company that provides Internet services to the Ohio government. But around November 3 (or late November 2, which was election day), SmarTech became the provider for the Ohio elections reporting. Could it have used that period to influence the 2004 election?





This controversy became the subject of a court case, King Lincoln Bronzeville v. Blackwell, filed against Ohio Secretary of State Ken Blackwell. A document in the case showed that Blackwell contracted with GovTech Solutions, owned by Michael Connell, to develop the election night reporting system. Connell, an IT manager for Bush and Karl Rove, brought in both SmarTech and Triad (which made the county tabulators) for the project. SmarTech was only meant to provide a backup system if Ohio's primary system crashed, but Connell testified that to the best of his knowledge, "it was not a failover situation", as did Blackwell's IT specialist, who had been sent home that night at 9 PM.













Stephen Spoonamore, a network security engineer, declared in a sworn affidavit that he believed the Ohio vote was altered with a man-in-the-middle attack, where a computer inserts itself between two other computers to intercept and modify the data sent between them. Around 11 PM, several counties suddenly began reporting radically different ratios of Bush to Kerry votes, in a way that favored Bush. Spoonamore stated that this shift could most likely be explained by a malicious system inserted between the county tabulators and the Ohio Secretary of State's office. After analyzing how SmarTech's backup system was set up, he affirmed that the company could have altered the Ohio results however it wanted.

An earlier affidavit from Spoonamore explains how the attack would have worked. Election night reporting involved the central tabulators from each county (Computer A) transmitting their results over the Internet to a statewide tabulator in the Secretary of State's office (Computer B), which added up all the votes from all the counties. In the network setup, however, a third computer, controlled by SmarTech, was placed between A and B. This computer would have been able to see vote totals as they came in, and alter them before passing them along to the Secretary of State's office.

Critics have contended that this attack would only have impacted the unofficial election night results, since the counties had to fax their tabulator results to the state offices as the official ones. But Spoonamore argued that since SmarTech sat in between the county and state tabulators, it also had the potential to remotely access the county tabulators and change their results. If the county tabulator results were modified, the faxed results would also be altered. This remote access would require the county tabulators to be purposefully modified for it. Interestingly, Triad removed hard drives from them soon after, potentially an attempt to hide evidence of a man-in-the-middle setup.
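As a conceptual illustration only (no claim about SmarTech's actual software, and all vote numbers invented), the man-in-the-middle topology Spoonamore describes is simple to express: the middle box receives each county's totals, optionally rewrites them, and forwards the result to the state tabulator.

```python
# Conceptual sketch of the man-in-the-middle topology described above.
# Purely illustrative -- not based on any real election software or data.

def county_tabulator(county, totals):
    """Computer A: a county reports its raw totals."""
    return {"county": county, "totals": totals}

def mitm_relay(report, flip=0):
    """The middle computer: sees each report and may alter it in transit."""
    t = dict(report["totals"])
    t["bush"]  += flip          # shift `flip` votes from Kerry to Bush
    t["kerry"] -= flip
    return {"county": report["county"], "totals": t}

def state_tabulator(reports):
    """Computer B: the Secretary of State's office sums what it receives."""
    sums = {"bush": 0, "kerry": 0}
    for r in reports:
        for cand, n in r["totals"].items():
            sums[cand] += n
    return sums

honest = [county_tabulator("A", {"bush": 900, "kerry": 1100}),
          county_tabulator("B", {"bush": 700, "kerry": 800})]
relayed = [mitm_relay(r, flip=150) for r in honest]

print(state_tabulator(honest))   # {'bush': 1600, 'kerry': 1900}
print(state_tabulator(relayed))  # {'bush': 1900, 'kerry': 1600}
```

Note that the altered totals still add up to the same overall turnout, which is part of why such an attack would be hard to spot from the reported numbers alone.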









Dr. Richard Hayes Phillips performed an independent analysis of the 2004 election in his book "Witness to a Crime". He examined thousands of ballots and voting books to ascertain what happened. Phillips found 0 irregularities in vote counts before the rerouting to SmarTech at 11:14 PM, but of the 14 counties reported afterwards, every single one had irregularities favoring Bush.

Phillips also filed an affidavit in the King Lincoln case, listing dozens of irregularities that occurred during the election. He concluded by saying, "Having personally examined 126,000 ballots, 127 poll books, and 141 voter signature books from 18 counties in Ohio, and having examined many other election records as well, it is my conclusion that there is so much evidence of ballot alteration, ballot substitution, ballot box stuffing, ballot destruction, vote switching, tabulator rigging, and old-fashioned voter suppression, that the results of the 2004 presidential election, in all likelihood, have been reversed".





None of this proves that election fraud took place, but the 2004 election was questionable enough to be suspect. SmarTech, a partisan GOP company, was contracted to provide a backup for Ohio's vote tabulation and results reporting. Their systems were connected to county and state level tabulators, such that they could have potentially altered the results if they desired. And at 11:14 PM, Ohio's vote tabulators were run through SmarTech, at which point Kerry's lead shown by the exit polls flipped to Bush. The only independent analysis showed no irregularities before 11:14 PM, and many favoring Bush afterwards. The 2004 US election, thus, is another example of exit poll discrepancies pointing to election irregularities.







"Red shifting" in the US

The exit poll discrepancies in 2000 and 2004 reflect a larger phenomenon called "red shifting," as noted by Richard Charnin and others. In presidential elections from 1988 to 2008, Republicans have overperformed their exit poll margins in most of the official state results, often leading to a victory that the exit poll didn't show. Many of the official results exceeded the margin of error of the exit poll, and nearly all of these shifts were in favor of Republicans, hence the term "red shift". Unless there's a systematic polling bias against Republicans, simple statistics put the probability of all these shifts occurring by chance at virtually 0%.









Exit polling, as described above, is designed to choose a random sample of voters representing the voting population. Because of this, its probability density is assumed to fit a normal distribution, often referred to by its appearance as a "bell curve." The poll result has the highest probability (top of the curve), and the probability of a deviation drops off equally in both directions. Polls, of course, can't be absolutely certain that they represent the population, but a normal distribution allows one to be 95% confident that they're accurate within a certain margin. Given fair elections, any deviations would be caused by random statistical errors, meaning that a single candidate/party wouldn't consistently benefit.

Figure 2: 2004 national exit poll, on a normal curve. John Kerry got 51.8% in the exit poll, with a margin of error of 3.14%. There is a 95% chance his true share was inside that margin. More importantly, there is an equal chance of his true share being higher or lower than 51.8%. Kerry's official share was 48.3%, outside the error margin.
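Figure 2's claim can be checked directly: with Kerry's 51.8% exit-poll share and the stated 3.14% margin of error, the official 48.3% falls outside the 95% interval. (The margin itself depends on sample size and cluster design effects that aren't given here, so it is taken as stated.)

```python
# Check whether the official result falls inside the exit poll's 95% interval.
poll_share = 51.8   # Kerry's share in the 2004 national exit poll (%)
moe        = 3.14   # stated margin of error (%)
official   = 48.3   # Kerry's official share (%)

low, high = poll_share - moe, poll_share + moe
print((low, high))                 # roughly (48.66, 54.94)
print(low <= official <= high)     # False -- outside the margin of error
```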







Of the 274 state elections, from 1988 to 2008, with exit poll data available, the official Republican vote exceeded the exit polls in 226 of them. Deviations from exit poll results shouldn't favor one party or the other. In a fair electoral system, about 50% of the deviations should help the Republicans, and the other 50% should help the Democrats. 226 of 274 going to the advantage of Republicans is just like 226 of 274 coin flips returning heads. We can calculate the probability of that with the binomial distribution: BINOMDIST(226, 274, 0.5, FALSE) = 3.43 * 10^-29, virtually a 0% probability.
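The spreadsheet formula BINOMDIST(226, 274, 0.5, FALSE) is just the binomial probability mass function, which can be reproduced with Python's standard library:

```python
import math

def binom_pmf(k, n, p=0.5):
    """P(exactly k successes in n trials) -- same as BINOMDIST(k, n, p, FALSE)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Probability that exactly 226 of 274 fifty-fifty deviations favor Republicans.
prob = binom_pmf(226, 274)
print(f"{prob:.2e}")   # on the order of 10^-29, matching the figure above
```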





There were 57 state elections where the winner differed between the exit poll and the official results. 55 of those were Republicans winning when the exit poll said that they lost, and only 2 were Democrats winning when the exit poll said they lost. Like above, the probability of exit poll results flipping for either party should be 50-50 in a fair election. BINOMDIST(55, 57, 0.5, FALSE) = 1.11 * 10^-14, another 0% probability.
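The same calculation applies to the 55-of-57 flipped winners:

```python
import math

# BINOMDIST(55, 57, 0.5, FALSE): 55 of 57 exit-poll winner flips favoring Republicans.
prob = math.comb(57, 55) * 0.5**57
print(f"{prob:.2e}")   # 1.11e-14
```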





The margin of error was exceeded in 126 of the 274 polls, 46% of the time, which shouldn't happen given 95% confidence in the results. Nor should 123 of the 126 occurrences favor Republicans, with only 3 favoring Democrats. The margin of error should only be exceeded 5% of the time, or in 14 of the polls, and the advantage should again divide evenly between Republicans and Democrats. The Poisson distribution determines how likely it is that a certain number of events occurs when the number that should occur is known. 274 * 5% * 50% of the polls, or 7 of them, should exceed the margin of error in a way that favors Republicans, but 123 actually did. POISSON(123, 7, FALSE) is once again 0%.
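POISSON(123, 7, FALSE) is the Poisson probability of seeing exactly 123 events when 7 are expected; the standard library suffices here too:

```python
import math

def poisson_pmf(k, lam):
    """P(exactly k events when lam are expected) -- same as POISSON(k, lam, FALSE)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# 274 polls * 5% outside the margin * 50% favoring Republicans ≈ 7 expected,
# versus the 123 red-shifted exceedances actually observed.
prob = poisson_pmf(123, 7)
print(prob < 1e-100)   # True -- effectively zero
```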





Simple statistical analysis of these discrepancies implies that the official results are heavily skewed towards Republicans. The probability that so many exit polls would flip to Republicans in the way they did by random chance is essentially zero, suggesting official results altered in Republicans' favor. But, of course, there may simply be sampling error in the exit polling that consistently underestimates Republican support. If the entire method of exit polling is systematically flawed in that way, a consistent red shift becomes less suspicious.











The common theory about sampling error is that Republicans are less willing to respond to exit polls than Democrats, skewing the polls to the left. Evidence for this theory, however, is lacking. Edison proposed the idea in 2004, but US Count Votes' analysis found the theory implausible, with Edison's data possibly even suggesting that Democrats had lower response rates. And since 1988, party identification for Democrats and Republicans has remained quite close. It also stands to reason that since Edison has had decades to perfect their practice, it's unlikely that exit pollsters would notice a red shift and continue to underweight Republican voters.

Even if one isn't entirely convinced by exit polling, the statistical improbability of red shifting should be enough to make anyone reconsider the official counts. Statistical irregularities to the degree described above make fraud a legitimate possibility, worthy of investigation.

US Democratic primary (2008)

Back in 2008, Hillary Clinton, thought to be a shoo-in for the Democratic nomination, faced an unexpectedly strong challenge from Barack Obama. Obama managed to win Iowa over Clinton by 9 points, leaving her with a surprising 3rd place finish. Pundits began to declare Clinton's campaign over, expecting Obama to win New Hampshire and ride his momentum to the nomination.

Indeed, most pre-election polls showed Obama ahead by about 8 points, but on election night, Hillary Clinton won by 3 points in a stunning upset. Clinton's upset, however, was not only a complete reversal of pre-election polls, but a complete reversal of unadjusted exit polls, where Obama was ahead by around 8 points.

It's not unreasonable to think that Hillary Clinton simply had a late surge that pre-election polls didn't detect. The nomination race doesn't happen in a vacuum, and what candidates do or say can easily change people's opinions. In the days leading up to the race, Clinton issued a passionate defense of her candidacy, and a show of emotion could have garnered her last-minute sympathy from voters. But that fails to explain the major discrepancy with the exit polls.

One potential argument against the validity of the exit polls is the Bradley effect. It posits that people will claim to support a black candidate so as not to seem racist, despite having chosen someone else. Many pundits speculated on whether the Bradley effect had something to do with Clinton's apparent upset, overinflating Obama's standing in the polls. Analysis of the Bradley effect at the time, however, concluded that it had mostly been gone since 1996. Even if the Bradley effect returned in 2008 for Obama, an 11-point swing based on it is pretty unlikely. And clearly, the Bradley effect didn't ultimately stop Obama from winning the nomination.

A suspicious occurrence that popped up in New Hampshire was discrepancies between hand-counted and machine-counted ballots. Clinton won machine-counted votes 40% to 36%, while Obama won hand-counted votes 39% to 35%. Interestingly, the margins in both machine and hand counts were roughly 4%, just with the winner reversed. The fact that two different means of counting votes produced different, almost inverted, results is strange, especially when one of them (machine counting) relies on black box technology with known security issues.

A possible explanation for this discrepancy is that the precincts using hand-counting instead of machine-counting happened to favor Obama. Since Obama did better in rural areas than big cities, and the more rural towns were more likely to use hand counts, that is often the explanation given. But the discrepancy existed even within just rural towns. Obama doing better in rural towns isn't suspicious, but doing better in rural towns except when machine counts get used is. Machine and hand counts should be a close match, and they aren't. A further analysis shows that no demographic variable can account for the discrepancy, implying the use of voting machines is the deciding factor.

Also telling is the fact that these discrepancies mainly affected only Obama and Clinton, with overall machine vs. hand discrepancies around 3-5%. No other Democratic candidate had similar discrepancies; for almost all of them, it was less than 1%. Since Clinton and Obama had more votes than their competitors, it would have actually taken more differing votes to make a noticeable impact in the percentages, making it all the more strange that such a discrepancy exists.

Once again, this doesn't outright prove election fraud occurred, but it should cast reasonable doubt on the results. Obama was ahead in the polls leading up to the election by about 8 points, and ahead in the exit poll by about 8 points, but lost to Clinton officially by 3 points. The difference came down to discrepancies between machine counts and hand counts, which were not due to demographics, and did not occur for other candidates. Obama actually led the hand count by 4 points, which puts him closer to the pre-election and exit polls. Exit poll discrepancies and suspicious election circumstances again go hand in hand.

US Democratic primary (2016)

On the night of April 19, the day of the New York primary, CNN's exit polling showed Hillary Clinton leading Bernie Sanders by only 52% to 48%. The close race was surely a relief for Bernie supporters, telling them that they hadn't lost by a blowout in New York. But the actual results gave Clinton a much more substantial lead, 58% to 42%. This kind of discrepancy between exit polls and official results wasn't just limited to New York. Richard Charnin, a mathematician and blogger, documented discrepancies in several states: the Super Tuesday states, Michigan, and the March 15 states. In MA and IL, the winner flipped from Bernie in the exit polls to Hillary in the official results. In OH, a 4-point loss in the exit polls became a 14-point loss officially. In MI, Bernie did better in the exit polls than what was reported officially. And in multiple Southern states, Clinton far outperformed the pre-election and exit polls.

So what happened? Were the exit polls wrong? Charnin claims it's the opposite: the exit polls tell the true story, and the official results are wrong. A lot of Bernie supporters are now calling many of the elections fraudulent. I find them highly suspicious myself, and I am a Bernie supporter, but more pressing to me is the integrity of our democracy. If the exit polls reveal a pattern of vote counts being manipulated from how people actually voted, the foundation of our democracy is at stake. This is quite a serious issue, one that deserves a clear look.

That's often where we end up when we talk about exit poll discrepancies: a lot of circumstantial evidence, but no real smoking guns. Indeed, nothing written above proves that fraud actually happened in any of the elections. But it does show that the quality of elections shouldn't be taken for granted.

Enough questionable circumstances should make us all skeptical, and when they're so commonly linked to exit poll discrepancies, perhaps those discrepancies can tell us something after all. Suspicion of fraud, though, doesn't explain how it could happen. In the next article, we'll see how it could.