Meet the election oracles.

They foresaw the scene above; they predicted Donald Trump’s rise. They’re mostly professors, though the list also includes two pollsters who got it right and one media personality (Michael Moore). Some are economics researchers.

The lofty, slick, and respected forecasting sites – The New York Times’ The Upshot, Nate Silver’s FiveThirtyEight – got it wrong, overwhelmingly prognosticating that Hillary Clinton would win the election (although Silver was closer than others). The Upshot gave Clinton an 85% chance of winning, and most of the pundits and pollsters generally agreed.

As they headed to the polls on election day, voters faced a slew of news stories predicting a Clinton win based on polls that ended up being wrong. The reasons for the bad polling are unclear, but they range from voters hesitant to tell pollsters they liked Trump to pollsters undersampling the non-college-educated voters who broke for Trump in Rust Belt states. You can read more of the reasons here.

However, not everyone got it wrong. Some professors and pollsters proved to be election day oracles. Many of them report being ridiculed, insulted, and labeled as crazy – until the actual results came in, that is.

Here’s what you need to know:

Helmut Norpoth

PrimaryModel.com predicted the election right. Helmut Norpoth, a Stony Brook University political science professor, wrote before the election, “It is 87% to 99% certain that Donald Trump will win the presidential election on November 8, 2016; 87% if running against Hillary Clinton, 99% if against Bernie Sanders.” The model looks at primary results as well as the election cycle.

Norpoth has correctly predicted the last five presidential elections. He was so certain of the predictive powers of his model that he announced its results months before the election. Indeed, “the Primary Model predicted on March 7, 2016 that Trump would defeat Hillary Clinton with 87 percent certainty,” wrote Norpoth.

How does it work? Norpoth doesn’t use polls. He uses primaries. “As the name indicates, the Primary Model relies on presidential primaries as a predictor of the vote in the general election; it also makes use of a swing of the electoral pendulum that is useful for forecasting,” he explains. You can read more here.

Norpoth wrote, “What favors the GOP in 2016 as well, no matter if Trump is the nominee or any other Republican, is the cycle of presidential elections. After two terms of Democrat Barack Obama in the White House the electoral pendulum is poised to swing to the GOP this year.”

Norpoth added, “In a match-up between the Republican primary winner and each of the Democratic contenders, Donald Trump is predicted to defeat Hillary Clinton by 52.5% to 47.5% of the two-party vote. He would defeat Bernie Sanders by 57.7% to 42.3%.”

Norpoth’s model “correctly predicted the results of every election except for one in the last 104 years,” says The Daily Caller.

Norpoth told the conservative news site Breitbart that many people thought he was ridiculous and crazy for his prediction before it turned out to be correct, and said he saw bias in polling: “I think the polls just totally misjudged the potential and the kind of support that he (Trump) engendered, and he just fell through the cracks of how they poll people,” he said. “Any time I looked at a poll, at some of the fine print about the breakdowns to see what they were weighting, I always saw a very heavy Democratic preponderance, which I thought was way off, even bigger than in 2012.”

Allan J. Lichtman

According to the New York Times, Lichtman, an American University historian, helped create “a historically based model” that predicted Trump would win back in September. The Times says the model relies on “13 true-or-false statements” that measure the strength of the incumbent party. Six or more false statements predict a change.

You can see the list of questions here.
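The tallying logic the Times describes can be sketched in a few lines. This is a minimal illustration of the six-false-keys threshold only; the key names below are placeholders, not Lichtman’s actual 13 statements.

```python
# Sketch of the "13 keys" threshold rule: each key is a true/false
# statement about the incumbent party's strength. If six or more keys
# are FALSE, the model predicts the incumbent party loses the White House.
# Key names here are generic placeholders, not Lichtman's wording.

def predict_incumbent_party(keys: dict) -> str:
    """Return 'loses' if six or more keys are false, else 'wins'."""
    false_count = sum(1 for held in keys.values() if not held)
    return "loses" if false_count >= 6 else "wins"

# Hypothetical example: keys 1-7 turned false against the incumbent party.
example_keys = {f"key_{i}": (i > 7) for i in range(1, 14)}
print(predict_incumbent_party(example_keys))  # "loses"
```

The threshold, not the individual keys, does the work: the model ignores polls entirely and asks only how many structural conditions have turned against the party in power.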

Lichtman now thinks Trump will be impeached because the ruling GOP will prefer VP Mike Pence.

Jacob Montgomery & Florian Hollenbach (Vox)

At 4 a.m. on election day, Vox predicted that a generic Republican candidate would win the race, but then decided that Trump was too unconventional a Republican to win, a discount the site called a “Trump Tax.” The size of that discount was based on what was seen in the polls.

In other words, Vox’s model got it right, but Vox didn’t go with what its own numbers were saying. The strength of the Vox model was that it used a weighted average of academic models instead of polls.

Vox noted, “Vox’s model, developed by Washington University in St. Louis’s Jacob Montgomery and Texas A&M’s Florian Hollenbach, is a weighted average of six academic forecasting models. Three of those models have Trump ahead, and three have Clinton, but the one with the best past track record, (Alan) Abramowitz’s, predicts a Trump win.”
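The ensemble idea is simple to sketch: average several models’ predicted Republican vote shares, weighting models with better track records more heavily. The model names, vote shares, and weights below are illustrative assumptions, not the actual inputs Montgomery and Hollenbach used.

```python
# Sketch of a weighted ensemble of forecasting models, the structure
# behind the Vox forecast. All numbers below are hypothetical.

def ensemble_forecast(predictions: dict, weights: dict) -> float:
    """Weighted average of each model's predicted GOP two-party vote share."""
    total_weight = sum(weights[name] for name in predictions)
    return sum(predictions[name] * weights[name] for name in predictions) / total_weight

# Hypothetical inputs: three models favor Trump (>50%), three favor Clinton,
# with the best-track-record model weighted most heavily.
predictions = {"abramowitz": 51.4, "norpoth": 52.5, "fair": 56.0,
               "model_d": 48.5, "model_e": 49.0, "model_f": 47.5}
weights = {"abramowitz": 0.3, "norpoth": 0.15, "fair": 0.1,
           "model_d": 0.15, "model_e": 0.15, "model_f": 0.15}
gop_share = ensemble_forecast(predictions, weights)
print(round(gop_share, 1))
```

Note how a split ensemble (three models each way) can still tip past 50% when the heavily weighted models lean one direction, which is roughly what Vox reported about Abramowitz’s model.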

Montgomery’s university website describes him as “an Assistant Professor in the Department of Political Science at Washington University in St. Louis. His research is in the areas of political methodology and American politics, with a special interest in political parties. He teaches courses on statistical methods and American political parties.”

Hollenbach’s website describes him as “an assistant professor at the Department of Political Science at Texas A&M University. I received my PhD in political science from Duke University. During the 2014/2015 academic year I was a fellow at the Niehaus Center for Globalization and Governance at Princeton University. My research focuses on the political economy of taxation and redistribution, specifically the development of fiscal capacity and taxation in authoritarian regimes.”

Alan Abramowitz

Abramowitz was right in one way and wrong in another, but his model picked up on Trump’s edge when others did not. Abramowitz, a political science professor at Emory University, saw Trump’s strength when others didn’t, but his model predicts the popular vote, and he predicted Trump would win it. Clinton ended up winning the popular vote, while Trump won the Electoral College.

He wrote in August, “The Time for Change forecasting model has correctly predicted the winner of the national popular vote in every presidential election since 1988. This model is based on three predictors — the incumbent president’s approval rating at midyear (late June or early July) in the Gallup Poll, the growth rate of real GDP in the second quarter of the election year, and whether the incumbent president’s party has held the White House for one term or more than one term.”
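The structure Abramowitz describes is a linear model on those three predictors. The coefficients below are illustrative placeholders chosen to show the shape of such a model, not Abramowitz’s published estimates.

```python
# Sketch of a "Time for Change"-style linear model: incumbent-party
# vote share as a function of presidential approval, Q2 GDP growth,
# and a first-term bonus. Coefficients are assumed for illustration.

def time_for_change(net_approval: float, q2_gdp_growth: float,
                    first_term_incumbent_party: bool) -> float:
    """Predicted incumbent-party share of the major-party vote."""
    intercept = 47.3     # assumed baseline share
    b_approval = 0.1     # vote-share points per point of net approval
    b_gdp = 0.5          # points per point of Q2 real GDP growth
    b_first_term = 4.3   # bonus when the party has held office only one term
    return (intercept
            + b_approval * net_approval
            + b_gdp * q2_gdp_growth
            + (b_first_term if first_term_incumbent_party else 0.0))

# Hypothetical 2016-style inputs: modestly positive approval, slow growth,
# and a party seeking a third term (so no first-term bonus).
print(round(time_for_change(2.0, 1.0, False), 1))  # 48.0
```

With numbers in this range, the missing first-term bonus alone is enough to push the incumbent party under 50%, which is the “time for change” dynamic the model is named for.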

Still, the professor correctly predicted the victor.

He concluded, “The Time for Change Model predicts a narrow victory for Donald Trump — 51.4% of the major party vote to 48.6%.” He then opined, however, that Clinton would probably win anyway because Trump was such a nontraditional candidate that he would lose Republican support.

Professor Ray C. Fair

His forecasting is hard to interpret for a person unfamiliar with macroeconomics, but he predicted Clinton would get 44% of the vote, with a Trump victory. You can read how he got there here. Fair is an economics professor at Yale University.

His predictions are based on macroeconomic modeling.

Arie Kapteyn

The Los Angeles Times notes that the USC/Los Angeles Times poll was consistently the outlier showing Trump winning. Its creator was Arie Kapteyn. The Times noted that Democrats and some in the media had strongly criticized Kapteyn for his polling methods – until the election, that is.

Kapteyn is a professor who researches economics at USC.

The poll, in the end, showed Trump winning by 3% and ran generally about 6% higher than most polls all along. The poll was attacked for the complex system it used to weight its sample of voters. The New York Times wrote a detailed article questioning the poll’s approach; the poll focuses on panelists who are repeatedly reinterviewed rather than drawing from new samples.

Trafalgar Group’s Robert Cahaly

The death of political polling. In the 53 October polls in WI, MI and PA, TWO had Trump ahead. (Both by @trfgrp!) — Mark Elliott (@markmobility) November 10, 2016

Trafalgar Group is a Republican-leaning polling firm that suspected people were lying to pollsters about whether they supported Trump. So the pollsters started also asking people whom they thought their neighbors would vote for, and found the numbers were different.

Few people had heard of this poll before the election, and it earned a “C” ranking from FiveThirtyEight. Lifezette describes how Trafalgar was ridiculed by other pundits.

The firm, which is based in Atlanta, adjusted its numbers to account for people’s hesitance to admit a Trump vote and predicted that Trump would win Pennsylvania and Michigan as well as the Electoral College. The Washington Times further describes the firm’s method, describing Trafalgar as a “quirky” firm of just seven. The firm “discovered during comparison polling that some Trump voters would not disclose how they planned to fill out their ballots…So the company used robotic calls for which Trump voters seemed more comfortable. They also added a ‘neighbor’ question, figuring that a respondent would be more willing to answer truthfully if a neighbor was voting for Trump. They also created a demographic of people who had not voted in a half-dozen years or so but planned to vote for Mr. Trump.”
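One simple way to use a “neighbor” question is to treat part of the gap between self-reported support and neighbor-reported support as hidden support. The blending rule and all numbers below are assumptions for illustration; Trafalgar has not published its exact adjustment.

```python
# Sketch of a "shy voter" adjustment using a neighbor question: blend
# the share who admit supporting a candidate with the share who say
# their neighbors do. The 50/50 blend is an assumed, illustrative rule.

def adjusted_support(own_vote_share: float, neighbor_share: float,
                     blend: float = 0.5) -> float:
    """Blend the self-reported share with the 'neighbor' share."""
    return (1 - blend) * own_vote_share + blend * neighbor_share

# Hypothetical state sample: 44% admit supporting the candidate,
# but 50% say their neighbors do.
print(round(adjusted_support(44.0, 50.0), 1))  # 47.0
```

When respondents answer honestly, the two questions agree and the adjustment changes nothing; the correction only kicks in when the neighbor share runs ahead of the self-reported share.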

The poll did get some other states wrong.

IBD/TIPP

Final IBD tracker: Trump 45, Clinton 43: "Clinton vs. Trump: IBD/TIPP Presidential Election Tracking Poll" https://t.co/7M15u6d3hZ — Jason Miller (@JasonMillerinDC) November 8, 2016

Of the 10 most recent polls in the Huffington Post database, this one came closest. Right before the election, it had Clinton up by just 1 point, the closest margin of the national polls.

Republicans were considerably more interested this time than four years ago, the pollster said. The poll asks respondents about their enthusiasm and then factors that into the results. Townhall has a thorough discussion of exactly how the poll is conducted.

The poll had Trump up 1.6% in a four-way race on election day. Investor’s Business Daily noted, “Not one other national poll had Trump winning in four-way polls. In fact, they all had Clinton winning by 3 or more points.”

Michael Moore

Michael Moore predicted that Trump would win the election, when most other pundits and pollsters were saying no way.

Moore had written, “And if you believe Hillary Clinton is going to beat Trump with facts and smarts and logic, then you obviously missed the past year of 56 primaries and caucuses where 16 Republican candidates tried that and every kitchen sink they could throw at Trump and nothing could stop his juggernaut.” You can read his entire essay here. He even predicted that the Rust Belt would flip to Trump, calling it the “Rust Belt Brexit.”

He explained that he predicted Trump’s win because he is “Trump’s demographic,” telling PJ Media, “I’m an angry white guy over the age of 35 and I have just a high school education.” He wasn’t even surprised by Trump winning Michigan (although that race still hasn’t been called). He added that Bernie Sanders winning the Michigan and Wisconsin primaries was the red flag.

Moore is now predicting that Trump won’t last four years.

State Polls That Got it Right

Using the RealClearPolitics polling database, here are the polls that got it right in the days just before the election. The Emerson and Gravis polls also had a pretty good track record in some states:

Florida: Remington Research and Trafalgar Group had Trump winning Florida, but their margins were overstated.

Ohio: All of the recent polls predicted a Trump victory, but none hit the margin right.

Pennsylvania: Trafalgar Group nailed Trump’s Pennsylvania victory (saying he would win by 1; he won by 1.2%). Harper had the race a tie.

Michigan: Trafalgar was the only pollster predicting a Trump win in Michigan, although it overstated his margin.

New Hampshire: Clinton won by 0.2%. Emerson said she would win by 1. Boston Globe/Suffolk and UMass Lowell/7 News predicted a tie.

North Carolina: Trafalgar Group and WRAL-TV/Survey USA predicted Trump’s victory but overstated it.

Nevada: Gravis pretty much nailed Nevada.

Wisconsin: No one had Wisconsin right.

Iowa: The Des Moines Register was closest, but even pollsters who correctly predicted Trump would win Iowa tended to underestimate the degree of support for him.

Virginia: Lots of pollsters got Virginia right. The closest: PPP, Christopher Newport University, and Gravis.

Maine: Emerson was closest, but every poll overestimated Clinton’s support in Maine.

Colorado: Most pollsters predicted Clinton would win Colorado. Emerson was closest.

Arizona: Pollsters correctly predicted Arizona. Emerson was closest.

New Mexico: Pollsters predicted Clinton would win New Mexico, but most understated her support. Gravis hit it almost dead on.