After Trump’s dramatic win, political polling in America may never be the same.

Going into Election Day, most of the major national polling organizations found Hillary Clinton ahead by 3-4 points. Many predicted that she would win handily. Some said it could be a landslide.

Even polling guru Nate Silver got it flat wrong. He had angered liberals earlier with a prediction that Trump would win Florida, which proved correct. But the day before the election, he boldly asserted that Clinton would sweep Florida, Ohio and North Carolina – when just the opposite occurred.

In the weeks ahead, many questions will be raised about American polling practices. But will pollsters and the mainstream media organizations that so often support them admit that they were biased against Trump from the start – and skewed their polling accordingly?

For example, why did pollsters use a 2012 “turnout” model to weight their polling samples when they knew full well that Democratic turnout was likely to be lower – and GOP turnout higher?

That likely-voter model, which also ignored voters who had never before participated in an election, skewed the survey sample toward Clinton. And pollsters knew it.

Furthermore, why did so many pollsters doubt that some Trump supporters were concealing their support for him out of fear of being labeled “racist”?

Some analysts had the gall to accuse the Trump campaign of spreading a “false” theory about a Trump “undercount.”

But that theory didn’t originate with the Trump campaign. It began with a remarkable New York Times article by Thomas Edsall, one of the deans of American political journalism – an article that nearly every media and polling expert ignored.

Edsall compared the results of polls using live pollsters and those conducted online and found that respondents in online polls consistently showed much higher levels of support for Trump. He speculated that online voters felt less constrained by “political correctness” and could show their true preferences.

The size of the undercount he uncovered was huge – about 8 points.

In an article published in late August, I repeated Edsall’s test and found a similar albeit smaller Trump “undercount” of 4 points. And yet few media outlets took these claims seriously.

The undercount may have affected many voter groups, but none more profoundly than white, college-educated voters.

For months nearly every major poll found that Trump was losing badly among this voter group. Past Republican presidential candidates had won a majority of college-educated voters. Trump was clearly “underperforming,” analysts said.

Consider, for example, a national poll conducted by the Pew Research Center in August. The article announcing the poll was titled “Educational Divide in Vote Preferences on Track to be Wider than in Recent Elections.”

Pew found that college-educated voters favored Clinton by a whopping 23 points, 52%-29%. By contrast, Trump had only a marginal advantage among voters without a college degree, about 5 points, 41%-36%.

The implication? Trump’s vaunted Rust Belt strategy focused on working class voters in states like Pennsylvania, Michigan and Ohio was unlikely to pan out. Furthermore, he was likely to get beaten badly by Clinton in suburban counties in these same swing states, Pew implied.

But it didn’t happen that way. In exit polls released by ABC News yesterday, Trump beat Clinton among non-college-educated voters by 8 points, 3 points higher than what the Pew poll had predicted.

But among college-educated voters the skew was even larger. Trump lost by just 9 points – 14 points less than Pew had predicted.

Combined, that’s a 17-point unpredicted swing toward Trump in the polling by education level.

What happens when you include race? In the Pew survey, Clinton was expected to win white college graduates by 14 points. But according to exit polls, Trump actually won this voter group by 4 points – another 18-point unpredicted swing.

And if you add gender, the polling skew is even worse. Among white male college graduates, Trump trounced Clinton by 15 points – a more than 20-point swing from the Pew prediction.
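For readers who want to check the swing arithmetic in the preceding paragraphs, here is a minimal sketch. All margins are taken from the figures cited above, expressed as Trump minus Clinton; the `swing` helper name is illustrative, not from any polling organization’s methodology:

```python
# Each margin is (Trump % - Clinton %); a swing is the exit-poll margin
# minus the pre-election poll margin, i.e., points of movement toward Trump.

def swing(poll_margin, exit_margin):
    """Points by which the exit poll moved toward Trump versus the pre-election poll."""
    return exit_margin - poll_margin

# Pew (August) vs. ABC News exit polls, as reported above:
college = swing(poll_margin=-23, exit_margin=-9)       # college-educated voters
non_college = swing(poll_margin=5, exit_margin=8)      # voters without a degree
white_college = swing(poll_margin=-14, exit_margin=4)  # white college graduates

print(college, non_college, white_college)  # 14 3 18
print(college + non_college)                # 17, the combined education swing
```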

Ironically, it was Pew that in 2015 sponsored a widely regarded study of polling practices, in which the group concluded that online polling during elections was likely to be far more accurate than live polling.

But when it came to applying that insight to analyzing Trump’s polling support, Pew ignored its own past findings. So did all the media polling groups that conducted live polling and that for much of the campaign found Clinton in the lead.

A lead that, quite possibly, in retrospect, never actually existed.

One organization did get the election right. The LA Times/USC poll, which surveyed its respondents online, consistently found Trump ahead, almost from the start. How did the mainstream media react? They accused the polling organization of “bias.”

Polling is, by its nature, an imprecise science. But it doesn’t help when most pollsters aren’t even pretending to practice real science.