Submitted by Salil Mehta via Statistical Ideas blog,

Congratulations to all fellow Americans who voted today! It has been a great blessing to share these past several months with you and, this evening, to see at last the conclusion of much hard work, through our communal participation in the duty to vote for our future.

It is imperative to reflect on exactly how the pollsters fared in their varying opinions throughout this campaign season, so that we better understand how to interpret their craft in the next election, which comes around soon enough. There is no need to be naïvely distressed (as we see in the market rejoinder), since there was no special "tail-risk" event on Election Day (a landslide in either direction). The most important aspect of this year's strange polling forecasts is therefore the wildly errant results, in the wrong direction, from certain pollsters with a poor long-term history of the same. As counseled in my previous research last month (see recent article here), which generated over 3 million reads, this year's pollsters gave a unanimously appalling performance, unworthy of our times.

They were unrepresentative of the mood of the country, and impotent in translating survey design into a candidate's probability. Most exposed human bias and a lack of mathematical confidence, while vigorously feeding the spineless narrative from a mainstream media that subtly advantaged Hillary Clinton whenever possible. Specific pollsters overtly managed to distort an otherwise scientific process, turning it into wild entertainment too extreme to take seriously in the final month. We always knew Donald Trump's chances of becoming president were nearly a coin-flip (far more likely than pollsters had it), and we held firm despite the inane mania, combined with liberal complacency, that oft-times engulfed us. The chart below helps explain the concluding (and bogus) picture we have from 22 national pollsters, going into the election.

We show how each pollster exhibited a schizophrenic, and often uncorrelated, progression in its estimate of the spread that Hillary or Trump (shown below as a negative spread) held throughout the year. The oldest polls in the chart below are dated May 2016, and are shown as small, grey markers with a modeling weight of 14%. The current polling data, at the other extreme, are shown as large red markers, when available, with a modeling weight of 100%. The probability mathematics here then follows from this exponential, actuarially double-decrement weighting. Only 40% of the pollsters, at any point this season, saw Trump winning! Why were the other 60% so statistically awful, relative to their advertised margin of error? Even the most conservative-leaning pollsters (e.g., IBD, Rasmussen, and Gravis) seemed generally hedged in their final poll estimate going into Election Day, or slightly inclined towards Hillary (though again the margin of error is doubtless anywhere between 5% and 10% in the ultimate analysis).
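The weighting scheme described above can be sketched in a few lines. This is a minimal illustration under stated assumptions only: the 14% and 100% endpoint weights and the May-to-November date range come from the text, but the interpolation function, example poll dates, and example spreads are hypothetical, and the actuarial double-decrement component is not modeled here.

```python
from datetime import date

def poll_weight(poll_date, first_date, last_date, w_min=0.14, w_max=1.0):
    """Exponential time-decay weight for a poll: the oldest poll (first_date)
    receives w_min (14%), the newest (last_date) receives w_max (100%).
    The geometric interpolation is an assumption, not the author's exact model."""
    t = (poll_date - first_date).days / (last_date - first_date).days
    return w_min * (w_max / w_min) ** t  # grows exponentially from w_min to w_max

def weighted_spread(polls, first_date, last_date):
    """Weighted average of (date, spread) pairs; positive spread = Clinton lead."""
    weights = [poll_weight(d, first_date, last_date) for d, _ in polls]
    return sum(w * s for w, (_, s) in zip(weights, polls)) / sum(weights)

# Hypothetical example polls: spread narrowing from +6 in May to +1 in November.
polls = [(date(2016, 5, 1), 6.0), (date(2016, 9, 1), 3.0), (date(2016, 11, 7), 1.0)]
est = weighted_spread(polls, date(2016, 5, 1), date(2016, 11, 7))
```

Because recent polls carry weights several times larger than the stale May data, the blended estimate lands much closer to the final spread than a naive average would.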

It is imperative to note that the most conservative-leaning pollsters this year (and this applies in previous elections as well) should be congratulated for being the least irresolute in their polling strategy and predictive results (each one oscillating its polling estimate by only ~2% throughout the season, and staying very tight among one another). At times IBD or Rasmussen were lambasted for standing alone in showing Trump in a close race, but when the FBI reopened its probe into Hillary's e-mails, they suddenly looked like virtuosi among the other "discouraged" pollsters. Discouraged because those pollsters were once again wrong, or depressed because the heavily favored Hillary was now suddenly waning?