Since Trump won, as is to be expected after a shock election result, the blame stick has been hurled far and wide, hitting everyone from white women to black women to poor people to white men to Hispanics to the musical Hamilton and to the polling guys like Nate Silver.

Silver in particular has been targeted because his model gave Clinton a 71% chance of victory. Other models, at the New York Times and the Huffington Post, put Clinton’s chances much higher. It has led to fights and debates over who was right or, more precisely, least wrong.

Oddly, this argument over the models had begun before election day, with Silver and the Huffington Post’s Ryan Grim having it out over the relative worth of their models. There was a lot of back and forth, and a lot of people tweeting about it.

Seriously, who gives a damn?

I’m not blaming Silver or any poll aggregators for the result – that would be silly. But as they now occupy a major part of the media, we should be honest: poll aggregation has become everything it hated and is actually worse for coverage of elections and politics than what it replaced.


Poll aggregators in the US and in Australia sprang up on blogs in the mid-2000s as a response to the stupid way polls were then reported.

Chief political writers would report a two percentage point shift in Newspoll as massive news one week, and a fortnight later report a shift back of two percentage points as separate massive news. Every shift was explained by things the journalists knew to be true, because they knew more than we did about what was really going on.

The poll aggregators realised that individual polls were erratic, and that most of the commentary was about statistical noise rather than actual shifts in voters’ preferences. They instead bundled (or aggregated) all the polls to produce a single, less noisy number.
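
The statistical idea behind this is simple enough. As a rough sketch only (not any particular aggregator’s model, and with figures invented for illustration), averaging several polls of the same race gives an estimate with a smaller margin of error than any single poll, provided the polls really are unbiased samples of the same voters:

```python
import math

# Hypothetical polls of the same two-candidate race: (sample size, share for candidate A).
# All figures are invented for illustration; this is not any real aggregator's method.
polls = [
    (1000, 0.52),
    (800, 0.49),
    (1200, 0.51),
    (900, 0.48),
]

def margin_of_error(p, n):
    """Rough 95% margin of error for a proportion from a simple random sample."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Each individual poll is noisy on its own...
for n, p in polls:
    print(f"single poll: {p:.1%} +/- {margin_of_error(p, n):.1%}")

# ...but pooling them (here, a simple sample-size-weighted average) shrinks the noise.
# This only helps if the polls are unbiased samples of the same population, which is
# exactly the assumption that can fail when the "data" is really a pile of feelings.
total_n = sum(n for n, _ in polls)
pooled = sum(n * p for n, p in polls) / total_n
print(f"aggregated:  {pooled:.1%} +/- {margin_of_error(pooled, total_n):.1%}")
```

The aggregated number looks reassuringly precise, but the precision is conditional on assumptions the arithmetic cannot check.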

And at the time I thought their analysis a welcome addition that changed the coverage for the better. Journalists became more statistically aware, and now always mention margins of error and talk about the trend.

But the overarching problem of the poll reportage was not that it was inaccurate but that it occurred at all.

Reporting of polls automatically converts the coverage into a horse-race. That coverage is a massive media failure, and poll aggregators only magnify that failure.

Now I love data, and I don’t pretend that what I do is somehow beyond criticism – as Katharine Murphy has written, the election result should have everyone in the media realising their work “is making little or no difference” – but polls are not economic or social data.

They don’t record sales or prices, and they don’t even estimate employment. They record people’s feelings about what they might do in a week’s, a month’s or a year’s time.

But they do convey a sense of fact – they are quantitative, and thus they tempt you into thinking they are more true than a qualitative report. Numbers don’t lie!

And so you get seduced.

I was guilty of it. Even throughout this year, while I was writing against them and arguing with friends about how pointless the poll coverage was, I would check the 538 website to see what the percentages were. It’s tough to resist.

And the problem is that this love of data has made the horse-race coverage seem like legitimate “hard news” (numbers are facts!), and as a result the performance of polling has come to define the performance of election coverage.

This is not merely a US phenomenon. We saw it here in Australia after the election when media outlets tried to explain “why they got it wrong”.

But polling coverage is not hard news; it is entertainment disguised as hard news.

Many polling aggregators, Silver included, originally used their statistical knowledge to analyse sport. And such data crunching is perfect for sport.

Sport statistics involve things that have actually occurred – runs scored, tackles, or turnovers conceded – rather than opinions (no one analyses how many runs a team says it will make in a month’s time). It allows sports supporters to play the role of coach, because in effect such data analysis gives a reader greater understanding of the coach’s “policies”.

But political polls don’t do that. Rather than coaches, they transform us into campaign managers, but managers insidiously removed from reality.

We look at polling maps and data and think, “Oh the poll numbers are weak in Pennsylvania and Wisconsin, but looking good for Hispanics aged under 30”.

But what the hell does that mean? What does a weak poll in Wisconsin tell us? Does it mean a better health policy is needed? Immigration reform? A different candidate?

It’s great for conversations over a beer, but awful for covering an election.

Political polling coverage is actually a step removed from reality in a way that analysis of sporting statistics is not. We see this in the analysis suggesting Clinton ignored Wisconsin – she didn’t hold one rally there – and in those loudly decrying that she only started going to Michigan in the last couple of weeks.

Immediately we go into a campaign-manager mindset.

It traps us into thinking about the “ground game”, the advertising buys, the rally numbers and the candidate’s location on various days – as though any of that actually matters to voters, and as if reading about such things actually helps voters make a decision.

We think like a campaign manager, but one with no interest in policies, or in voters other than as numbers on a map.

It was bad enough when you had the chief political writer interpreting the polling data like Moses come down from the mount, but it is even worse when you have whole teams devoted to the polls.

For the numbers and the hype about the models are all geared towards convincing us (with appropriate caveats to save themselves afterwards) that their numbers are more factual and the analysis behind them more trustworthy than that done by the old rube churning out copy about the latest two percentage point movement in Newspoll.

Look at the graphs, look at the arrows on the dial! That’s not opinion; it’s data at work!


We fall victim to it. It is almost impossible to resist. We want to believe that the data will save us, and we’ll even ignore that the data is just a lot of feelings if that helps us to believe.

Like investors in crappy mortgage-backed security derivatives, which banks convinced people were trustworthy because the risk was in effect aggregated, we forgot that poll aggregation does not make the polls any more factual, and certainly no more important.

And in the end, it is not accuracy but importance that truly matters.

The problem is not whether Silver’s – or anyone else’s – polling model was wrong; the problem is that we think it is important whether or not they were.