A much-cited paper detailing the online spread of fake news has been retracted – because it turned out to be fake news.

In an embarrassing turn of events, in early January the prestigious journal Nature Human Behaviour issued a retraction notice for the study, which was published in June 2017.

The paper was the work of a team of researchers headed by Xiaoyan Qiu from China’s Shanghai Institute of Technology. It described a modelling approach that examined whether the quality of information plays a role in whether social media posts become popular.

The aim, the researchers stated, was to try to “explain the viral spread of low-quality information, such as the digital misinformation that threatens our democracy”.

The paper found that even though individuals may prefer to read and share “quality information”, factors such as “information overload and limited attention” contributed to “a degradation of the market’s discriminative power”.

In other words, Qiu and colleagues concluded, the quality of material and the rate at which it spreads across the internet “reveals a weak correlation”. Low-quality material – fake news, complete rubbish – is just as likely to go viral as the good stuff.

It was an elegantly deduced finding, resting on detailed computational analysis. It was also – arguably – a conclusion that resonated with the zeitgeist, in which “fake news” was a demon invoked by people on all sides of politics.

Not surprisingly, it attracted significant attention, and the researchers found their work quoted in many major media outlets.

All of which turns out in retrospect to have been a bit unfortunate, the academic integrity site RetractionWatch reports.

The journal’s editors formally retracted the paper, the site reports, at the request of the authors, who became aware that a software bug and flawed data had combined to produce incorrect results.

The bug, the journal editors explain in the retraction notice, led to “an incorrect value of the discriminative power represented” in one of the paper’s graphs.

Recognising that their primary conclusions had emerged from the accidental use of the wrong dataset, the researchers re-ran numbers using the proper set – and watched as their findings collapsed.

They found that high quality memes tended to spread far more often and more broadly than low quality ones – the exact opposite of the published results.

“Thus,” the retraction notice concludes, with commendable understatement, “the original conclusion, that the model predicts that low-quality information is just as likely to go viral as high-quality information, is not supported.”