The 97% Cook Consensus – when will Environ Res Letters retract it?

Richard Tol has an excellent summary, published in The Australian today, of the state of the 97% claim by John Cook et al.

It becomes exhausting just to list the errors.

Don’t ask how bad a paper has to be to get retracted. Ask how bad it has to be to get published.

As Tol explains, the Cook et al paper used an unrepresentative sample, can’t be replicated, and leaves out many useful papers. The study was done by biased observers who disagreed with each other a third of the time, and disagreed with the authors of those papers nearly two-thirds of the time. About 75% of the papers in the study were irrelevant in the first place, with nothing to say about the subject matter. Technically, we could call them “padding”. Cook himself has admitted data quality is low. He refused to release all his data, and even threatened legal action to hide it. (The university claimed it would breach a confidentiality agreement. But in reality, there was no agreement to breach.) As it happens, the data ended up being public anyhow. Tol refers to an “alleged hacker”, but my understanding is that no hack took place, and the “secret” data, which shouldn’t have been a secret, was left on an unguarded server. The word is “incompetence”, and the phrase is “on every level”.

The hidden timestamps of raters revealed one person rated 675 abstracts in 72 hours, with much care and lots of rigor, I’m sure. They also showed that the same people collected data, analyzed results, collected more data, changed their classification system, and went on to collect even more data. This is a hopelessly unscientific process, prone to subjective bias, that breaches the most basic rules of experimental design. Tol found the observations shifted with each round, so the changes were feeding back into the experiment. Normal scientists put forward a hypothesis, design an experiment, run it, and then analyze the results. When scientists juggle these steps, the results influence the testing. It’s a process someone might use if they wanted to tweak the experiment to get a specific outcome. We can’t know the motivations of researchers, but there is a reason good scientists don’t use this process.
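
To put that rating pace in perspective, here is a quick back-of-the-envelope calculation. The 675-abstracts-in-72-hours figures come from the timestamps described above; the arithmetic is mine:

```python
# Pace implied by one rater's hidden timestamps: 675 abstracts in 72 hours.
abstracts = 675
hours = 72

per_hour = abstracts / hours           # 9.375 abstracts every hour
minutes_each = hours * 60 / abstracts  # 6.4 minutes per abstract, nonstop

print(f"{per_hour:.1f} abstracts/hour, or one every {minutes_each:.1f} minutes")
# prints: 9.4 abstracts/hour, or one every 6.4 minutes
```

That is one abstract every six and a half minutes, around the clock, for three days straight.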

My problem with taking the Cook paper seriously is that it is so wholly, profoundly, unscientific from beginning to end that it’s hard to muster any mental effort to unpack a pointless study that will never tell us anything about the atmosphere on Earth.

As I have said from the start, studies on consensus are a proxy for funding, not a proxy for Truth — and funding is as monopolistic as ever. The government gives grants to researchers to find a crisis, and we get what we paid for. If we pour $30 billion into finding reasons to fear CO2, and $0 into finding holes in that theory, it is entirely predictable that we will get 90+ percent of papers that support the theory. There are plenty of ways to write irrelevant, flawed, unrelated, or repetitive material. (What’s remarkable is that there are so many skeptical papers that manage to get written without much funding and get past the gatekeepers in “peer review”.)

But many harried, busy people, untrained in logic, seem to find these consensus papers compelling, so it is worth pointing out the flaws.

The most important issue here is not the inept study authors (who are beyond help) but the response of the University of Queensland and the editors of Environ. Res. Lett. Richard Tol has informed the journal of the problems and suggested his reply be published and the paper retracted. Editor Daniel Kammen chose not to publish Tol’s analysis, though he sent it to reviewers. Peer review has become so farcical that one ERL reviewer suggested Tol rewrite his submission to conclude that Cook’s paper was an example of “exemplary scientific conduct”. That says a lot about scientific standards at ERL.

Don’t ask how bad a paper has to be to get retracted. Ask how bad it has to be to get published.

Why will ERL publish such a flawed paper, not publish the scientific response to it, and not retract something unscientific and incompetent from beginning to end? Daniel Kammen needs to explain why Cook’s paper is useful science.

Richard Tol’s blog: Occasional thoughts on all sorts.

“Global warming consensus claim doesn’t stand up”

An edited version appeared in The Australian on March 24, 2015.

Consensus has no place in science. Academics agree on lots of things, but that does not make them true. Even so, agreement that climate change is real and human-caused does not tell us anything about how the risks of climate change weigh against the risks of climate policy. But in our age of pseudo-Enlightenment, having 97% of researchers on your side is powerful rhetoric for marginalizing political opponents. All politics ends in failure, however. Chances are the opposition will gain power well before the climate problem is solved. Polarization works in the short run, but is counterproductive in the long run.

The Cook paper is remarkable for its quality, though. Cook and colleagues studied some 12,000 papers, but did not check whether their sample is representative of the scientific literature. It isn’t. Their conclusions are about the papers they happened to look at, rather than about the literature as a whole. Attempts to replicate their sample failed: a number of papers that should have been analysed were not, for no apparent reason.

The sample was padded with irrelevant papers. An article about TV coverage of global warming was taken as evidence for global warming. In fact, about three-quarters of the papers counted as endorsements had nothing to say about the subject matter.

Cook enlisted a small group of environmental activists to rate the claims made by the selected papers. Cook claims that the ratings were done independently, but the raters freely discussed their work. There are systematic differences between the raters. Reading the same abstracts, the raters reached remarkably different conclusions – and some raters all too often erred in the same direction. Cook’s hand-picked raters disagreed on what a paper was about 33% of the time. In 63% of cases, they disagreed with the authors of a paper about its message.
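
For readers unfamiliar with the statistic, a pairwise disagreement rate like the 33% figure is just the share of abstracts where two raters assigned different categories. A minimal sketch — the ratings below are invented for illustration and are not Cook et al’s data:

```python
# Illustrative only: the ratings are made-up, not from the Cook et al study.
# Each number is the category a rater assigned to one abstract.
rater_a = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2, 2, 1]
rater_b = [1, 3, 2, 3, 2, 2, 3, 1, 1, 2, 3, 1]

disagreements = sum(a != b for a, b in zip(rater_a, rater_b))
rate = disagreements / len(rater_a)
print(f"Disagreed on {disagreements} of {len(rater_a)} abstracts ({rate:.0%})")
# prints: Disagreed on 4 of 12 abstracts (33%)
```

A raw disagreement rate like this does not even correct for agreement by chance; measures such as Cohen’s kappa, standard in content analysis, would make the picture look worse, not better.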

The paper’s reviewers did not pick up on these things. The editor even praised the authors for the “excellent data quality” even though neither he nor the referees had had the opportunity to check the data. Then again, that same editor thinks that climate change is like the rise of Nazi Germany. Two years after publication, Cook admitted that data quality is indeed low.

Requests for the data were met with evasion and foot-dragging, a clear breach of the publisher’s policy on validation and reproduction, yet defended by an editorial board member of the journal as “exemplary scientific conduct”.

Cook hoped to hold back some data, but his internet security is on par with his statistical skills, and the alleged hacker was not intimidated by the University of Queensland’s legal threats. Cook’s employer argued that releasing rater identities would violate a confidentiality agreement. That agreement does not exist.

Richard Tol’s full post is here.

The Australian article is here.
