James Hansen took his degree in astrophysics, not normally considered a climate-related field. Nonetheless, few would argue that he is not an expert on climate change.

On the other hand, Freeman Dyson is possibly the second smartest person on the planet, a theoretical physicist who worked in the field of climate science for 15 years. And yet, because he does not support the consensus, climate activists dismiss him as unqualified.

How do we estimate the expertise of someone in a field where we ourselves are not expert?

This is a current events question, given the recent publication of ‘Consensus on Consensus: A Synthesis of Consensus Estimates on Human-Caused Global Warming,’ written by (among others) John Cook, Naomi Oreskes, Stefan Lewandowsky and William Anderegg, all authors of papers much criticized here.

The point of their paper is simple: the work of some of its co-authors was criticized by Richard Tol. The thrust of his criticism is that many studies of climate consensus eliminate large amounts of data the researchers consider unqualified. Tol writes, “Cook et al (2013) estimate the fraction of published papers that argue, explicitly or implicitly, that most of the recent global warming is human-made. They find a consensus rate of 96%–98%. Other studies find different numbers, ranging from 47% in Bray and von Storch (2007) to 100% in Oreskes (2004)—if papers or experts that do not take a position are excluded, as in Cook et al. If included, Cook et al find a consensus rate of 33%–63%. Other studies range from 40% in Bray and von Storch (2007) to 96% in Carlton et al (2015). Cook et al use the whole sample. Other studies find substantial variation between subsamples. Doran and Zimmerman (2009), for instance, find 82% for the whole sample, while the consensus in subsamples ranges from 47% to 97%. Verheggen et al (2014) find 66% for the whole sample, with subsample consensus ranging from 7% to 79%.”

This most recent paper by those for whom the Tol belled is an attempt to justify those decisions. Their reasoning is simple: if you eliminate the non-experts from the sample being surveyed, the experts who remain will agree with you.

In the Supplementary Information to their paper they write, “We define domain experts as scientists who have published peer-reviewed research in that domain, in this case, climate science.” (Despite this, they eliminate many peer-reviewed respondents in Verheggen et al, for example.)

As I mentioned the other day, a simple publication count is a remarkably weak way of estimating expertise. I wrote, “The weaknesses of publication records are:

1. Very capable younger scientists have not had time to establish a record of publications. Dismissing their opinions leads to loss of useful information.

2. As ‘alarmists’ like to point out whenever an older scientist expresses a skeptical viewpoint, at some point in the natural cycle of a person’s career, keeping up with the field becomes less of a priority. One can make the case that someone reaching the end of their career actually knows less of the current science than a freshly minted scientist.

3. The tools and techniques used in tertiary education today are different from those in use when many older scientists were trained. In addition, new knowledge is incorporated into the texts available to younger scientists. This again may advantage the young at the expense of the old.

4. Some scientists are co-authors of numerous papers for reasons other than their ability to contribute to the main body of the scientific arguments advanced in the paper. Their publication count may be more impressive than their actual command of the field.

5. Some very good scientists work outside the academic world and publication may not be a priority for them. Using publications as a proxy for expertise again may devalue their opinions.”

When I made those points to another of the paper’s co-authors (Bart Verheggen), he agreed, but said in essence that it was the only way he could think of.

While Lewandowsky, Cook and Oreskes are not climate scientists, it seems that no one on the team thought to look at how other fields evaluate expertise. It didn’t occur to them that there is a body of work that could have informed their paper. As many of the co-authors wrote the very papers cited in this most recent work, they really do seem to have missed the boat.

This surprises me a little, given the frequency with which they throw around the term ‘Dunning–Kruger effect,’ which describes the tendency of individuals to overestimate their own knowledge or abilities. It’s part of the field of expertise evaluation, yet that name is the only thing from the field that seems to have stuck.

Expert recommendation is an oft-used technique for identifying those with expertise: people nominate those they consider experts, and if enough of them do so, the nominee is awarded the title.
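To make that concrete, here is a minimal sketch in Python of how such a referral scheme might be tallied. The names, nominations and threshold are all invented for illustration; this is not anyone’s actual method.

```python
# Minimal sketch: identify "experts" as people who receive enough
# independent peer nominations. All data here is hypothetical.

from collections import Counter

# Hypothetical nominations: (nominator, nominee)
nominations = [
    ("alice", "dana"), ("bob", "dana"), ("carol", "dana"),
    ("alice", "evan"), ("bob", "frank"),
]

THRESHOLD = 3  # arbitrary cutoff chosen for this example

# Count how many nominations each person received
votes = Counter(nominee for _, nominee in nominations)
experts = [person for person, n in votes.items() if n >= THRESHOLD]
print(experts)  # ['dana'] -- only Dana clears the bar
```

The interesting design question is the threshold: set it too low and everyone is an expert; set it too high and you exclude genuine authorities with small networks.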

Expertise is a highly relevant topic in the field of law, where my expert goes up against your expert in the courtroom, and establishing which expert is more credible is pretty important. It’s also important in discovery, especially with the new game of ‘e-discovery’, the evaluation of mountains of documents using software to sort them. Again, this is a well-researched topic ignored by Cook et al.

It’s also relevant to military decision-making, to high-technology research, and to academia.

In academia, a publication count is considered the crudest method of evaluating expertise, mostly for the reasons I cited above. More common are techniques such as citation measurement (how many times your work has been referenced by others) or impact measurement (the perceived quality of the journals in which you are published, often combined with publication and citation counts).
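For illustration, here is a toy Python sketch of one such citation-based measure, the h-index, shown alongside a bare publication count. The records are invented, but they show how the two measures can rank the same people differently.

```python
# Toy illustration: publication count vs. citation-weighted h-index.
# All names and citation counts are invented for the example.

def h_index(citations):
    """h-index: the largest h such that h papers each have >= h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical records: each list holds per-paper citation counts
records = {
    "early_career":    [12, 8, 3],                   # few papers, good uptake
    "serial_coauthor": [2, 1, 1, 1, 0, 0, 0, 0],     # many papers, little uptake
}

for name, cites in records.items():
    print(name,
          "papers:", len(cites),      # the crude publication count
          "citations:", sum(cites),   # total citations
          "h-index:", h_index(cites))

# A bare publication count ranks the serial co-author (8 papers) above
# the early-career researcher (3 papers); the h-index (1 vs. 3)
# reverses that ordering.
```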

I mentioned in my previous post that few of the co-authors have expertise in climate science. Fewer have experience in surveys. None appear to have relevant expertise in evaluating expertise.

They did not utilize the methods most commonly used and most trusted in academia. Worse, they do not appear to have consulted the large body of literature on the subject. There are no references to the appropriate literature in their paper.

They…just did a pub count and called it a day.

There’s no doubt they desperately need to defend their claim of a 97% consensus. Going after Exxon and threatening to put skeptics in jail requires that high a level of confidence.

But you would think that if they were going to defend it, they would do a better job.

Most of the original papers referenced in their latest effort are remarkably weak. It seems they didn’t learn from experience.