Guest essay by John Davies *

Being able to evaluate the evidence behind any scientific claim is important. Being able to recognize bad science reporting or faults in scientific studies is equally important. These following points may help you separate the real science from the pseudo science.

Speculative Language

Speculations from any research are just that – speculations.

Look out for ambiguous, obfuscatory or weasel words & phrases such as …

– can, clearly, could, conjectured, considered, expected, may, might, perhaps, possibly, projected, robust, unprecedented

– “Experts suggest…” “It has been said that …” “Research has shown…” “Science indicates …”

“It can be argued…” “Scientists believe….” “A high level of certainty” “Models predict….” etc.

…as the real evidence for the conclusions being claimed is often doubtful.

Sensational Language & Headlines

The media will ‘never let facts spoil a good story’.

Words like – Unprecedented, unparalleled, unmatched, extraordinary, groundbreaking, phenomenal, apocalyptic, bizarre, cataclysmic, catastrophic, devastating, extreme.

Phrases like – ‘Since records began’, ‘The majority of scientists concur’, ‘Never on such a scale’ – are used to convey a message, not necessarily the truth or the facts; they rely on the reader having a short memory or being too lazy to check. ‘Unprecedented’ now often means… not within the last 9 months!

Headlines of articles are regularly designed (with no regard to accuracy) to entice readers into reading the article.

At best they oversimplify the findings, at worst they sensationalise and misrepresent them.

E.g. – ‘Margarine makes mayhem in Maine!’

Correlation & causation

Be wary of assuming that correlation equals causation.

See some entertaining examples – http://tinyurl.com/oqhw24g – 6 min.

Correlation between 2 variables doesn’t automatically mean one causes the other; there could be many other causes.

E.g. – Divorce rate in Maine has a 99% correlation with the consumption of margarine. http://tinyurl.com/qb4n9mf

(So is eating margarine the cause or the result of divorce? Or are there other reasons entirely?)

Misinterpreted results

News articles often distort or misinterpret the findings for the sake of a good story, intentionally or otherwise.

If possible, try to read the original research paper rather than relying on ‘quotes’ in a news article – often written by a pressured journalist on a deadline, building a story to fit a catchy headline, from a poor press release.

‘Cherry-picked’ results

This involves selecting bits of data which support the conclusion, whilst ignoring those that do not.

Trend lines plucked from the middle of a graph may not show the real picture; you need to see the full graph to compare. If a paper draws conclusions from just a selection of its results, it may be cherry-picking.

Data Presentation

Check the start & finish points in every data set to pick up any cherry-picking. Look at the X and Y scales on graphs: is one truncated to show a distorted result? A neat, often-used trick is to show just the anomaly, so a small amount looks enormous.

Beware of graphs that suddenly go exponential: are the results outside the normal range? Look for the error bars; if there are no error bars, ask why.
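The truncated-axis trick can also be put in numbers. On a bar chart, the drawn height of a bar is proportional to (value minus where the axis starts), so starting the Y axis just below the data inflates tiny differences (made-up figures below):

```python
# Why a truncated Y axis misleads: bar height on the page is
# proportional to (value - axis_start), not to the value itself.
def apparent_ratio(a, b, axis_start=0.0):
    """Ratio of drawn bar heights when the Y axis starts at axis_start."""
    return (b - axis_start) / (a - axis_start)

a, b = 100.0, 102.0                  # a real difference of only 2%
print(apparent_ratio(a, b))          # axis from 0: bars look nearly equal
print(apparent_ratio(a, b, 99.0))    # axis from 99: one bar looks 3x taller
```

A genuine 2% difference is drawn as a threefold difference simply by moving the axis origin from 0 to 99.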

Graphs & statistics can help summarize data, but are also often used to lead people to incorrect conclusions. This video (13 min) shows a few of the many ways people can be misled with statistics and graphs.

Journals and citations

Research published in major journals should have undergone a review process, but it can still be flawed, so it should be evaluated with this in mind. Similarly, large numbers of citations do not necessarily indicate that research is good quality or highly regarded.

Un-replicable results

Results should always be replicated by independent research and tested over a wide range of conditions. Extraordinary claims require extraordinary evidence. You always need more than one independent study.

If the results can’t be reproduced, or the full data & methodology are not made available, then it’s probably another example of junk pseudo-science.

Peer-review

The peer-review process** is supposed to be one of the cornerstones of quality, integrity and reproducibility in science & research. Peer review does not mean the conclusion is correct.

It only means that the paper was reviewed by peers for obvious errors.

Judging by the number of peer-reviewed papers that have had to be withdrawn in the last few years, the system clearly isn’t working any more: http://tinyurl.com/lahsgrl http://tinyurl.com/pwbsvzx

A scientist/journalist shares his story of two sting operations on the scientific publishing process, with frightening results. http://www.sciencemag.org/content/342/6154/60.full & http://tinyurl.com/pweth63

“Of the 255 papers that underwent the entire editing process to acceptance or rejection, about 60% of the final decisions occurred with no sign of peer review. Of the 106 journals that discernibly performed any review, 70% ultimately accepted the paper. Most reviews focused exclusively on the paper’s layout, formatting, and language. Only 36 of the 304 submissions generated review comments recognizing any of the paper’s scientific problems and 16 of those papers were still accepted by the editors despite the damning reviews.”

It can be argued that the peer-review process has actually worked against reproducibility in research.

Desktop ‘peer-review’ has replaced reproducibility as the standard of good research.

Having your research pass peer review is what gives researchers the moral license to say things like this:

“Even if WMO [the World Meteorological Organisation] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.” – Prof Phil Jones UEA 2005.

This cuts to the heart of the matter. Science must be falsifiable: otherwise it’s not science. Those who seek to find something wrong with your data are the first people who should have access to it, not the last.

Challenging, refining and improving other people’s work is the means by which science proceeds.

It’s not science until it has been reproduced several times over.

No matter HOW good the figures look, or HOW smart everyone thinks you are, or HOW pretty your graphs are – if you can’t reproduce the results on demand, it isn’t scientific; it’s just hinting in that direction. If a model is unable to predict direct observations, then the parameters, variables, or basic theoretical concept must be wrong. You should change the model, NOT the observed data. The motto of the Royal Society of Great Britain is nullius in verba – take nobody’s word for it. Never take anything at face value.

Science is based on provable facts, not blind belief.

Question everything.

But remember, the world’s most threatening words are – How, What, Who, Why.

John Davies is a retired engineer with interests in engineering, physics, history, power supply & transmission, steam engines & over the last few years the climate.

*Inspired by an original idea of Andy Brunning at Compound Interest http://www.compoundchem.com/wp-content/uploads/2014/04/Spotting-Bad-Science.png

** Peer-review alternatives http://tinyurl.com/pbdykgj & more importantly http://tinyurl.com/6souaom
