...and the Real Climate guys instantly endorse them...



This is pretty hilarious. For years, we would hear from certain "researchers" that one has to be a climate scientist (a string theorist with A* grades in all physics subjects in grad school was never good enough) to be taken seriously by the climate establishment. However, Stefan Rahmstorf of Real Climate just wrote a text where he promotes a new paper in the Quarterly Journal of the Royal Meteorological Society by Kevin Cowtan of York and Robert Way of Ottawa.

The climate alarmists didn't like people's increasing knowledge of the "global warming hiatus" – about the 350th proof that they're pseudoscientists – so they needed a paper to "explain" the hiatus away. You could think that the saviors are climate scientists. But if you click on the names above, you will learn that Cowtan is an expert in programming software that paints protein molecules, while geography student Robert Way is a cryosphere kid who likes to play with the permafrost and a long-time assistant to kook John Cook at the doubly ironically named "Skeptical Science" server.

To make the story short, they decided that HadCRUT4 can't be perfect if it shows too small a warming trend since 1997 or 1998, so a problem with HadCRUT4 had to be found. They decided that the gaps in the reported temperatures are the problem and that this problem may be exploited. So they designed a new method to "guess" the missing temperature data at various weather stations and various moments by extracting some data from the satellites. They liked the result – it seemed to be able to calculate the missing figures – and especially the fact that the warming trend since the late 1990s was raised by about 0.5 °C per century, which means that it doubled or tripled relative to the "statistical zero" reported by HadCRUT4.

There are many reasons why I find it utterly insane for an alarmist to suddenly embrace such a development. Don't get me wrong: the missing data are a problem for the integrity of a weather-station-based temperature dataset. But it's also a problem that has been wrestled with.
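Throughout this text, a "trend" means the slope of an ordinary least-squares fit through a temperature series, expressed in °C per century. A minimal sketch with made-up numbers (not any real dataset):

```python
import numpy as np

def trend_per_century(years, temps):
    """OLS slope of temperature vs year, converted to degC per century."""
    slope, _intercept = np.polyfit(years, temps, 1)
    return 100.0 * slope

years = np.arange(1901, 2001)
# made-up, noise-free series warming at 0.007 degC per year
temps = 0.007 * (years - 1901)
print(trend_per_century(years, temps))  # ≈ 0.7 degC per century
```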
Richard Muller declared himself the leader of the world's best and most neutral team of researchers, called BEST (Berkeley Earth Surface Temperature), who decided to solve, in the best mathematical way imaginable, exactly this problem of how to fill the gaps.

(I have played with those geographical gaps as well, e.g. after I found out that 30% of weather stations have seen a cooling trend in their records, which are 80 years long on average, but I decided it would be better not to compete with Muller et al. because this work really needs several independent heads, some time, and funding to be done properly.)

In the end, Muller et al. concluded that it made sense to present temperatures from the first half of the 19th century onwards, and in the 20th century they pretty much confirmed everything that HadCRUT and GISS were saying. Now a molecule painter in York, England arrives with his cryosphere kid pal in Ottawa, Canada, and they "show" that all these numbers were wrong.

It's likely that Cowtan and Way have not made a "silly numerical mistake", although I haven't verified their code, of course. They have followed a logic that may superficially look OK and they got a result. The result made the warming look larger – by a coincidence – so this increased their desire and chance to publish the paper.

But is their hybrid trend actually more accurate than a trend computed from a single dataset? I am convinced that the answer is No. A paradoxical feature of their conclusion is that they used the satellite data to increase the (small) warming trend seen in HadCRUT – even though the satellite data actually show a cooling trend since 1997 or 1998. That's ironic, you could say. Why wouldn't you prefer to use the satellites for the whole Earth, anyway?

One must be extremely careful about splicing data from different datasets. It's very easy to fool yourself. It seems to me that they have no control over the error margins (especially various kinds of systematic errors) that they introduced by their hybridization method.
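The general idea of such a hybridization can be caricatured in a few lines: calibrate the satellite series against a station over the months where both report, then use the calibrated satellite values to fill the station's gaps. This is only a toy sketch of the concept, with hypothetical numbers – not Cowtan and Way's actual algorithm:

```python
import numpy as np

def fill_gaps(station, satellite):
    """Fill NaN gaps in a station series using a satellite series
    calibrated (scale + offset) over the overlapping months.
    Toy illustration only."""
    station = np.asarray(station, dtype=float)
    satellite = np.asarray(satellite, dtype=float)
    have = ~np.isnan(station)
    # least-squares calibration: station ≈ a * satellite + b on the overlap
    a, b = np.polyfit(satellite[have], station[have], 1)
    filled = station.copy()
    filled[~have] = a * satellite[~have] + b
    return filled

station = np.array([0.1, np.nan, 0.3, np.nan, 0.5])   # two missing months
satellite = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
print(fill_gaps(station, satellite))   # [0.1 0.2 0.3 0.4 0.5]
```

Every such calibration step carries its own error, which is the source of the splicing worries below.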
They just produced a computer game – a software simulation – that looked OK to them, but they made no really scientific evaluation of the accuracy and usefulness of their method. The error margins may very well be larger than if they had only used one dataset.

Moreover, filling gaps with a completely independent source of data could be good for describing the local and temporary swings of the temperature, but it's the worst thing you can do for the evaluation of the overall long-term, global trend, exactly because each splicing may introduce a new error from the relative calibration.

Consider this example: the gaps may be mostly in the winter in recent years and mostly in the summer in older years (or the proportions may evolve, to say the least). So if you substitute the temperatures from a dataset with smaller variations (and that could be the case of the satellite data), you will increase the recent data and decrease the older ones. You will therefore spuriously increase the warming trend (or change a cooling trend into a warming one). If you realize how large the winter-summer temperature differences are, you may see that the effect on the calculated trend is substantial even though the local, temporary temperature oscillations may be reconstructed rather nicely. I don't see any evidence that they have protected their calculations of the trend against these obvious problems. They seem to be completely naive about such issues.

For the many reasons sketched above, I don't believe that their methodology gives more accurate estimates of the trend of the global mean temperature than Muller's BEST, for example.

But what I find remarkable is the weird sociological dimension of these "findings". After years in which everyone was told that the warming trend is known with certainty etc., it may easily change by 0.5 °C per century – even in the recent decades which should provide us with the most controllable raw data.
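The seasonal-gap mechanism described above is easy to demonstrate numerically. The sketch below (entirely made-up numbers, not any real dataset) builds a climate with zero true trend and a ±10 °C seasonal cycle, fills early-year summer gaps and late-year winter gaps from a proxy with half the seasonal amplitude, and recovers a healthy warming trend out of nothing:

```python
import numpy as np

months = np.arange(12 * 30)                      # 30 years of monthly data
month = months % 12
year = months // 12
season = -10.0 * np.cos(2 * np.pi * month / 12)  # +-10 degC cycle, hottest mid-year
true_temp = season.copy()                        # flat climate: the true trend is zero

# "satellite" proxy: correct on average, but only half the seasonal amplitude
proxy = 0.5 * season

# early 15 years miss summer readings, late 15 years miss winter readings;
# both kinds of gaps are filled from the damped proxy
early_summer = (year < 15) & np.isin(month, [5, 6, 7])
late_winter = (year >= 15) & np.isin(month, [11, 0, 1])
filled = true_temp.copy()
filled[early_summer | late_winter] = proxy[early_summer | late_winter]

annual = filled.reshape(30, 12).mean(axis=1)
slope, _ = np.polyfit(np.arange(30), annual, 1)
print(f"spurious trend: {slope:+.3f} degC/yr")   # about +0.114, although the climate is flat
```

The damped proxy cools the filled summers of the old years and warms the filled winters of the recent ones, so the annual means drift upwards even though nothing changed.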
Half a Celsius degree per century is almost the whole warming trend we like to associate with the 20th century! So if a computer graphics programmer and a cryosphere kid can change the figure by 150% overnight, and Germany's most important alarmist below 50 years of age instantly applauds them, someone else could surely change the trend in the opposite direction and the 20th century warming would be gone, right?

The role of censorship or artificial endorsement of similar papers – which is likely to be influenced by politics and predetermined goals – would become primary. It's no science.

Just to be sure, I don't believe that the uncertainty concerning the 20th century warming trend is this high. The warming trend from 1901 through 2000, calculated by some linear regression from some hypothetical "exact global temperature data", is 0.7 °C plus or minus 0.2 °C, I would say. Changing it by 0.5 °C is a 2.5-sigma modification, a rather unlikely event. The probability that the number (0.7±0.2) °C is negative corresponds to a 3.5-sigma deviation – something like 1 in 4,000. It can't be "quite excluded" that the accurate warming trend was actually negative, but it is very unlikely.

While the willingness of Herr Rahmstorf to jump the shark knows no limits and his endorsement of this paper is a free ticket for the aid of a psychiatrist (if a warming tripler were said to be hidden inside, Rahmstorf and his soulmates wouldn't hesitate to eat a hamburger created out of feces, not even for a second), you should also be assured that all these (theoretically) radical changes of the climate record are still inconsequential for any questions of practical life.

The change of the trend by 0.5 °C per century is something you can't feel on your skin – not even if you wait for 100 years and carefully remember how you were feeling before you began to wait.
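The sigma arithmetic above is elementary and, under the Gaussian assumption on the (0.7±0.2) °C trend, can be checked in a couple of lines:

```python
import math

mean, sigma = 0.7, 0.2               # assumed trend and error bar, degC per century

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

shift_sigmas = 0.5 / sigma                    # how many sigmas a 0.5 degC change is
neg_prob = normal_cdf((0.0 - mean) / sigma)   # P(true trend < 0), a 3.5-sigma tail

print(shift_sigmas)                  # 2.5
print(round(1.0 / neg_prob))         # roughly 4300, i.e. one in a few thousand
```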
;-) And I don't have to explain that such temperature changes can't be dangerous – a temperature change may only be dangerous if it is at least 10 times larger than what you can feel. No chance that there is danger hiding here.