Here are some links to very good examples of what is wrong with the temperature data as presented by NCDC, GISS, and Hadley (and others): that is, by anyone who does averaging, homogenizing, slicing and splicing, and similar data adjustments.

First off, a very nice article about why slice and splice (as taken to extremes in B.E.S.T.) is a Very Bad Idea. I’ve talked of this before as ‘splicing is bad in data series’, but without offering much more than a nod to my Chem Teachers as to why. This explains it rather well:

http://stephenrasey.com/2012/08/cut-away-the-signal-analyze-the-noise/
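To see the point in numbers, here’s a little toy I cooked up (my own sketch, nothing to do with BEST’s actual code; the trend and noise sizes are made up): make a century of fake station data with a known warming trend, cut it into short segments, and re-anchor each segment to its own mean, the way a scalpel-style process treats every fragment as having an unknown offset. The trend ends up living in the discarded segment offsets, and what’s left in the pieces is mostly noise:

```python
# Toy illustration (not the BEST algorithm): cut a trending series into short
# segments, re-anchor each segment to its own mean, and see how much of the
# long-term signal survives in the pieces versus the discarded offsets.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(100)
trend = 0.01 * years              # 1 C / century "signal"
noise = rng.normal(0, 0.2, 100)   # 0.2 C weather/measurement noise (assumed)
series = trend + noise

def ols_slope(y):
    """Least-squares slope, in units per step."""
    x = np.arange(len(y))
    return np.polyfit(x, y, 1)[0]

# "Scalpel" step: cut into ten 10-year segments and remove each segment's
# mean, as if each piece were an independent fragment with unknown offset.
segments = np.split(series, 10)
demeaned = np.concatenate([s - s.mean() for s in segments])
segment_means = np.array([s.mean() for s in segments])

print("slope of intact series:            %+.4f C/yr" % ols_slope(series))
print("slope of cut + demeaned pieces:    %+.4f C/yr" % ols_slope(demeaned))
# segment means are spaced 10 years apart, so divide the slope by 10
print("slope hiding in discarded offsets: %+.4f C/yr" % (ols_slope(segment_means) / 10))
```

If the re-stitching step doesn’t recover those offsets exactly right, the long-term signal is gone and you are left analyzing the short-segment noise, which is Rasey’s point.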

I ran into that article in the comments on this posting at WUWT:

http://wattsupwiththat.com/2015/01/29/best-practices-increase-uncertainty-levels-in-their-climate-data/

Stephen Rasey January 29, 2015 at 12:54 pm

All told, BEST’s uncertainty levels are a complete mess.

The breakpoint process is a complete mess, too.

They have reduced the temperature record to analyzing the noise after throwing away the signal.
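On the uncertainty point, a quick bit of arithmetic of my own (assuming plain white noise and a made-up 0.2 C noise level, so the real case with autocorrelated data is worse) shows why short fragments carry so little trend information. The standard error of a least-squares trend grows roughly as n^-3/2 as the segment shrinks:

```python
# Rough illustration of why cutting records into short fragments hurts:
# with a fixed noise level, the standard error of a least-squares trend
# blows up as the segment shrinks (~ n^-3/2), so a 10-year fragment says
# almost nothing about the trend a 100-year record could pin down.
import math

def trend_se(sigma, n):
    """Std. error of the OLS slope for n evenly spaced points with iid noise sigma."""
    sxx = (n**3 - n) / 12.0          # sum of squared deviations of x = 0..n-1
    return sigma / math.sqrt(sxx)

sigma = 0.2                          # assumed annual noise level, C
for n in (100, 30, 10):
    se = trend_se(sigma, n)
    print(f"{n:3d}-year segment: trend std. error = {se:.4f} C/yr  ({100*se:.2f} C/century)")
```

A ten-year fragment’s trend is roughly thirty times fuzzier than the century record it was cut from, and all that fuzz has to be stitched back together somehow.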

There is also a link in those comments to this article by Willis, of similar inclination:

http://wattsupwiththat.com/2014/06/28/problems-with-the-scalpel-method/

followed by this comment:

Rud Istvan January 29, 2015 at 1:49 pm

Dedekind’s prior post, to which Willis was trying to get BEST to respond, was about ‘scalpeling’. Technically it is called Menne stitching in homogenization algorithms. And the inherent warming bias Dedekind explained to WUWT has been confirmed for actual stations. See Zhang et al., Effect of data homogenization…, Theor. Appl. Climatol. 115: 365-373 (2014). The bias results from the anchoring on most recent data.
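To see what ‘anchoring on most recent data’ does mechanically, here’s another toy of my own (not the Menne pairwise algorithm, just the bookkeeping): every breakpoint correction shifts the data before the break and leaves the present untouched, so whatever error each estimated step carries lands entirely in the past. With symmetric errors this sketch shows no bias, only growing spread; the warming bias Rud Istvan and Zhang et al. describe would come from asymmetries in what gets detected and adjusted, but the backwards accumulation is where those asymmetries get their leverage:

```python
# Toy of the "anchor on the most recent data" convention: each breakpoint
# correction shifts everything BEFORE the break, so estimation errors in the
# step sizes pile up as you go back in time.  Not the actual Menne pairwise
# homogenization algorithm -- just the bookkeeping of where the errors land.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_breaks, n_trials = 120, 5, 2000
breaks = np.sort(rng.choice(np.arange(10, n_years - 10), n_breaks, replace=False))

first_vals, last_vals, trends = [], [], []
for _ in range(n_trials):
    corrected = np.zeros(n_years)        # flat "true climate": no trend, no noise
    for b in breaks:
        err = rng.normal(0.0, 0.1)       # assumed 0.1 C error in each estimated step
        corrected[:b] -= err             # adjust the past, leave the present alone
    first_vals.append(corrected[0])
    last_vals.append(corrected[-1])
    trends.append(np.polyfit(np.arange(n_years), corrected, 1)[0] * 100)

print(f"spread of most recent value: {np.std(last_vals):.2f} C  (anchored, never touched)")
print(f"spread of earliest value:    {np.std(first_vals):.2f} C  (carries all the step errors)")
print(f"spread of fitted trend:      {np.std(trends):.2f} C/century, on a truly flat record")
```

The most recent value never moves; the earliest value carries every adjustment error, and even a truly flat record picks up a noticeable trend spread from just five cuts.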

Also referenced, in this comment by Pat Frank, are two PDFs that look useful:

Pat Frank January 29, 2015 at 7:42 pm

Mike M, all the groups working on global surface air temperature — UEA/UKMet, BEST, GISS — all assume that sensor measurement error is random and averages away. The assumption is promiscuous and entirely unjustifiable. Nevertheless, the Central Limit Theorem is assumed to apply throughout, and they clutch to it in a death grip. They ignore systematic measurement error entirely, and until recently never even mentioned it in their papers.

Available calibration experiments show that temperature sensor systematic error is large and persistent. Solar loading has the greatest impact. I’ve published on this problem here (870 KB pdf) and here (1 MB pdf). From systematic sensor measurement error alone, the uncertainty in the 20th century global surface air temperature record is about (+/-)0.5 C.

If they ever admitted to the systematic error, obviously present, they’d end up with nothing to report. The prime evidence base of AGW would vanish. One can understand the reluctance, but it’s incompetent science regardless. I’ve corresponded with Phil Brohan and John Kennedy at UK Met about the papers (Phil contacted me). They can’t refute the work, but have chosen to ignore it. Apparently likewise, everyone else in the field, too.

The two “here”s are:

http://meteo.lcd.lu/globalwarming/Frank/uncertainty_in%20global_average_temperature_2010.pdf

http://multi-science.metapress.com/content/t8x847248t411126/fulltext.pdf

Not had time to read them yet, but a quick scan looks pretty darned good.
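The random-versus-systematic distinction in Frank’s comment is easy to show with a toy of my own (the noise sizes here are made up for illustration, not numbers from his papers): average a year of daily readings and the independent day-to-day error shrinks roughly as 1/sqrt(365), exactly the Central Limit Theorem behaviour everyone leans on, while a persistent offset from a miscalibrated or solar-loaded sensor comes through the average at full strength:

```python
# A small numeric sketch of random vs. systematic error.  Averaging many
# readings shrinks independent random error roughly as 1/sqrt(N) (the Central
# Limit Theorem step), but a persistent offset passes through the average
# untouched.  All numbers here are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
true_temp = 15.0                             # assumed true daily-mean temperature, C
n_days = 365

random_err = rng.normal(0.0, 0.2, n_days)    # independent day-to-day error, 0.2 C
systematic = 0.3                             # persistent bias, e.g. solar loading, 0.3 C

readings = true_temp + random_err + systematic
annual_mean = readings.mean()

print(f"error of the annual mean:             {annual_mean - true_temp:+.3f} C")
print(f"surviving random error (expected):    +/-{0.2 / np.sqrt(n_days):.3f} C")
print(f"systematic bias still fully present:  +{systematic:.3f} C")
```

Averaging more stations doesn’t fix it either, if the biases share a sign (solar loading pushes readings the same way everywhere); that shared component is exactly what the CLT assumption quietly sweeps away.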
