The news has been full of Lamar Smith, Chair and Poohbah of the House Science Committee, fulminating about NOAA and his attempts to gangplank Tom Karl. In a recent op-ed in the Washington Times (a fishwrap whose time and sugar daddy have both come and gone), Smith writes:

    Atmospheric satellite data, considered by many to be the most objective, has clearly showed no warming for the past two decades. This fact is well documented, but has been embarrassing for an administration determined to push through costly environmental regulations.

    NOAA often fails to consider all available data in its determinations and climate change reports to the public. A recent study by NOAA, published in the journal Science, made "adjustments" to historical temperature records and NOAA trumpeted the findings as refuting the nearly two-decade pause in global warming. The study's authors claimed these adjustments were supposedly based on new data and new methodology. But the study failed to include satellite data.

Now this is very popular on the SKS list of denial, as the El Nino driven surge is pushing global temperatures through the roof. Certain folk, including Congressman Smith, invoke the UAH MSU global temperature record as their gold standard. Yet anybunny looking into the matter knows of the serial screwups, of the teeth pulling needed to get any information about the magic Spencer and Christy use to transform microwave intensities into temperatures, and of how hard it is to figure out what and where is actually being measured.

Below, published with permission, is a letter on the subject:

Rabbit Run:

I read your op-ed with considerable interest. I'm a retired engineer whose work experience included several years in satellite design. As I read your article, my impression was that you do not understand the so-called "satellite temperature" data developed by Roy Spencer and John Christy of UAH. Allow me to provide some information.

The MSU series of instruments, and the later AMSU, measure microwave intensity from orbit, that is, from the top of the atmosphere. Theoretical work supports the claim that the measurements from each channel of the instrument correspond to a "bulk" temperature averaged over a deep layer of the atmosphere. When Spencer and Christy presented their first effort in 1990 (1), they worked with data from channel 2, which they still produce (now labeled TMT, for Temperature, Middle Troposphere).

However, in 1992 (2), they presented results showing that the channel 2 data are distorted by emissions from the stratosphere, which has exhibited a well-known cooling trend. For this reason, they proposed a modification of the channel 2 data (now labeled TLT, for Temperature, Lower Troposphere) which they claimed removed the stratospheric distortion from the MSU data.
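The arithmetic of that stratospheric distortion is easy to sketch. The trends and the 10% stratospheric weight below are illustrative assumptions, not Spencer and Christy's actual numbers; they simply show how a cooling stratosphere drags a blended channel 2 trend below the true tropospheric value:

```python
# Illustrative only: the weights and trends are assumptions for this sketch,
# not the real channel 2 values.
tropo_trend = 0.20   # assumed tropospheric warming trend, K/decade
strat_trend = -0.50  # assumed stratospheric cooling trend, K/decade
strat_weight = 0.10  # assumed fraction of the channel 2 signal from the stratosphere

blended = (1 - strat_weight) * tropo_trend + strat_weight * strat_trend
print(f"apparent blended trend: {blended:+.2f} K/decade")  # +0.13, versus the true +0.20
```

Even a small stratospheric weight attached to a strong cooling trend visibly shaves the apparent warming, which is the distortion the TLT was meant to remove.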

The TLT computation begins with the 11 scan positions which the MSU produces for each swath across the ground track below, labeled 1 through 11, with #6 pointing straight down (nadir). There are also 2 more positions at the ends of each swath, one viewing deep space and the other viewing a heated target whose temperature is monitored with two accurate resistance thermometers. The TLT algorithm discards the three near-nadir positions (5, 6 and 7) and uses the outermost views (1, 2, 10 and 11) as a correction applied to the data from positions 3, 4, 8 and 9, so that only 4 of the 11 positions contribute directly. Thus, the resulting TLT data cannot be said to provide "complete global coverage". Also, the data can only be provided between 82.5N and 82.5S, due to the inclination of the orbit. Spencer and Christy nevertheless calculate a gridded data product including higher latitudes, which they fill by interpolation, artificially extending beyond the range of available data.
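The general form of such a scan-position combination can be sketched as a weighted difference between the near-nadir and near-limb views. The weights below are hypothetical placeholders (chosen only so that they difference to one); the actual coefficients are precisely the unpublished part:

```python
# Sketch of a TLT-style limb correction for one MSU swath. The weights a and b
# are hypothetical placeholders -- the real coefficients have not been published.
def tlt_from_swath(tb):
    """tb: brightness temperatures (K) for scan positions 1..11 (index 0..10)."""
    near_nadir = [tb[2], tb[3], tb[7], tb[8]]   # positions 3, 4, 8, 9
    near_limb = [tb[0], tb[1], tb[9], tb[10]]   # positions 1, 2, 10, 11
    a, b = 1.5, 0.5                             # hypothetical weights, a - b = 1
    # Positions 5, 6 and 7 (nearest nadir) are discarded entirely.
    return a * sum(near_nadir) / 4 - b * sum(near_limb) / 4

# The limb views look slantwise through more atmosphere, so they see higher,
# colder levels than the near-nadir views do:
swath = [250.0, 251.0, 255.0, 255.5, 256.0, 256.2, 256.0, 255.5, 255.0, 251.0, 250.0]
print(tlt_from_swath(swath))  # 257.625 with these made-up numbers
```

Because a − b = 1, a uniform warming of the whole column passes through unchanged, while subtracting the colder limb views removes part of the upper-level contribution; the price is that only a fraction of the swath contributes directly.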

The TLT algorithm is based on theoretical calculations, using a model of the microwave emission and absorption at each pressure altitude, integrated from the surface up to satellite altitude. Spencer and Christy have never publicly revealed the method they used to create their algorithm, which is rather curious, as the assumptions used may be critical. Some of the microwave energy in channel 2 comes from the Earth's surface, and the TLT computation adds more surface effects, so the TLT is not a pure measure of atmospheric temperature. As the MSU instruments are retired, newer AMSU instruments are replacing them, and Spencer and Christy have created a different algorithm in order to include the AMSU data in the TLT. They claim that they are simulating the MSU-based TLT, again without specifying the method used to do so. They have continued this lack of transparency with the latest TLT (version 6), which Spencer briefly described on his blog, but which has not been published after peer review.
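The "bulk temperature" idea behind such calculations can be sketched as a weighted vertical average plus a surface term. Every number below is invented for illustration; the real channel 2 weighting function comes from oxygen absorption physics. The point of the sketch is simply that the surface term never drops out:

```python
# Toy "bulk temperature": a weighted average of an atmospheric temperature
# profile plus a surface term. The profile, layer weights and 10% surface
# fraction are all invented for illustration, not the real channel 2 kernel.
def brightness_temp(temps, weights, t_surface, surface_weight):
    """Tb = w_s * T_s + sum_i w_i * T_i, with all weights summing to one."""
    return surface_weight * t_surface + sum(w * t for w, t in zip(weights, temps))

temps = [288.0, 275.0, 255.0, 230.0, 215.0]  # 5-layer profile, low to high (K)
raw_w = [0.5, 1.0, 1.5, 1.0, 0.5]            # weighting peaks mid-troposphere
surface_weight = 0.1                          # assumed surface contribution
weights = [w * (1 - surface_weight) / sum(raw_w) for w in raw_w]

print(round(brightness_temp(temps, weights, 290.0, surface_weight), 1))  # 256.8
```

Change the surface temperature alone and the "bulk" value moves with it, which is why a product whose processing adds more surface weight cannot be a pure measure of air temperature.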

The important point to remember from all of this is that the TMT is not useful for measuring climate change and the TLT is highly theoretical. In spite of being aware of these limits, Spencer and Christy have presented the TMT in testimony to Congress, showing a comparison between the TMT and the results of computer simulations, both globally and over the tropics. What they don't mention is that to produce their graphic, they simulated the orbital-altitude TMT measurements from the GCM results (3), using CMIP5 data from the KNMI Climate Explorer website (4). The model results from KNMI are monthly averages and, as I understand it, include temperatures at only 3 levels: the surface, 500 mb and 200 mb. The method used to translate those monthly values into simulated TMT results remains an unpublished mystery.
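Whatever method was used, a TMT simulated from only three model levels must reduce to some weighting of those levels. A minimal sketch, with weights that are pure guesses for illustration (the actual, unpublished weighting is exactly the point at issue):

```python
# If only three model levels are available, any simulated TMT must boil down
# to some weighted combination of them. The weights here are pure guesses.
def simulated_tmt(t_surf, t_500, t_200, w=(0.1, 0.6, 0.3)):
    assert abs(sum(w) - 1.0) < 1e-9, "weights should sum to one"
    return w[0] * t_surf + w[1] * t_500 + w[2] * t_200

# Illustrative tropical monthly means (K) at the surface, 500 mb and 200 mb:
print(round(simulated_tmt(288.0, 252.0, 220.0), 1))  # 246.0
```

Different plausible weightings give noticeably different simulated trends, since the three levels warm at different rates in the models, so the choice cannot be treated as a harmless detail.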

Spencer and Christy's claim (which you repeated) that the satellite data do not exhibit as much warming as the surface record is not surprising. The 13 satellites' orbits take the instruments across each latitude at nearly the same local time of day on each orbit, the equator crossing times being nearly constant. The surface temperature record is usually an average of the temperature at a location, computed as the mean of the daily low and high temperatures. That average will not, in general, equal the average of temperatures measured at fixed times of day, say 10AM and 10PM, which the satellite might see over mid-latitudes. And, at the highest latitudes, each pass provides measurements halfway between the equatorial crossing times, 3AM at one pole and 3PM at the opposite pole. At polar latitudes the orbits overlap, giving multiple measurements during the day, which are summed into a grid box, while at mid-latitudes there are missed areas between the ground swaths, which exacerbates the lack of coverage in the TLT.
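The difference between the two sampling conventions is easy to demonstrate with a made-up, asymmetric diurnal cycle (for a pure sinusoid sampled 12 hours apart the two averages would happen to agree):

```python
import math

# Made-up asymmetric diurnal cycle: a fundamental plus a second harmonic.
# No real station or satellite data are involved in this sketch.
def t_of_hour(h):
    return (15.0 + 5.0 * math.sin(2 * math.pi * (h - 9) / 24)
                 + 1.5 * math.sin(4 * math.pi * (h - 9) / 24))

temps = [t_of_hour(h / 10) for h in range(240)]    # sample the day every 6 minutes

minmax_mean = (min(temps) + max(temps)) / 2        # the station (Tmin+Tmax)/2 convention
fixed_mean = (t_of_hour(10) + t_of_hour(22)) / 2   # fixed 10AM/10PM sampling

print(round(minmax_mean, 2), round(fixed_mean, 2))  # 15.0 15.75
```

With this cycle the two conventions disagree by 0.75 degrees for the very same day, so a satellite record and a min/max station record need not match even over identical ground.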

Twelve years ago, my curiosity led me to perform an analysis of the UAH TLT data, the results of which I published in a peer-reviewed journal in 2003 (5). I found an apparent discrepancy at high latitudes of the Southern Hemisphere, which I suggested might be due to the effects of sea-ice. After my report, the group at Remote Sensing Systems (RSS) decided to exclude any coverage south of latitude 70S from their version of the TLT, their reasoning being that the high elevations over the Antarctic were distorting the measurements. RSS also excludes data from other regions with high elevations, such as the Andes and the Himalayas. I later performed an analysis using the TMT product, finding that those data did not exhibit the anomalous characteristic which I noticed in the TLT. Those results have not been published, but can be made available on request. It would be of interest to see the result of a similar analysis using the latest version 6 of the TLT, though I am not likely to perform such an effort.

In conclusion, I think these facts provide very good reasons to discount the “satellite temperature” data when assessing the climate change resulting from mankind’s activities adding CO2 to the atmosphere.

Best Regards,

Richard Eric Swanson, AAAS, AGU

References:

1. Spencer, R. W., J. R. Christy, Precise monitoring of global temperature trends from satellites, Science 247, 1558 (1990).

2. Spencer, R. W., J. R. Christy, Precision and radiosonde validation of satellite gridpoint temperature anomalies, Part II: A tropospheric retrieval and trends during 1979-90, J. Climate 5, 858 (1992).

3. http://www.drroyspencer.com/2013/06/epic-fail-73-climate-models-vs-observations-for-tropical-tropospheric-temperature/

4. http://climexp.knmi.nl/selectfield_cmip5.cgi?id=someone@somewhere

5. Swanson, R. E., Evidence of possible sea-ice influence on Microwave Sounding Unit tropospheric temperature trends in polar regions, Geophys. Res. Lett., doi:10.1029/2003GL017938 (2003).

——————————————————–

Eli asked for and received permission to publish this letter, and also got some additional comments in the return email. The Rabett had asked about some documentation S&C had provided, housed at NOAA.

———————————————–

As usual, I thought of some additions, such as the fact that the early satellites exhibited a drift in equator crossing time as well as orbital decay, both of which require corrections to the time series. And, as you know, several other problems were found over the years as well, which further complicate the MSU/AMSU products.
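The crossing-time drift matters because it aliases the diurnal cycle into the long-term record. A toy illustration, with an assumed drift rate and an assumed diurnal cycle (neither taken from any real satellite):

```python
import math

# Toy aliasing example: the climate is perfectly flat, but the satellite's
# local sampling time drifts from 14:00 to 17:00 over ten years. Both the
# drift and the diurnal cycle are assumptions made for this sketch.
def diurnal(hour, mean=15.0, amplitude=4.0, peak_hour=15.0):
    # Sinusoidal diurnal cycle peaking at 15:00 local time.
    return mean + amplitude * math.cos(2 * math.pi * (hour - peak_hour) / 24)

months = 120
samples = [diurnal(14.0 + 3.0 * m / (months - 1)) for m in range(months)]

first_year = sum(samples[:12]) / 12
last_year = sum(samples[-12:]) / 12
print(f"{first_year:.2f} vs {last_year:.2f}")  # an apparent cooling, pure artifact
```

Without a diurnal correction this sampling drift shows up as a spurious trend of several tenths of a degree, which is why the drift and decay corrections are not optional refinements.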

I had previously seen some version of the TLT document in your link. That document deals only with the processing of the data, which is quite convoluted. However, there's no discussion of the derivation of the actual algorithm used to convert the data from individual MSU and AMSU scans into a single value for the TLT. Of course, S&C still fail to mention the impact of surface emissions, hydrometeors and rainfall on their time series. I looked around and was reminded of three papers by Prabhakara et al. in Climatic Change from 1995 and 1996 on these issues. There were other reports as well which raise questions regarding the validity of the TLT. I think it's rather damning that Christy used the TMT in his committee presentation on 13 May this year. He appears to be completely ignoring the contamination due to stratospheric cooling.