At NASA’s Climate 365, there is an interesting story posted with this statement and a graph:

Some say scientists can’t agree on Earth’s temperature changes

Each year, four international science institutions compile temperature data from thousands of stations around the world and make independent judgments about whether the year was warmer or cooler than average. “The official records vary slightly because of subtle differences in the way we analyze the data,” said Reto Ruedy, climate scientist at NASA’s Goddard Institute for Space Studies. “But they also agree extraordinarily well.” All four records show peaks and valleys in sync with each other. All show rapid warming in the past few decades. All show the last decade has been the warmest on record.

In sync? Weellll, not quite. Japan apparently hasn't "got their mind right" yet, as the graph shows:

Here is where it gets interesting. Note the purple line after the year 2000.

The Japanese data line in purple is about 0.25 degrees cooler than the NASA, NOAA, and Met Office data sets after the year 2000. That is partly due to the anomaly baselines chosen by the different agencies, as the two comparison graphs below illustrate:

Source: http://ds.data.jma.go.jp/tcc/tcc/news/press_20120202.pdf

NASA GISS uses a 1951-1980 average for its anomaly baseline, while Japan's Meteorological Agency uses a 1981-2010 baseline; that explains the offset between the anomaly values of 0.48 and ~0.23 °C. However, it doesn't explain the divergence that remains when all of the data is plotted together on the same 1951-1980 baseline NASA uses, which is covered in more detail at the link provided in the NASA 365 post to NASA's Earth Observatory story here:

Source: http://earthobservatory.nasa.gov/IOTD/view.php?id=80167

In that EO story they explain:

The map at the top depicts temperature anomalies, or changes, by region in 2012; it does not show absolute temperature. Reds and blues show how much warmer or cooler each area was in 2012 compared to an averaged base period from 1951–1980. For more explanation of how the analysis works, read World of Change: Global Temperatures.

The justification for using the outdated 1951-1980 baseline is humorous (bold mine):

The data set begins in 1880 because observations did not have sufficient global coverage prior to that time. The period of 1951-1980 was chosen largely because the U.S. National Weather Service uses a three-decade period to define “normal” or average temperature. The GISS temperature analysis effort began around 1980, so the most recent 30 years was 1951-1980. It is also a period when many of today’s adults grew up, so it is a common reference that many people can remember.

So, the choice seems to be more about feeling than hard science, kind of like the time when Jim Hansen and his sponsor Senator Tim Wirth turned off the air conditioning in the Senate hearing room in June 1988 (to make it feel hotter) when they first tried to sell the global warming issue:

But, back to the issue at hand: the baseline difference doesn't explain the divergence. A minimal sketch shows why a baseline change alone can't do it:
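Changing an anomaly baseline adds a single constant to every value in the series, so it can shift two curves apart but cannot make them track together before 2000 and split apart afterward. The sketch below makes that concrete; all of the numbers in it are hypothetical, chosen only to be consistent with the ~0.25 °C offset discussed above.

```python
# Minimal sketch: changing the anomaly baseline shifts every value in a
# series by one constant, so it cannot create a divergence that appears
# only after 2000. All numbers here are hypothetical.

def rebaseline(anomalies, offset):
    """Shift an anomaly series to a different baseline by a constant offset."""
    return [a + offset for a in anomalies]

# Anomalies expressed against a 1981-2010 baseline (hypothetical values).
jma_style = [0.10, 0.15, 0.20, 0.18]

# Re-express against a 1951-1980 baseline; assume that baseline is ~0.25 C
# cooler, so every anomaly grows by the same 0.25 C.
giss_style = rebaseline(jma_style, 0.25)
print(giss_style)  # same shape as the input, just shifted by a constant
```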

Perhaps it has to do with all of the adjustments NOAA and GISS make; perhaps it is a difference in methodology for computing the global surface average, and then the anomaly, after 2000. Perhaps it has to do with sea surface temperature, which Japan's Met Agency puts a lot of weight on but handles differently. A hint comes in this process explanation:

http://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/explanation.html

Global Average Surface Temperature Anomalies

JMA estimates global temperature anomalies using data combined not only over land but also over ocean areas. The land part of the combined data for the period before 2000 consists of GHCN (Global Historical Climatology Network) information provided by NCDC (the U.S.A.'s National Climatic Data Center), while that for the period after 2001 consists of CLIMAT messages archived at JMA. The oceanic part of the combined data consists of JMA's own long-term sea surface temperature analysis data, known as COBE-SST (see the articles in TCC News No.1 and this report). The procedure for estimating the global mean temperature anomaly is outlined below.

1) An average is obtained for monthly-mean temperature anomalies against the 1971-2000 baseline over land in each 5° x 5° grid box worldwide.

2) An average is obtained for monthly mean sea surface temperature anomalies against the 1971-2000 baseline in each 5° x 5° grid box worldwide in which at least one in-situ observation exists.

3) An average is obtained for the values in 1) and 2) according to the land-to-ocean ratio for each grid box.

4) Monthly mean global temperature anomaly is obtained by averaging the anomalies of all the grid boxes weighted with the area of the grid box.

5) Annual and seasonal mean global temperature anomalies are obtained by averaging monthly-mean global temperature anomalies.

6) The baseline period is adjusted to 1981-2010.
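Mechanically, steps 1 through 4 of that procedure look roughly like the sketch below. This is not JMA's code; the anomaly grids and land-fraction array are invented for illustration, and only the blend-then-area-weight structure follows their description.

```python
import numpy as np

# Rough sketch of JMA's steps 1-4 with invented inputs (not JMA's code).
# Grids are 5 x 5 degrees: 36 latitude bands x 72 longitude bands.
lats = np.arange(-87.5, 90.0, 5.0)                  # grid-box center latitudes
n_lat, n_lon = 36, 72

rng = np.random.default_rng(0)
land_anom = rng.normal(0.3, 0.5, (n_lat, n_lon))    # step 1: land anomalies
ocean_anom = rng.normal(0.2, 0.3, (n_lat, n_lon))   # step 2: SST anomalies
land_frac = rng.uniform(0.0, 1.0, (n_lat, n_lon))   # invented land-to-ocean ratio

# Step 3: blend land and ocean anomalies by the land-to-ocean ratio.
blended = land_frac * land_anom + (1.0 - land_frac) * ocean_anom

# Step 4: area-weight each box; on a lat-lon grid, box area ~ cos(latitude).
weights = np.cos(np.deg2rad(lats))[:, None]
monthly_global = np.sum(blended * weights) / (np.sum(weights) * n_lon)
print(f"Monthly global-mean anomaly: {monthly_global:.3f} C")
```

Steps 5 and 6 would then just average the monthly values into annual or seasonal means and re-reference the result to the 1981-2010 baseline.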

Going back to the quoted JMA text, note what I highlighted in red:

…for the period after 2001 consists of CLIMAT messages archived at JMA

That along with:

The oceanic part of the combined data consists of JMA’s own long-term sea surface temperature analysis data, known as COBE-SST

is very telling, because it suggests that Japan is using an entirely different method for both land and sea data. For the post-2001 land data, it suggests they use the CLIMAT data as-is, rather than applying the "value added" processing that NCDC/NOAA and NASA GISS do. The Met Office gets the NCDC/NOAA data already pre-processed with the GHCN3 algorithms. NASA GISS deconstructs the data and then applies its own set of sausage-factory adjustments, which is why its anomaly is often the highest of all the data sets.

Prior to 2001, Japan's Met Agency uses the GHCN data, which is pre-processed and adjusted through another sausage recipe pioneered by Dr. Thomas Peterson at NCDC:

The land part of the combined data for the period before 2000 consists of GHCN (Global Historical Climatology Network) information provided by NCDC

A good example of the GHCN sausage is Darwin, Australia, as analyzed by Willis Eschenbach:

Above: GHCN homogeneity adjustments to Darwin Airport combined record
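For readers unfamiliar with what a homogeneity adjustment does mechanically, here is a toy sketch. This is emphatically not the actual GHCN/Peterson algorithm, just the simplest possible step adjustment, with made-up station values:

```python
# Toy illustration of a homogeneity ("step") adjustment -- NOT the actual
# GHCN algorithm. When a breakpoint is inferred (e.g., a station move or
# instrument change), the segment before it is shifted by a constant.

def step_adjust(series, break_index, shift):
    """Shift all values before break_index by a constant amount."""
    return [v + shift if i < break_index else v
            for i, v in enumerate(series)]

raw = [14.3, 14.2, 14.3, 14.2, 14.3, 14.4]   # hypothetical station temps (C)
adjusted = step_adjust(raw, 3, -0.6)          # cool the pre-break segment

print(adjusted)
# Pre-break values are lowered by 0.6 C, so an essentially flat raw record
# now shows an apparent warming step at the breakpoint.
```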

So, it appears that Japan's Meteorological Agency is using adjusted GHCN data up to the year 2000, and from 2001 onward they are using the CLIMAT report data as-is, without adjustments. To me, this clearly explains the divergence: look at the NASA plot magnified and note when the divergence starts. The annotation marks in magenta are mine:

If anyone ever needed a clear example of how NOAA's and NASA's post facto adjustments to the surface temperature record increase the temperature, this is it.
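One way a reader could check this for themselves: put the two annual series on a common baseline, difference them year by year, and watch where the gap opens up. The sketch below shows the idea; the values in it are placeholders invented for illustration, not the real GISS or JMA numbers.

```python
# Placeholder sketch: difference two annual anomaly series (already on a
# common baseline) to see when a divergence begins. These values are
# invented for illustration, not the real GISS or JMA numbers.

years = range(1997, 2006)
giss = [0.46, 0.63, 0.42, 0.42, 0.54, 0.63, 0.62, 0.54, 0.69]  # placeholder
jma  = [0.44, 0.61, 0.40, 0.24, 0.30, 0.38, 0.37, 0.30, 0.44]  # placeholder

for y, g, j in zip(years, giss, jma):
    gap = g - j
    flag = "  <-- diverging" if gap > 0.15 else ""
    print(f"{y}: GISS - JMA = {gap:+.2f} C{flag}")
```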

Now, does anyone want to bet that the activist scientists at NOAA/NCDC (Peterson) and NASA (Hansen) will start lobbying Japan to change its methodology to match theirs?

After all, the scientists in Japan “need to get their mind right” if they are going to be able to claim “scientists agree on Earth’s temperature changes”, when right now they clearly don’t.

P.S.

BTW, if anyone wants to analyze the Japanese data, here is the source for it:

http://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/map/download.html

It is gridded, and I don’t have software handy at the moment to work with gridded data, but some other readers might.
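For anyone who wants to try, here is a generic starting point. The filename, grid layout (36 latitude rows by 72 longitude columns, north to south), and the 9999.0 missing-value sentinel are all my assumptions about the download format, not documented facts; check JMA's file description before relying on any of it.

```python
import numpy as np

# Generic sketch for a 5 x 5 degree gridded anomaly file. The filename,
# row order (north to south), and 9999.0 missing-value sentinel are all
# assumptions about the format -- verify against JMA's documentation.

MISSING = 9999.0

grid = np.loadtxt("jma_gridded_anomaly.txt")      # hypothetical filename
grid = np.where(np.abs(grid) >= MISSING, np.nan, grid)

lats = np.arange(87.5, -90.0, -5.0)               # assumed north-to-south rows
weights = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(grid)
weights[np.isnan(grid)] = 0.0                     # exclude empty boxes

global_mean = np.nansum(grid * weights) / weights.sum()
print(f"Area-weighted global anomaly: {global_mean:.3f} C")
```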

UPDATE: Tim Channon at Tallbloke’s has plotted the gridded data and offers a graph, see here: http://tallbloke.wordpress.com/2013/02/01/jmas-global-surface-temperature-gridded-first-look/
