Scientists predicted in the 1980s that a key fingerprint of anthropogenic climate change would be found in the ocean. If they were correct that increases in greenhouse gases were changing how much heat was coming into the system, then the component with the biggest heat capacity, the oceans, is where most of that heat would end up.

We have now had almost two decades of attempts to characterize this change, but the path to confirming those predictions has been anything but smooth…

Predictions

At least as far back as Hansen et al (1988), scientists realised the importance of characterizing changes in ocean heat content as the key way to distinguish externally driven climate change from internal variability.

The key realisation was that on a planet with very low sensitivity, ocean temperatures need only a small adjustment, so equilibrium is reached quickly and a significant heat imbalance does not persist for long. On a higher-sensitivity planet, ocean temperatures have to change more, so more heat needs to go into the ocean, and that requires a longer period with a significant imbalance. Tracking the imbalance therefore gives you leverage in figuring out the sensitivity.
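
The logic of this paragraph can be sketched with a minimal one-box energy-balance model (all parameter values below are illustrative assumptions, not numbers taken from any of the papers discussed):

```python
# One-box energy balance: C dT/dt = F - lam*T, with lam = F2X / S,
# where S is the equilibrium sensitivity to the forcing F2X.
# All numbers are illustrative, chosen only to show the qualitative point.

C = 4.2e8               # heat capacity of ~100 m of water, J/m^2/K
F = 3.7                 # step forcing, W/m^2 (roughly a CO2 doubling)
F2X = 3.7               # forcing for 2xCO2, W/m^2
DT = 86400.0 * 365.25   # one year, in seconds

def imbalance_after(years, sensitivity):
    """Planetary heat imbalance (W/m^2) remaining after `years` of forcing."""
    lam = F2X / sensitivity   # feedback parameter, W/m^2/K
    T = 0.0
    for _ in range(years):
        N = F - lam * T       # imbalance = heat flux going into the ocean
        T += N * DT / C
    return F - lam * T

# A low-sensitivity planet sheds its imbalance within a few years;
# a high-sensitivity planet is still taking up heat decades later.
for S in (1.5, 4.5):
    print(S, round(imbalance_after(30, S), 2))
```

Watching how fast the imbalance decays is therefore informative about the sensitivity itself.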

First attempts

The first global analysis of ocean heat content changes was a massive undertaking, requiring a huge amount of the ship-based ocean data that was only just becoming available. It wasn’t achieved until Levitus et al. (2000), which covered the period 1950-1999. An update a few years later, giving trends to 2003, confirmed the picture (Levitus et al, 2005):

But while these preliminary analyses did suggest that the oceans were warming (with a net heat input of a few tenths W/m2 averaged over the whole planet), there was a lot of quasi-decadal variability – more, in fact, than people were expecting (for instance, the large excursion between 1970 and 1990). For a number of years this was a puzzle, because while models reproduced the long-term trends, they didn’t match the decadal variance (Barnett et al, 2005).

As with most model-observation discrepancies, people looked in multiple places for a resolution. Sampling was one obvious issue: was the sparse coverage, particularly in the early years and in the Southern Ocean, artificially increasing the variance in the data? That was looked at by AchutaRao et al (2006), who found that yes, that was a factor, but no, it didn’t make up the whole difference. Others looked into whether the ocean models could be improved.

Corroboration and confusion

Meanwhile, other estimates of the OHC trend came from newer technologies, including the Argo floats (from ~2000 onward) and satellite altimetry (Willis et al, 2004), giving a 1993-2003 trend of 0.60 ± 0.10 W/m2, which did match the model trends for the more recent period (Hansen et al., 2005).
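
To put such a number in context, a back-of-envelope conversion (assuming a standard Earth radius, not a figure from any of these papers) turns a global-mean flux into joules of ocean heat per year:

```python
import math

R_EARTH = 6.371e6                    # assumed mean Earth radius, m
AREA = 4 * math.pi * R_EARTH ** 2    # global surface area, ~5.1e14 m^2
SECONDS_PER_YEAR = 3.156e7

flux = 0.60                          # W/m^2, the Willis et al (2004) trend
joules_per_year = flux * AREA * SECONDS_PER_YEAR
print(f"{joules_per_year:.2e} J/yr")  # on the order of 1e22 J per year
```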

Figure 2. Hansen et al (2005).

However, as the Argo float network came to dominate the dataset (from 2003 or so), more oddities arose. For instance, Lyman and colleagues reported that the oceans had dramatically cooled between 2003 and 2005, only for the trend to be reversed a few months later once a pressure-sensor error in some of the floats was revealed and corrected.

Other scientists turned up yet another problem, this time with the XBT data that made up a varying part of the ocean record going back decades (Gouretski and Koltermann, 2007). Many of these “eXpendable BathyThermographs” needed a new correction to their assumed fall speeds so that their measurements were assigned to the correct depths.
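
The fall-rate issue can be illustrated with the standard quadratic fall-rate equation; the coefficients below are the commonly cited manufacturer and Hanawa et al. (1995) values for T-7 probes, used purely as an illustration rather than the exact correction applied in any particular dataset:

```python
def xbt_depth(t, a, b):
    """Depth (m) inferred for an XBT after t seconds: z = a*t - b*t**2."""
    return a * t - b * t * t

t = 60.0  # seconds since the probe hit the water
z_original  = xbt_depth(t, a=6.472, b=0.00216)  # manufacturer fall rate
z_corrected = xbt_depth(t, a=6.691, b=0.00225)  # revised fall rate
print(z_original, z_corrected)  # the depths differ by ~13 m after a minute
```

Because the probes fall faster than originally assumed, temperatures were being assigned to depths that were too shallow, biasing any heat-content integral built from them.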

With these corrections in hand, Levitus and colleagues updated their analysis in 2009:

Other groups also put together estimates of the changes, using different methodologies and different corrections and screenings of the data, and came up with (initially) quite different trends (though all showed the long-term warming), e.g. Domingues et al. (2008) and Ishii and Kimoto (2009). Additionally, Lyman et al (2010) presented a new analysis using more of the Argo data:

Figure 4. Lyman et al (2010) compared to the same models in Figure 2.

Data availability kept increasing and quality control kept improving, which allowed Levitus et al (2012) to update their dataset again:

Figure 5. Levitus et al (2012).

It will come as no surprise that all of this back and forth, as data sets were improved and issues found and resolved, provided plenty of fodder for contrarians. They were often too credulous when analyses appeared to show cooling, and too eager to accept that model-observation discrepancies are always the fault of the models; they often didn’t appreciate that actual science can be messy and is full of stops and starts as new ground is broken.

Synthesis

In recent years, updates to ocean heat content have become routine, available operationally on a quarterly or annual basis via NODC, PMEL, CSIRO, and others such as Cheng et al. (2015). In the IPCC AR5 (2013) (Ch 3, Box 3.1), the rate of ocean heat uptake was estimated to be 257 TW for 1993–2010, equivalent to a mean global heat flux of 0.5 W/m2.
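
As a quick sanity check (assuming a standard Earth radius), the quoted 257 TW does correspond to about 0.5 W/m2 when spread over the whole planet:

```python
import math

uptake = 257e12                         # 257 TW, in watts
area = 4 * math.pi * (6.371e6) ** 2     # assumed Earth surface area, m^2
print(round(uptake / area, 2))          # -> 0.5
```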

Additionally, these data are updated and summarized for the annual BAMS State of the Climate report (the latest edition coming out in 2018, covering trends through 2017):

Figure 6. Johnson et al (2018) in BAMS SOTC.

For reference, the SOTC estimates (1993-2017) for the full depth of the ocean, which range from 0.62 to 0.79 W/m2, are already 25% to 60% larger than the rate assessed in the IPCC AR5 report.

Independent confirmation?

Recently, a totally independent geochemical method based on the changing solubility of O2 and CO2 in the warming ocean (Resplandy et al, 2018) came up with a trend that was comparable to the rates in Fig. 6. Unfortunately, there were a couple of errors in the published analysis, but the corrected trends, 0.76 ± 0.45 W/m2, are nicely in line with the latest instrument-based numbers. The uncertainties in this methodology are too large for this paper to be definitive independent confirmation but, now that this method has been tried out, further work may well reduce them.

Summary

As a topic, this key measure of climate change has had more than its fair share of false starts, corrections, revisions and updates (including from me). The complexity of wrestling with data streams that, until recently, were not designed to track this issue, combined with the intense interest in the answer (which has attracted substantial independent work and scrutiny), has nevertheless now produced an overall synthesis that is clear.

So what does this look like?

Figure 7. Ocean heat uptake estimates over the years (in W/m2 averaged over the whole surface). Values are for 0-2000m or 0-3000m where available, with the exception of the Willis et al (2004) and Lyman et al (2010) estimates which are 0-700m. Uncertainties are 1 standard error. PMEL/MRI/NCEI/CHEN trends are taken from Resplandy et al.

As was stated in the IPCC report:

It is virtually certain that the upper ocean (0–700 m) warmed from 1971–2010. […] Deeper in the ocean, it is likely that the waters from 700–2000 m have warmed on average between 1957 and 2009 […]

The evidence suggests that ocean heat uptake has accelerated over the last couple of decades, and is likely higher than was reported in IPCC in 2013. The Resplandy et al estimates are consistent with the newer data.

Going back to where we started, what does this allow us to conclude about recent climate change? First, the size of the uptake implies that there must indeed be a significant energy imbalance for the planet. This in turn means there must be a significant radiative forcing driving it (for instance, the rise in greenhouse gases or an increase in solar activity – other evidence allows us to distinguish between these). But most importantly, this was a predicted effect, made in the earliest (and most basic) simulations as a consequence of a non-negligible climate sensitivity and greenhouse gas increases. In all the hubbub surrounding the latest paper, one would do well to remember that.

References