Progress has been made in recent years in understanding the observed past sea-level rise. As a result, process-based projections of future sea-level rise have become dramatically higher and are now closer to semi-empirical projections. However, process-based models still underestimate past sea-level rise, and they still project a smaller rise than semi-empirical models.

Sea-level projections were probably the most controversial aspect of the 4th IPCC report, published in 2007. As an author of the paleoclimate chapter, I was involved in some of the sea-level discussions during preparation of the report, but I was not part of the writing team for the projections. At the core of the controversy were the IPCC projections, which are based on process models (i.e. models that aim to simulate individual processes like thermal expansion or glacier melt). Many scientists felt that these models were not mature and understated the sea-level rise to be expected in future, and the IPCC report itself documented the fact that the models seriously underestimated past sea-level rise. (See our in-depth discussion published after the 4th IPCC report appeared.) That was confirmed again with the most recent data in Rahmstorf et al. 2012.



As a result of the IPCC discussions, in 2006 I developed a complementary approach to estimating future sea-level rise and offered it to IPCC (but it was not used); this was published in Science in 2007 (and with over 300 citations to date it turned out to be the second-most-cited of the ~10,000 sea-level papers published since 2007). This “semi-empirical approach” linked the rate of global sea-level rise to global temperature in a simple physically motivated equation, calibrated with past data. It suggested that sea level might rise about twice as much by 2100 AD as predicted by IPCC. My main conclusion was not that semi-empirical models are necessarily better, but that “the uncertainty in future sea-level rise is probably larger than previously estimated”. We will come back to this issue, i.e. the overall uncertainty across different model types and using all available information, in part 2 of this post.
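The relation at the core of this approach – the rate of sea-level rise proportional to the warming above an equilibrium temperature, dH/dt = a (T − T0) – can be sketched and calibrated in a few lines. The temperature series and parameter values below are synthetic, for illustration only; the proportionality constant of 3.4 mm per year per ºC is merely of the order found in the 2007 paper.

```python
import numpy as np

# Sketch of the semi-empirical relation dH/dt = a * (T - T0):
# the rate of sea-level rise is proportional to the warming above a
# baseline temperature T0 at which sea level would be stable.
# The temperature series below is synthetic, for illustration only.
years = np.arange(1880, 2001)
T = 0.008 * (years - 1880) - 0.3          # synthetic warming trend (deg C)
a_true, T0_true = 3.4, -0.5               # mm/yr per deg C; deg C
rate = a_true * (T - T0_true)             # mm/yr
H = np.cumsum(rate)                       # synthetic sea-level curve (mm)

# Calibration step: regress the rate of rise on temperature.
dHdt = np.gradient(H, years)
a_fit, intercept = np.polyfit(T, dHdt, 1) # dH/dt = a*T - a*T0
T0_fit = -intercept / a_fit               # recovers about -0.5 deg C
```

With the two calibrated parameters in hand, a projection only requires a 21st-century temperature scenario – which is how the 2007 paper arrived at its higher range.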

Much higher projections than IPCC are also a consistent feature of more recent assessments published since 2007, e.g. the Antarctic Science Report, the Copenhagen Diagnosis, the Arctic Report of AMAP and the recent World Bank Report. Higher projections are also commonly used in coastal planning, e.g. in the Netherlands, in California and North Carolina, and included in the recommendations of the US Army Corps of Engineers. And last month NOAA published the following new sea-level scenarios for the US National Climate Assessment:



Fig. 1. Source: Global Sea Level Rise Scenarios for the United States National Climate Assessment, NOAA (2012)

The range from the intermediate-low to the intermediate-high scenario, 0.5-1.2 meters, is almost the same as the range of 0.5-1.4 meters in my 2007 Science paper.

This week an expert elicitation by Bamber and Aspinall was published in Nature Climate Change, which confirms that the body of expert opinion expects much higher sea level rise than the 4th IPCC report. The median contribution from ice sheets alone by 2100 was estimated as 29 cm, with a 95th percentile value of 84 cm. The paper compares a range for total sea-level rise for the RCP4.5 scenario of 33-132 cm based on their expert elicitation to our recent semi-empirical range (Schaeffer et al., Nature Climate Change 2012) of 64-121 cm.

Recent progress in understanding sea-level rise

Just before Christmas an overview of process-based sea-level estimates for the 20th Century was published by Gregory et al. in the Journal of Climate, a paper with many authors that presents a whole suite of estimates for individual sea-level contributions, partly data-based and partly model-based. The paper then looks at the sum of all these components, see Fig. 2.



Fig. 2: Comparison of timeseries of annual-mean global-mean sea-level rise from four analyses of tide-gauge data (lines) with the range of 144 synthetic timeseries (grey shading). Each of the synthetic timeseries is the sum of a different combination of thermal expansion, glacier, Greenland ice-sheet, groundwater and reservoir timeseries. Source: Gregory et al. in the Journal of Climate.

This diagram shows the range of sea-level histories obtained by summing the individual estimates in all possible combinations (144 in all). The observed sea-level history can be reproduced by some of these combinations, but the observations lie at the very edge of the range. The authors write:

We would judge that a given synthetic timeseries gave a satisfactory account of observed global mean sea-level rise if it lay within the uncertainty envelope for 90% of the time. Very few of the synthetic timeseries pass this test.

If we take the mid-point of the grey range, the central estimate of 20th Century sea-level rise obtained by adding up all processes is ~11 cm. The central estimate of the observed rise is ~16 cm, about 45% larger.

The authors conclude that a residual trend is needed to make up for the discrepancy and argue that this must come from a long-term ice loss in Antarctica. They write:

If we interpret the residual trend as a long-term Antarctic contribution, an ongoing response to climate change over previous millennia, we may conclude that the budget can be satisfactorily closed.

I guess it depends on how easily one is satisfied. The estimated required residual trend is given as 0–0.2 mm/year, so it explains at most 2 cm of rise over the 20th Century and does not make up for the shortfall mentioned; as I understand it, it just increases the synthetic range enough to bring most observations into its 90% confidence interval, though still near its edge.

In any case the now higher sea-level estimates from process models for the 20th Century naturally also imply higher projections for the 21st. Here it is important to compare like with like – same emissions scenario, same time interval. E.g. for the A1B scenario over the interval 1990-2095, the 4th IPCC report’s central estimate is 34 cm while the semi-empirical estimate of Science in 2007 is 78 cm. While we do not want to enter discussions about the draft 5th IPCC report (this would be premature, given there will still be numerous changes), given that it is in the public domain now it is no secret that for this scenario it projects 59 cm – a whopping 73% increase over the 4th report and much closer to my 2007 semi-empirical estimate. However, the more recent semi-empirical models have tended to give higher projections, so there remains a substantial gap between these two modelling approaches.

Personally, for various reasons I expect that process-based estimates may well keep edging up in future as the models are improved, considering e.g. the recent Nature paper by Winkelmann et al. and the remaining tendency to underestimate past rise. I dearly hope, however, that the truth will turn out to be a lower rise than suggested by semi-empirical models, for the sake of all people who live near the sea or love the coast.

Is 20th C sea-level rise related to global warming?

The Gregory et al. paper was greeted with enthusiasm in “climate skeptics” circles, since it includes the peculiar sentence:

The implication of our closure of the budget is that a relationship between global climate change and the rate of global-mean sea-level rise is weak or absent in the past.

The abstract culminates in a similar phrase, which can easily be misunderstood as meaning that global warming has not contributed to sea-level rise. That is wrong of course, and the claimed closure of the sea-level budget in this paper is only possible because increasing temperatures are taken into account as the prime driver of 20th Century sea-level rise.

When read in full context, the true meaning of the statement becomes clear: it is intended to discredit semi-empirical sea-level modelling. That is both fallacious and odd, given that the paper does not even contain any examination of the link between global temperature and the rate of global sea-level rise which is at the core of semi-empirical models, and which has been thoroughly examined in a whole suite of papers (e.g. Rahmstorf et al. 2011). Instead, it dismisses semi-empirical models offhand based on two arguments.

The first is that individual contributions to the sea-level budget do not show a clear link to global temperature. That is simply a fundamental misunderstanding of the semi-empirical approach: its principal idea is to cut through the uncertainty and complexity surrounding the time evolution of the individual components by considering only the overall sea-level rise and its link to global temperature. The usefulness of this idea is based on the following factors:

1. More accurate data. We may assume that the observational data for the time evolution of global sea-level rise are much more accurate than those for any individual component. A particular irony is that the glacier melt component of “process models” is in fact estimated by a semi-empirical equation quite similar to the one we use for sea level, but poorly validated since data are available only for ~350 of the world’s ~200,000 glaciers. Thus results from questionable semi-empirical modelling are used to dismiss rather better-validated semi-empirical modelling.

2. Partial cancellation of regional climate variability. It is obvious that e.g. the Greenland ice sheet responds to local and not global temperature, so it is not surprising that the Greenland component alone shows little relation to global temperature in the past. However, the ice sheet contributions come from both polar areas, the mountain glacier components from global land masses across a range of latitudes (with a mid- to high-latitude bias), while thermal expansion is particularly sensitive to warming over low- to mid-latitude oceans since the thermal expansion coefficient is much larger there than in colder waters. This broad mix of different regions contributing to sea-level rise makes it likely that the total rise is more clearly linked to global-mean temperature than any single component.

3. Future dominance of global warming over natural regional variability. Since the global warming signal increases over time while the amplitude of natural climate variability does not (much), the effect of global warming on sea level will become more dominant in future, making it likely that semi-empirical models are an even better approximation in future than they were in the past.

The counter-argument that with progressive warming we run out of glacier ice is an artifact of the split between mountain glaciers and larger ice masses and does not apply if total sea level is considered. As shown in Rahmstorf et al. 2011, the argument vanishes if we consider all continental ice together as a continuum (see their Fig. 13), in which melting progressively affects the colder ice surfaces as climate heats up.

Has sea-level rise accelerated?

The second argument for dismissing semi-empirical models in Gregory et al. is that “acceleration of global-mean sea-level rise during the 20th Century [is] either insignificant or small”. That argument was also put forth by Houston and Dean (2011) (see our discussion of this paper), and in our published comment on this we showed why it is false (Rahmstorf and Vermeer 2011). The argument is based only on considering the acceleration factor from a quadratic fit, an almost meaningless statistic (see our tutorial explanation). In fact, if the rate of sea-level rise perfectly follows global-mean temperature, then such a small acceleration factor is exactly what one gets, due to the specific shape of the global temperature curve. Thus, a small quadratic acceleration factor in no way speaks against semi-empirical models, but rather is what one would find if the semi-empirical model were perfect. Frankly, I am quite surprised that the authors (ten of whom are also authors of the sea-level chapter of the upcoming IPCC report) display such unfamiliarity with the fundamentals of (and prejudice against) semi-empirical models.
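The point can be made concrete with a toy calculation (all numbers synthetic, chosen only to mimic the warming–plateau–warming shape of the 20th Century temperature curve): feed such a temperature history through a perfect semi-empirical model, and the quadratic fit to the resulting sea-level curve still yields only a small acceleration factor.

```python
import numpy as np

# Synthetic 20th-Century-shaped temperature history (deg C):
# warming to 1940, a mid-century plateau, renewed warming after 1970.
years = np.arange(1900, 2001)
T = np.where(years <= 1940, 0.01 * (years - 1900),
    np.where(years <= 1970, 0.4,
             0.4 + 0.02 * (years - 1970)))

# A "perfect" semi-empirical model: the rate of rise follows temperature.
a, T0 = 2.0, -0.5                   # illustrative parameters
rate = a * (T - T0)                 # rises from 1 to 3 mm/yr
H = np.cumsum(rate)                 # sea level (mm)

# The much-discussed statistic: twice the quadratic fit coefficient.
c2, c1, c0 = np.polyfit(years - 1900, H, 2)
acceleration = 2 * c2               # of order 0.01 mm/yr^2 -- "small"
```

Even though the rate of rise triples over the century, the quadratic acceleration factor comes out of order 0.01 mm/yr², simply because of the mid-century plateau in the temperature curve; a small value of this statistic is thus exactly what a perfect semi-empirical model would produce.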

As John Church phrased it right after the paper was published:

I would argue that there is an unhealthy focus on one single statistic — an acceleration number — and insufficient focus on the temporal history of sea level change.

That is well said – and the temporal histories of the Church & White sea-level data and global temperature match rather well, as the following graph shows.



Fig. 3: Rate of global sea-level rise based on the data of Church & White (2006), and global mean temperature data of GISS, both smoothed. The satellite-derived rate of sea-level rise of 3.2 ± 0.5 mm/yr is also shown. The strong similarity of these two curves is at the core of the semi-empirical models of sea-level rise. Graph adapted from Rahmstorf (2007).

If we do focus on the temporal history, we find that in all but one of the sea-level reconstructions shown in Gregory et al. (their Fig. 6) the most recent rate of rise is unprecedented since the start of the record, even though the curves end already in 2000 and all remain below the more reliable satellite rate of 3.2 mm/year. Early in the 20th Century, all show rates around 1.5 mm/year. In addition there is good evidence for very low rates of SLR in the centuries preceding the 20th (presented e.g. in the 4th IPCC report or more recently in Kemp et al. 2011).

The one curve that does not show an unprecedented recent rate in Gregory et al. is the data of Jevrejeva et al. (2008). That contrasts with our treatment of the same data in Rahmstorf et al. 2011 (Fig. 5), where we applied a stronger and more sophisticated smoothing (as compared to the running average used by Gregory et al.), which lowers the temporary high peak in the rate around 1950. This peak is not found in any of the other data sets, and as shown in Fig. 2 above, it makes the Jevrejeva data run outside the grey range found by combining all contributions.

I think this peak is spurious and results from the fact that the data of Jevrejeva et al. cannot be considered an estimate of global-mean sea level on such relatively short time scales (a couple of decades). For example, in this data set the North Atlantic data (including Arctic and Mediterranean, overall 16.6% of the global ocean area) provide 31% of the global average and are weighted four times as strongly as the Indian Ocean, although the latter is larger (19.5% of the global ocean). The Northern Hemisphere is weighted more strongly than the Southern Hemisphere, although the latter has a greater ocean surface area. (For more on the Jevrejeva weighting scheme, see our reader’s exercise below.) That is not to say that other tide-gauge based estimates guarantee a properly area-weighted global sea-level history, but it means that Jevrejeva et al. are guaranteed not to represent an area-weighted global mean, while e.g. Church and White (2006, 2011) make a decent attempt at representing a global mean.

All reconstructions are affected to a different extent by spurious variability that is not real variability in global sea level. The satellite data are least affected by this because they cover almost the entire global ocean (and they show a remarkably constant rate of sea-level rise since 1993). The spurious variability is bound to increase further back in time due to the fewer early tide gauges. It is bound to be much reduced by time-averaging, because in the longer run the effect of water “sloshing around” the global ocean under the influence of winds and currents (due to natural variability) will largely average out, given the restoring force of gravity. Hence, the further back in time one looks, the more time averaging is required to see a signal rather than noise, and the tide gauge data sets generally require much more averaging than the satellite data.

My bottom line: The rate of sea-level rise was very low in the centuries preceding the 20th, very likely well below 1 mm/yr in the longer run. In the 20th Century the rate increased, but not linearly due to the non-linear time evolution of global temperature. The diagnosis is complicated by spurious variability due to undersampling, but in all 20th C time series that attempt to properly area-average, the most recent rates of rise are the highest on record. At the end of the 20th and beginning of the 21st Century the rate had reached 3 mm/year, a rather reliable rate measured by satellites. This increase in the rate of sea-level rise is a logical consequence of global warming, since ice melts faster and heat penetrates faster into the oceans in a warmer climate.

Update 14 January: Today I received this 200-page report on sea-level rise by the National Research Councils of the National Academies in the post. The committee that wrote it derived their own sea-level rise projections which are highly consistent with those of NOAA and my 2007 Science paper: “Global sea level is projected to rise 8-23 cm (3-9 in) by 2030, relative to 2000 levels, 18-48 cm (7-19 in) by 2050, and 50-140 cm (20-55 in) by 2100.”

Continue to Part 2 of this post, with some thoughts on “cycles” of sea level rise, the maturity of process models and the IPCC process.

—

A reader’s exercise: the “virtual station method” of Jevrejeva et al.

A thorough, critical assessment of different climate data sets is of course not the job of blogs but of the IPCC, and its experts get several years to prepare this. Nevertheless, some insights can already be obtained by the sea-level amateur spending half an hour on the following exercise.

In the virtual station method the global ocean is subdivided into 13 ocean regions. The global mean sea level is computed as the arithmetic average over these 13 regions. As mentioned above, this does not provide an area-weighted global average, since e.g. the North Atlantic consists of four regions (western and eastern North Atlantic, Arctic and Mediterranean) while the entire Indian Ocean is just one region. The North Pacific comprises two and a half regions: the western and eastern North Pacific, and the Central Pacific, which stretches across both hemispheres.
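The mismatch between region count and ocean area can be checked with trivial arithmetic (region counts from the description above, area fractions as quoted earlier in this post):

```python
# 13 equally weighted regions, but very unequal ocean areas.
regions = 13
na_regions = 4                     # W/E North Atlantic, Arctic, Mediterranean
io_regions = 1                     # the entire Indian Ocean
na_weight = na_regions / regions   # about 0.31 of the "global" average
io_weight = io_regions / regions   # about 0.077
na_area, io_area = 0.166, 0.195    # fractions of global ocean area
# The North Atlantic is over-weighted (0.31 vs 0.166), the Indian Ocean
# under-weighted (0.077 vs 0.195) -- a combined mismatch of almost a
# factor of 5 relative to their true areas.
```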

But now let us look at how the average sea level change within each region is computed. Take the example in the graph shown below, where the light-blue ocean region (think of it as the eastern half of a northern-hemisphere ocean basin) is covered by 7 tide gauges A-G. These are spaced about 1000 km apart – the exact distances are shown in the graph. Maybe you want to ponder first how you would average those stations to obtain a reasonable average over the light-blue ocean region!

The virtual station method averages the rise observed at these stations (over a given time interval) by application of the following simple set of rules:

1. Take the two stations closest to each other and average them.

2. Replace those two stations by a new “virtual station” which consists of the above average, located at the mid-point between the two stations.

3. Go back to step 1 with this new set of stations.

Repeat this until you are left with only one virtual station, which now is your regional average.

So here is our reader puzzle: with what weighting factors do the above 7 stations enter the final regional average?

Update 10 January: Our reader MARodger was the first to give the right answer:

A, G: 1/4

B, C, D: 1/8

E, F: 1/16

So the weights attached to these gauges differ by up to a factor of 4, apparently for no good reason. To be fair, the method was not designed to deal with regular spacing of tide gauges as in this idealised example, but with cases where spacing is highly irregular. I think it is a reasonable method e.g. to average a cluster of gauges into one number before averaging it with one far-away gauge. Nevertheless I think the example illustrates important limitations of this method, and the point that the weights by design must differ by factors of 2, 4, 8, 16… Maybe this post will inspire some mathematically-minded reader to propose a better scheme.
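The merging rules lend themselves to a direct simulation. The sketch below tracks each real gauge's weight through the pairwise merges; the station positions are hypothetical (the exact distances of the original figure are not reproduced here), chosen only so that the gauges are spaced roughly 1000 km apart and the merge order reproduces the answer above:

```python
def virtual_station_weights(positions):
    """Apply the virtual-station rules: repeatedly average the two closest
    stations into a virtual station at their midpoint (halving every real
    gauge's contribution), until a single regional average remains."""
    stations = [(pos, {name: 1.0}) for name, pos in positions.items()]
    while len(stations) > 1:
        stations.sort(key=lambda s: s[0])        # on a line, the closest
        i = min(range(len(stations) - 1),        # pair is adjacent
                key=lambda k: stations[k + 1][0] - stations[k][0])
        (p1, w1), (p2, w2) = stations.pop(i), stations.pop(i)
        merged = {g: 0.5 * (w1.get(g, 0.0) + w2.get(g, 0.0))
                  for g in set(w1) | set(w2)}
        stations.append(((p1 + p2) / 2, merged))
    return stations[0][1]

# Hypothetical positions (km), spaced roughly 1000 km apart:
weights = virtual_station_weights(
    {'A': 0, 'B': 1050, 'C': 2000, 'D': 3000,
     'E': 3950, 'F': 4850, 'G': 5900})
# weights: A, G -> 1/4;  B, C, D -> 1/8;  E, F -> 1/16
```

The halving at every merge is what forces the weights into powers of two; weighting each merge by the number of real gauges behind each virtual station would instead recover the plain arithmetic mean.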

There is an additional issue in that the weights change over time, since the number of tide gauges available changes over time. That is another avenue by which spurious variability can enter the final average curve.

[With thanks to Svetlana Jevrejeva for answering my questions on the virtual station method.]

References