We have several estimates of global sea level based on tide gauge data, and I’d like to compare four of them. First is the best-known and probably most trusted, from Church & White, which I’ll call *CW*. Next is one I’ve heavily criticized in the past, from Jevrejeva et al., I’ll call it *Jev*. We also have a recent new approach from Dangendorf et al. which I’ll refer to as *Dang*. Last (and least) is my own reconstruction (*Fos*) using my method to align station records and account for VLM (vertical land movement).

I aligned them all to have the same baseline (from 1993 to mid-2010, so I could also put satellite data on the same baseline). First I'll graph for you their yearly averages, and we notice right away that Jev and Fos start way back in 1807, while CW doesn't begin until 1880 and Dang not until 1900:
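Baseline alignment is nothing fancy: subtract from each series its own mean over the reference window. Here's a minimal numpy sketch using a made-up linear series (not any of the real reconstructions):

```python
import numpy as np

def to_common_baseline(t, y, t0=1993.0, t1=2010.5):
    """Re-zero a series to its mean over the reference window, so every
    reconstruction (and the satellite record) shares the same baseline."""
    mask = (t >= t0) & (t < t1)
    return y - y[mask].mean()

# Toy yearly series (a plain linear rise), standing in for a reconstruction
t = np.arange(1900.0, 2020.0)
y = 1.8 * (t - 1900.0)
yb = to_common_baseline(t, y)
print(yb[(t >= 1993) & (t < 2010.5)].mean())   # ~0 by construction
```

Apply the same function to each data set and they all read zero, on average, over 1993 to mid-2010.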

Here are the original monthly data series rather than yearly averages:

It’s hard not to notice that my own estimate tends to show much larger month-to-month fluctuations than the others, *especially* during the early years. Why the inflated noise level? Much of that impression is an illusion. As I’ve shown before, the Jevrejeva data aren’t really monthly estimates, they are 12-month running means of monthly estimates [because they aligned data with the *first difference method* applied to 12-month differences]. If we want to compare noise levels, we need to compare the Jev data to 12-month running means of the Fos data. Here you go:
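For anyone who wants to reproduce that comparison, a 12-month running mean is a one-line convolution. A small numpy sketch on synthetic monthly data (not the real tide-gauge series):

```python
import numpy as np

def running_mean_12(x):
    """12-month running mean via convolution; the output is 11 points
    shorter because only full windows are kept."""
    return np.convolve(x, np.ones(12) / 12.0, mode="valid")

# Synthetic monthly series: a slow trend plus white noise
rng = np.random.default_rng(42)
months = np.arange(240)
monthly = 0.15 * months + rng.normal(scale=5.0, size=months.size)

smoothed = running_mean_12(monthly)
# The trend survives, while the white-noise part shrinks by roughly sqrt(12)
print(monthly.std(), smoothed.std())
```

That shrinkage is exactly why the raw Fos monthlies look noisier than Jev: the Jev values have already been through this filter.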

It turns out the noise levels are nearly identical. The short-term fluctuations are also remarkably well in step. The main difference between them is in the trend. Here’s the *difference* between the Jev and Fos estimates:

Note the drop in the 1850s (there are others just as sudden but not so precipitous); it's exactly the kind of erroneous shift caused by using the first-difference alignment method, which I discussed in this post. Here are the differences between each pair of data sets (using 12-month running means for Fos):

The strongest disagreement is between Jev and Fos, in part because they both include the earliest time period. But Jev disagrees with CW and Dang too, more than they do with each other, and more than they do with me. In fact, when it comes to trend my estimate and that from Dang show very little difference at all.

Acceleration is a change in speed, so first: what is the speed, i.e. the *rate* of sea level rise? How has that rate changed over time? Has it changed at all? If it has, that change in speed is acceleration.

I fit a lowess smooth to all the data series, not to estimate their smoothed values as much as to estimate their *rates*. My program also estimates the uncertainty in that rate, which enables me to make pretty graphs showing not only how the rate of sea level rise seems to have changed over time, but the range within which it probably actually fell. Let’s start by comparing Jev to Fos, with Fos in red (with a pink band for the uncertainty range) and Jev in blue (with blue dashed lines for the uncertainty range):
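The idea behind getting rates out of a lowess fit can be illustrated with a rough Python sketch (toy data; the program behind the graphs is more elaborate, and the function and span here are just for illustration): at each time, fit a weighted straight line to nearby points and keep its slope as the local rate.

```python
import numpy as np

def local_linear_rate(t, y, span=0.3):
    """At each time, fit a weighted straight line (tricube weights, as in
    lowess) to the nearest span*n points and keep its slope: a local
    estimate of the rate of change."""
    n = len(t)
    k = max(int(span * n), 3)
    rates = np.empty(n)
    for i in range(n):
        d = np.abs(t - t[i])
        idx = np.argsort(d)[:k]                  # k nearest neighbors
        w = (1.0 - (d[idx] / d[idx].max()) ** 3) ** 3
        tbar = np.average(t[idx], weights=w)
        ybar = np.average(y[idx], weights=w)
        num = np.sum(w * (t[idx] - tbar) * (y[idx] - ybar))
        den = np.sum(w * (t[idx] - tbar) ** 2)
        rates[i] = num / den
    return rates

# Toy sea-level curve whose true rate rises from 1 to 3.4 mm/yr
t = np.linspace(1900.0, 2020.0, 121)
y = 1.0 * (t - 1900) + 0.01 * (t - 1900) ** 2
r = local_linear_rate(t, y)
print(r[0], r[60], r[-1])   # the estimated rate climbs across the record
```

Near the ends of the record the windows become one-sided, which biases the rate estimates inward a bit; that's one reason the uncertainty bands flare out at the edges.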

According to the Jev data, the most striking acceleration — a truly dramatic change in speed — happened in the 1850s, exactly when I suspect the Jev data shows a shift due to their alignment method. In my opinion, the 1850s acceleration suggested by the Jev data is spurious.

Both data sets agree on some of the main features after the year 1900. They show acceleration to a higher rate in the early 20th century, and a decline to a slower rate (deceleration) shortly after mid-century. The main disagreement is over the early-to-mid-20th century, when Jev shows considerably faster sea level rise than Fos. As a result, the Jev data suggest it's possible (not likely, but at least plausible) that sea level may have been rising as fast in the early 20th century as it is now.

We can do the same comparison with the other data sets, and here they all are (but without the uncertainties except for Fos, because with all of them the graph gets too crowded):

The Jev data are in the most disagreement with the others, especially regarding early-to-mid-20th-century sea level rise. Jev is also the only data set that fails to show modern sea level rise as faster than at any other time in this record.

As I’ve said before, in my opinion the Jev data are seriously flawed, which is why they show such unrealistic acceleration way back in the 1850s and why they are so out of step with all the others after 1900.

One thing which is *not* a matter of opinion is that all these data sets show acceleration. Their rates go both up and down in a complex pattern, but for all of them, modern times are dominated by increasing rates. Every one of these data sets testifies to acceleration, both throughout the record and in the most recent decades.

Consider the visually apparent acceleration after 1950. One way to test for it is to fit a quadratic to the time series post-1950, another is to use a linear spline. One must allow for autocorrelation, but after doing so the statistical significance is beyond any doubt. The trend isn’t just a straight line; whichever data set you choose, sea level rise has been speeding up lately.
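For the curious, here's a rough Python sketch of the quadratic version of that test on toy data, with a crude AR(1) correction via variance inflation (the analysis behind the graphs is more careful; the function name and the simple lag-1 correction are just for illustration):

```python
import numpy as np

def quad_accel_test(t, y):
    """Fit y = c0 + c1*t + c2*t^2 and return the t-statistic for the
    quadratic (acceleration) coefficient. Crude autocorrelation allowance:
    inflate its variance by (1+rho)/(1-rho), rho = lag-1 autocorrelation
    of the residuals."""
    tc = t - t.mean()                        # center to reduce collinearity
    coef, cov = np.polyfit(tc, y, 2, cov=True)
    resid = y - np.polyval(coef, tc)
    r = resid - resid.mean()
    rho = np.sum(r[:-1] * r[1:]) / np.sum(r * r)
    infl = (1 + rho) / (1 - rho)             # variance inflation factor
    se_c2 = np.sqrt(cov[0, 0] * max(infl, 1.0))
    return coef[0] / se_c2                   # t-statistic for acceleration

# Toy post-1950 series: genuine acceleration plus AR(1) noise
rng = np.random.default_rng(0)
t = np.arange(1950, 2020, dtype=float)
noise = np.zeros(t.size)
for i in range(1, t.size):
    noise[i] = 0.5 * noise[i - 1] + rng.normal(scale=3.0)
y = 1.5 * (t - 1950) + 0.02 * (t - 1950) ** 2 + noise
print(quad_accel_test(t, y))   # well above 2, i.e. significant acceleration
```

Even after the autocorrelation penalty, genuine acceleration of this size over 70 years stands out clearly.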

Satellite data don’t begin until 1993 when we sent altimeters into orbit to measure the height of the sea surface.

We recently looked at the satellite data provided by NOAA/STAR; let’s see how it compares to the estimates from tide gauges. Here are monthly averages of NOAA/STAR satellite data as a solid black line, together with *yearly* averages for the data sets we’ve already looked at (using their yearly averages makes the graph a lot less cluttered, and you’ll still see how the trends compare):

If we look at just the satellite data, we can estimate the *average* rate of sea level rise during the post-1993 period by fitting a straight line (with, say, least squares regression). This doesn’t mean the actual trend is following a straight line, or that we endorse that idea, but it does give an estimate of the average rate over the last 26 years (about 2.9 mm/yr).

The graph “looks like” maybe it (the trend, not the fluctuations) didn’t just follow a straight line. Then again, it “looks like” maybe it did. A great way to improve “looks like” is to subtract our best-fit straight line from the data and study the residuals left over. If the trend (not the noise) is just a straight line, then the residuals should be just noise, no trend at all. They look like this:

I’ve added two solid lines to show two statistical models of these residuals; a quadratic fit in blue, and a linear spline (knot chosen by changepoint analysis) in red. Both provide test statistics. Both confirm (hands down) that these residuals are not just noise. The rate of sea level rise is *not* constant, in fact it *does* increase, and we call that “**acceleration**.”
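The whole pipeline can be sketched in Python on toy altimeter-era data: fit a straight line, take residuals, then place a spline knot by brute-force search over candidate times (a bare-bones changepoint analysis; the real analysis is more careful, and the toy rates and break year here are invented for illustration):

```python
import numpy as np

def best_knot_spline(t, r):
    """Continuous two-piece linear fit to r, knot chosen by brute-force
    search over interior times: a bare-bones changepoint analysis.
    Returns (knot, residual sum of squares)."""
    best_knot, best_rss = None, np.inf
    for knot in t[5:-5]:
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])
        beta, *_ = np.linalg.lstsq(X, r, rcond=None)
        rss = np.sum((r - X @ beta) ** 2)
        if rss < best_rss:
            best_knot, best_rss = knot, rss
    return best_knot, best_rss

# Toy altimeter-era series: 2.5 mm/yr before 2011, 4.3 mm/yr after, plus noise
rng = np.random.default_rng(1)
t = np.arange(1993, 2020, 1.0 / 12.0)
y = 2.5 * (t - 1993) + 1.8 * np.maximum(t - 2011.0, 0.0) \
    + rng.normal(scale=2.0, size=t.size)

slope, intercept = np.polyfit(t, y, 1)     # average rate over the record
resid = y - (slope * t + intercept)

knot, rss_spline = best_knot_spline(t, resid)
rss_flat = np.sum((resid - resid.mean()) ** 2)
print(slope, knot)   # the knot search should land close to the 2011 break
```

Note the average rate of this toy series comes out near 3 mm/yr even though no single stretch of it rises at that rate; that's why a straight-line fit, by itself, can hide an acceleration that the residuals reveal.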

Of course, NOAA/STAR's isn't the only satellite data set. I've found five online, including ones from the University of Colorado, the Copernicus group in Europe, the European Space Agency (ESA), and CSIRO in Australia. We can do the same with the others: subtract a best-fit straight line and display (as well as statistically test) the residuals:

For all the satellite data sets, we can *see* the acceleration if we study their residuals from a straight-line fit, but we don’t trust “looks like” — we go with the statistical tests. They’re unanimous: **acceleration**.

Yet climate deniers continue to deny it.

The latest series of posts (about sea level) started with Dave Burton denying the existence of acceleration in tide gauge records from individual stations. We proved him wrong; he refused to admit his mistake. Our recent look at the satellite data from NOAA/STAR came about because Kip Hansen claimed it not only failed to show acceleration, it demonstrated the absence of acceleration. We proved him wrong too, and he has yet to admit his mistake. As often happens with climate deniers, when confronted with actual analysis he resorted to claiming the data **he picked** weren't good enough to do the job.

This post was inspired by another recent WUWT error which touts the Jevrejeva data as the solution to their sea level acceleration problem. It’s just as foolish and mistaken as the others, but rather than dissect it I decided just to post about sea level, and what the tide gauge and satellite data sets have to say about it.

They’re unanimous. **Acceleration**. Especially recently.

**UPDATE:** By request, the data for my sea level reconstruction is here.

Thanks to the kind readers who have supported this blog. If you’d like to help, please visit the donation link below.

This blog is made possible by readers like you; join others by donating at My Wee Dragon.

Tamino

Many thanks one more time for all these very helpful explanations.

Two questions.

1. A few weeks ago, you published your own SL data for US coastal areas.

Would you publish your data for the Globe as well? I would enjoy comparing your work with my private evaluation, as I did with C&W, Jev and Dang. It is such an interesting corner…

2. I’d like to add VLM processing into my job. Apart from the ellipsoidal links in PSMSL (with rather few data)

https://www.psmsl.org/data/obtaining/ellipsoid.php

I found

– https://www.sonel.org/IMG/txt/vertical_velocities_table-4.txt

– https://doi.pangaea.de/10.1594/PANGAEA.889923?format=textfile

Which one would you propose? Or do you have a better choice?

Thanks in advance for your precious help

J.-P. D.

[Response: See the update at the end of the post for a link to the data.]

Thanks / merci / danke.

Excellent, particularly in the context of the critical review of the first difference method which I just re-read.

Not clear that much is to be gained, since there are clearly better ways, but I wonder if the first difference method could be rescued if the differences were applied to some kind of smoothed signal, reminiscent of the differences-of-Gaussian smoothing idea.

Tamino,

In the specific restricted case of only 2 straight lines, I suggest using this function: f(x) = a1*x + b1 + sqrt( (a2*(x-x0))**2 ), which is just f(x) = a1*x + b1 + |a2*(x-x0)|.

The slope changes from (a1-a2) to (a1+a2) at x0. I suspect that no changepoint analysis would be needed, since x0 is just an ordinary parameter in a least-squares regression. Do you confirm this? Is there any drawback?

BTW, on the raw NOAA data this leads to 2.5 mm/yr before 2011.4 and 4.3 mm/yr after, with about 5% uncertainty on each rate and about 6 months on x0. Note that I did not try to do any serious statistical test.
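The suggested function is indeed easy to fit with an off-the-shelf nonlinear least-squares routine; here's a rough sketch with scipy's curve_fit on toy data built to mimic the quoted numbers (2.5 then 4.3 mm/yr, break near 2011.4; not the actual NOAA series):

```python
import numpy as np
from scipy.optimize import curve_fit

def f(x, a1, b1, a2, x0):
    """sqrt((a2*(x-x0))**2) is just |a2*(x-x0)|: slope a1-|a2| before x0,
    a1+|a2| after, continuous at x0, with x0 an ordinary fit parameter."""
    return a1 * x + b1 + np.abs(a2 * (x - x0))

# Toy data mimicking the quoted rates: 2.5 then 4.3 mm/yr, break at 2011.4
rng = np.random.default_rng(7)
x = np.arange(1993, 2020, 1.0 / 12.0)
y = f(x, 3.4, -6800.0, 0.9, 2011.4) + rng.normal(scale=2.0, size=x.size)

popt, pcov = curve_fit(f, x, y, p0=[3.0, -6000.0, 1.0, 2010.0])
a1, b1, a2, x0 = popt
print(a1 - abs(a2), a1 + abs(a2), x0)   # rates before/after, and the break
```

One drawback worth noting: the objective isn't smooth in x0 (the absolute value has a kink there), so gradient-based fitters can occasionally stall near the breakpoint, and a2's sign is not identified (a2 and -a2 give the same curve). In practice a reasonable starting guess usually suffices.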

Thank you.

The rate of acceleration appears to have gone up after 2005. Fifteen years is not likely long enough to be statistically significant, but increasing rates of melting of Greenland’s ice cap since 2005 would explain it.

Thanks for this very important analysis.

@FishOutOfWater00,

Just a note … Merely having N years of data in hand with N big does not suffice to show statistical significance, even if one accepts there is such a notion. (I don't, but that's another story.) It depends upon the magnitude of residuals between data and model. So, for instance, if the residuals are really small relative to Some Independent Criterion, you can get away with a smaller N than otherwise. If they bounce around and are large, resulting in a higher root mean square error, for a given Criterion you need a bigger N. If they have serial correlation, you probably need bigger still, and you might suspect specification error in your model.

Specification error isn't always avoidable, just like bias isn't always avoidable. In fact, depending on the goal, problem, data, and situation, you might trade off greater bias for smaller variance. Trading off specification error is tricky: you can end up overfitting if it is forced small.