Before the satellite era, the best data we have about sea level come from tide gauges. They give local sea level, which is the difference between the height of the sea surface and the height of the land (which can move up and down too). It is possible — but very complicated — to combine data from tide gauges around the world in order to estimate how global mean sea level (GMSL) has changed over the past century-and-a-half or so.
The two best-known such estimates come from two different teams of researchers: one from Church & White (which I’ll refer to as “cw”), the other from Jevrejeva et al. (which I’ll refer to as “jev”). Let’s compare them. Here they are:
I’ll draw your attention to how they compare from 1930 through 1990 (the end of 1989), in particular how they show very different trends. For each, I’ll fit a piecewise-linear model allowing for a trend change in 1960. Doing so gives this:
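A continuous piecewise-linear fit of this kind can be done by ordinary least squares with a hinge term at the breakpoint. Here is a minimal sketch using numpy; the function name and the synthetic series at the bottom are purely illustrative, not the actual GMSL data:

```python
import numpy as np

def piecewise_trend(years, gmsl, breakpoint=1960.0):
    """Fit a continuous piecewise-linear model with a slope change
    at `breakpoint`; return the before/after rates in mm/yr."""
    t = np.asarray(years, dtype=float)
    y = np.asarray(gmsl, dtype=float)
    # Design matrix: intercept, overall slope, extra slope after the break
    X = np.column_stack([np.ones_like(t), t, np.maximum(t - breakpoint, 0.0)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rate_before = coef[1]
    rate_after = coef[1] + coef[2]
    return rate_before, rate_after

# Synthetic check: 2 mm/yr before 1960, 1 mm/yr after
yrs = np.arange(1930, 1990)
sl = np.where(yrs < 1960, 2.0 * (yrs - 1930), 60.0 + 1.0 * (yrs - 1960))
print(piecewise_trend(yrs, sl))  # recovers roughly (2.0, 1.0)
```

The hinge term `max(t − 1960, 0)` forces the two line segments to meet at the breakpoint, so only the slope — not the level — is allowed to change in 1960.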
The cw data (Church & White) show very little trend change at 1960; the “before” and “after” rates of increase are 1.95 and 1.58 mm/yr. But the jev data (Jevrejeva et al.) show a dramatic trend change, rising a whopping 3.26 mm/yr before 1960 but only 0.94 mm/yr after. Which is more correct?
To investigate, I’ve been looking at individual tide gauge stations. The cw data suggest that when comparing the 1930-1960 time span with 1960-1990, the rate of increase goes down slightly, by 0.36 mm/yr, while the jev data suggest a very large drop of 2.32 mm/yr (over six times as large). What do individual tide gauge stations say about that?
I scanned the data available from PSMSL (the Permanent Service for Mean Sea Level) to determine which of them have sufficient data to compare those two time intervals properly. The total span of 60 years (1930 to 1990) covers 720 months, so I identified those stations with at least 660 months’ data during that time span. There are 102 of them, located like this:
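The screening step amounts to counting valid monthly values inside the 1930–1990 window. A sketch of that test for a single station (assuming PSMSL’s convention of flagging missing monthly values with −99999; the function name is my own):

```python
import numpy as np

def has_enough_data(decimal_years, values, start=1930.0, end=1990.0,
                    min_months=660, missing=-99999):
    """True if a station has at least `min_months` valid monthly values
    in [start, end) — here, 660 of the 720 possible months."""
    t = np.asarray(decimal_years, dtype=float)
    v = np.asarray(values)
    in_span = (t >= start) & (t < end)
    valid = in_span & (v != missing)
    return int(valid.sum()) >= min_months
```

Applied to every PSMSL station record, a filter like this yields the subset with enough coverage to compare the two 30-year intervals.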
Most of them are in Europe and North America, simply because most tide gauge stations (and especially those with long enough records) are there, but there’s a smattering of stations in other parts of the world.
For each of the 102 “enough-data” stations I computed the difference between the 1930-1960 trend and the 1960-1990 trend. Recall that the cw data say the global average decrease was 0.36 mm/yr while the jev data suggest 2.32 mm/yr. Here’s a histogram of the decrease as estimated at individual tide gauges:
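The per-station quantity is just the difference of two OLS slopes. A minimal sketch (names illustrative):

```python
import numpy as np

def trend_mm_per_yr(t, y):
    """Ordinary least-squares slope, in mm/yr."""
    return np.polyfit(t, y, 1)[0]

def rate_decrease(t, y, breakpoint=1960.0):
    """1930-1960 trend minus 1960-1990 trend for one station's record."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    early = (t >= 1930) & (t < breakpoint)
    late = (t >= breakpoint) & (t < 1990)
    return trend_mm_per_yr(t[early], y[early]) - trend_mm_per_yr(t[late], y[late])
```

A positive value means the station’s rate of rise slowed after 1960; the histogram below collects this quantity across all the qualifying stations.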
Two of the stations show extreme rate changes; examination of their individual graphs shows clear discontinuities such as come from earthquakes, which can dramatically alter the height of the land. So I eliminated those two from consideration, leaving 100 tide gauge stations with at least 660 of the 720 months’ data during the 1930-1990 period, and without obvious discontinuities that invalidate their use for global sea level estimation.
And what do those 100 stations say? The average trend rate decrease for all 100 stations is 0.32 mm/yr. That’s quite close to the estimate from the cw global data of 0.36 mm/yr but nowhere near the estimate from jev data of 2.32 mm/yr. That fact argues very strongly that the cw global estimate is doing a much, much better job than the jev global estimate.
What about their locations? Here are the locations of the 100 tide gauge stations, with those showing a change bigger than 2.32 mm/yr in blue and those showing less change in red:
By far most of the stations (86 out of 100) show less change than the jev data, which argues that the jev data give a false impression of the change in the rate of sea level rise around 1960. Their locations don’t contradict that conclusion either; the stations showing larger change are scattered, apparently at random, except that a number of them lie along the northeast coast of the U.S.
It seems to me that this is nearly conclusive evidence that the jev data are seriously flawed. In particular, they point to a large decrease in the rate of sea level rise which is not only contradicted by the cw data, it is also contradicted by close examination of individual tide gauge stations.
How then did the jev reconstruction reach this result? In my opinion, it’s because of two serious analytical flaws. First, the “virtual station method” puts far too much emphasis on a small number of individual stations, enabling the very few which show the large change to dominate the vast majority which don’t. Second, using the “first-difference method” (actually a modified form of it) so greatly increases the influence of random noise that it alone makes the jev reconstruction unreliable (see this).
All of this means that we should be using the cw data, not the jev data. Let me make one thing clear: that does not mean that Jevrejeva et al. are incompetent. The “virtual station method” was an ingenious solution to a problem that needed addressing (essentially, area-weighting). The fact that it can overemphasize a small number of stations is a flaw, but when smart people invent new methods it’s all too easy for honest and intelligent researchers not to grasp all their implications right off the bat. The fact that the “first difference method” (which was well known even before their research) is tremendously flawed is something that was missed by nearly everybody. I myself considered it one of the best ways to align stations’ data until I looked very closely into the matter.
Jevrejeva et al. aren’t fools, and in no way are they dishonest; in fact, they did a great deal of work and identified important issues which we can’t ignore, making great progress in advancing our understanding of historical sea level rise. The fact that there were unknown flaws in some of their methods and that subsequent research has done a better job of it — that’s just science.
Unfortunately, climate deniers seem to know only two possible explanations for scientific data: either it supports their world-view, or it’s some kind of fraud. Real scientists know that research can arrive at mistaken conclusions, not because of some global conspiracy to destroy America, but because science is difficult, complex, intricate, and we don’t always get everything right the first time.
I suspect that among scientists the Jevrejeva et al. data will fall out of favor because it has demonstrable flaws. Among climate deniers, it will remain a favorite because it supports a tiny part of their climate-denier worldview. In my opinion, their support for purely ideological reasons is a genuine insult to the efforts of Jevrejeva et al. Criticism of their work for purely scientific reasons is how real science works, and I strongly suspect that Jevrejeva and colleagues would agree.
This blog is made possible by readers like you; join others by donating at Peaseblossom’s Closet.
Illuminating. I’ll have to ‘bookmark’ the fact that ‘jev’ is problematic. It’s sure to come up, sooner or later.
Is it possible for the Jevrejeva data to be redone, avoiding the first-difference method?
[Response: Yes. But it would require a great deal of work. I would also suggest finding an alternative to the “virtual station method.”]
Tamino is likely already aware of this post, but for others it is worth taking a look at this RealClimate post that discussed the Jevrejeva et al (2008) paper and the peculiarities of its “virtual station” method:
Two things strike me about the Jevrejeva et al. result.
The step around 1855 seems improbable; that’s a hell of a lot of water to apportion somewhere other than the oceans.
The data seem noisy before 1880 or so, which is when C&W start their reconstruction. Did C&W reject the validity of using older records?
I know Neil White sometimes comments here; it would be interesting to see his take on the reasons for the differences.
Seek and ye shall find the answer, grasshopper.
“In the 1860s there are only 7–14 locations available, all North of 30°N. In the 1870s, there is one record available South of 30°N but still none in the southern hemisphere and it is only in the second half of the 1880s (Fort Denison, Sydney, Australia starts in January 1886) that the first southern hemisphere record becomes available. While we attempted the reconstruction back to 1860, the results showed greater sensitivity to details of the method prior to the 1880s when the first southern hemisphere record is available (see below for further discussion). As a result, while we show the reconstruction back to 1860, we restricted the subsequent analysis (computation of trends, etc.) to after 1880. The number of locations with data available increases to 38 in 1900 (from 71 individual gauges), including several in the southern hemisphere, to about 85 locations in 1940 (from 130 individual gauges but with still less than 10 in the southern hemisphere), and to about 190 in 1960 (from about 305 individual gauges with about 50 locations in the southern hemisphere).”
Two points, one about spatial sampling, the other about first differences …
First, ignoring local effects of oceanic currents, to the degree any sampling locales are clustered together, they are not really providing estimates of an observational value as independent as one might like. Assuming the underlying phenomenon being observed shows continuity in space, this means that, all other things being equal, two points closer together geographically will track one another more readily over time than will points farther apart. Many geophysical series show sampling biases of convenience, e.g., paleomagnetic measurements in streambed walls rather than at the tops of mountains.
To compensate for these biases, the natural thing to do is downweight an observation in proportion to the number of other sampling stations within various rings of distance about the observing station. There are a number of ways of defining this, but the point is that if two observers are close together, they covary and, so, are not really offering two independent observations of the phenomenon.
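One simple version of this downweighting: give each station weight 1/k, where k is the number of stations (itself included) within some ring of distance. A sketch, assuming a haversine great-circle distance and an arbitrary 500 km ring (all names illustrative):

```python
import numpy as np

def cluster_weights(lats, lons, radius_km=500.0):
    """Downweight each station by the number of neighbours (itself
    included) within `radius_km`, so a tight cluster of k stations
    contributes roughly one station's worth of information."""
    lat = np.radians(np.asarray(lats, dtype=float))
    lon = np.radians(np.asarray(lons, dtype=float))
    # Great-circle distances between all pairs (haversine), in km
    dlat = lat[:, None] - lat[None, :]
    dlon = lon[:, None] - lon[None, :]
    a = (np.sin(dlat / 2) ** 2
         + np.cos(lat)[:, None] * np.cos(lat)[None, :] * np.sin(dlon / 2) ** 2)
    d = 2.0 * 6371.0 * np.arcsin(np.sqrt(a))
    neighbours = (d <= radius_km).sum(axis=1)  # includes self
    return 1.0 / neighbours
```

With two stations ~111 km apart and a third far away, this returns weights of 0.5, 0.5, and 1.0: the pair together counts as one independent observation.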
Second, in general, every first differencing is best done by smoothing the signal first and then differencing it. Otherwise, as you note, the differencing will simply amplify noise.
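The noise amplification is easy to demonstrate: first-differencing white noise doubles its variance, while smoothing first (here with a simple 12-month moving average, purely for illustration) tames it considerably:

```python
import numpy as np

def smoothed_first_difference(y, window=12):
    """Apply a simple moving average before first-differencing,
    rather than differencing the raw (noisy) series."""
    kernel = np.ones(window) / window
    smooth = np.convolve(np.asarray(y, dtype=float), kernel, mode="valid")
    return np.diff(smooth)

rng = np.random.default_rng(0)
noise = rng.normal(size=5000)           # unit-variance white noise
print(np.var(np.diff(noise)))                    # ≈ 2: variance doubled
print(np.var(smoothed_first_difference(noise)))  # ≈ 0.014: greatly reduced
```

For a moving average of width w, the differenced series reduces to (x[t] − x[t−w])/w, so its variance is 2/w² times the raw noise variance rather than 2 times it.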
RealClimate (I think Stefan) raised issues with Jevrejeva’s method years ago. In their criticisms of the IPCC/sea level science, Professors Curry and Koonin are heavily reliant on Jevrejeva’s 20th-century reconstruction. I once asked her if she had ever actually communicated with Jevrejeva, and she said no. She’s read the Sönke Dangendorf and Carling Hay papers on 20th-century SLR, but seems unable to let go of Jevrejeva’s high rate of SLR in the 20th century.
I thought that the tide gage reconstructions used at least a GIA model (e. g. Peltier).
I can now see how difficult a tide gage reconstruction can be. For example, do a CDF of ocean area versus latitude, starting at the SP and going to the NP. Now do a CDF of the tide gages in a similar manner. Compare the two. Not very close at all. Do the same with longitude. Now combine the two for a CDF of ocean area versus number of tide gages (radial distances, say starting in Northern Europe).
So the SH has only six gages (in this particular exercise), as shown in your last figure, but is ~80.9% ocean; while the NH has 94 gages, but is only ~60.3% ocean (gages predominantly on the east and west coasts of the North Atlantic Ocean).
Key West, the 10th gage counting by latitude, sits at ~24.5°N (GOM).
You would have to be really sure of the tide gage records in sparse locations. Need to read the Hay (2015) paper again …
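The CDF comparison described above can be sketched as a KS-style maximum gap between the gauge-latitude distribution and the area-weighted ocean-latitude distribution. A sketch under the assumption that ocean area per latitude band is supplied by the caller (all names illustrative):

```python
import numpy as np

def ecdf_gap(gauge_lats, ocean_lats, ocean_area_weights):
    """Maximum gap between the CDF of gauge latitudes and the
    area-weighted CDF of ocean latitude bands."""
    grid = np.linspace(-90.0, 90.0, 361)
    g = np.sort(np.asarray(gauge_lats, dtype=float))
    gauge_cdf = np.searchsorted(g, grid, side="right") / g.size
    w = np.asarray(ocean_area_weights, dtype=float)
    w = w / w.sum()
    order = np.argsort(ocean_lats)
    ocean_cdf = np.interp(grid, np.asarray(ocean_lats)[order],
                          np.cumsum(w[order]))
    return np.abs(gauge_cdf - ocean_cdf).max()
```

With every gauge in the northern hemisphere and a roughly symmetric ocean, the gap comes out near 0.5 — a stark quantitative statement of how unrepresentative the sampling is.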
I like the Hay et al. approach as well. Figure 2 shows an even slower 1901-1990 sea level rise than Church & White.
One of the neat things about Hay et al. is that it generates a full description of sea level rise along every coast (unlike Jev & CW that only generate the global average). This makes it possible to run sampling approaches that simulate the Jev and CW approaches in a world with complete information, that shows that these approaches can lead to overestimates…
[Response: Jevrejeva et al. generated a reconstruction for separate “ocean basins” as well as a global composite.]
Responses to some of the points raised above:
Firstly, the cw method does produce global fields, although in the later papers there is more emphasis on the GMSL time series. An earlier paper:
is mainly about regional patterns. This paper also has some discussion of the GIA corrections that were then available. GIA fields from Jerry Mitrovica’s group were used in this paper and also for cw 2011.
PS that link looks garbled to me – googling something like: church et al 2004 should work.
I would be concerned about isostatic rebound effects for data from north of 45° along the Pacific Northwest coast. Similarly for the east coast of North America and then also Europe, although the latitude for rejection depends upon the regional geologic history.
What happens if all those northerly stations are censored?
[Response: Not sure, but you would lose a large fraction of the available stations. Both C&W and Jevrejeva et al. apply an adjustment for GIA; it’s really necessary to get things right. I’m working on a new way to do that, which doesn’t require knowing what the GIA adjustment is.]
I used the Hay (2015) paper as a ‘so called’ anchor paper in a Google Scholar search (papers that reference Hay (2015); this builds a search tree as the subsequent most relevant papers branch off into their own (ofttimes overlapping) reference trees). This generated 30-40 relevant papers (plus the usual 10+ crank papers from GMSL deniers, quite easy to filter out if you are at all familiar with their names).
One key paper was Hay (2017) …
On the Robustness of Bayesian Fingerprinting Estimates of Global Sea Level Change (open access)
This updated paper addresses one critique of the Hay (2015) paper, the preponderance of a large number of high Arctic tide gages (see their Figure 1, where subsets of the full 622 tide gages used in Hay (2015) were taken that matched three other papers, CW2011 being one of them).
There is another group of papers that use SONEL (ULR6a) instead of GIA (e. g. Dangendorf (2017)); they also get a 20th-century (1900-1990) rate of ~1.1 mm/yr.
Hamlington (an ODU academic and a coauthor on the Nerem (2018) paper) has about four (or five) papers which, to date, indicate 1.5-1.7 mm/yr for the 20th century (again 1900-1990). These papers AFAIK use only ~15 long-term ‘so called’ high-quality tide gages.
Finally, there is a group of papers that constrain their sea level reconstructions from 1950 (or 1960 or 1969) to the present, as the early 1900-1949 (or 1959 or 1969) era appears to be the time period where the 1.1-1.2 mm/yr and the 1.5-1.7 mm/yr papers diverge (e. g. the early part of the 20th century appears to present the greatest difficulties in reaching a ‘so called’ consensus (e. g. similar sea level) among reconstructions).
Jevrejeva and colleagues continue to publish reconstructions into 2018 …
A Consistent Sea-Level Reconstruction and Its Budget on Basin and Global Scales over 1958–2014
(paywalled; need to wait a year from AMS publications (or find another method, like asking the authors, or …))
Frederikse (lead author of the above paper) is a recent PhD; his thesis is here …
Sea-level changes on multiple spatial scales: estimates and contributing processes
[Response: Thanks for sharing such an excellent list of resources.]
Nicely, nicely done, @Everett F Sargent !
This is the exact list I was pointing out on CargoCult Etc. They stuck with the cult, which is almost completely based upon Jevrejeva’s very high rate of SLR from ~1920 through WW2.
But recently a skeptic, WE of WUWT, tossed C&W back at Gavin on twitter.
Tamino, as always I appreciate your analysis. I noticed that in your detailed assessment the graph stops at 1990. But looking at the full data set, it appears both groups show a significant increase since 1990. Hopefully that will be your next blog entry, as 1 mm/yr is not of much concern, but if sea level rise rates are accelerating, that will be a problem. Thank you