Dave Burton has visited, and commented extensively on this post. He takes exception to the sea level data I used, and suggests that sea level has been rising at a steady, unchanging rate “since the late 1920s.” To quote him:
Neil, many locations have seen a little bit of acceleration “since the 1800s” — but not since the late 1920s.
Is that true? I’m skeptical.
I’ll make it even harder to find acceleration: I’ll use only data “since 1950.” If Dave Burton is right, it should be quite hard to find any tide gauge stations which show acceleration. I’ll start with San Diego. I could plot it in a way which makes the changes harder to see:
Or I could plot just the data since 1950, in a way which makes the changes easier to see:
The red line shows a lowess smooth fit to the data (with pink shading the 2σ uncertainty range). The smooth enables me to estimate the rate at which sea level is changing; quite useful, although it doesn’t provide a statistical test of whether or not the rate shows any change. To that end, I applied changepoint analysis to identify when the rate might have changed and test its statistical significance. That enables me to estimate the rate during two different time spans since 1950 — if the rate changed, then the “before” and “after” rates should be significantly different. Here are the results for San Diego:
The red line shows the rate estimated from the lowess smooth; the blue line the rate estimated from a piecewise linear fit. Note that after 2011 it seems to have sped up. Pink shading shows the uncertainty range (2σ) for the lowess smooth, dashed blue lines show the uncertainty range (2σ) for the piecewise linear fit.
I strongly suspect that the extremity of the recent rise at San Diego is largely due to local factors, not entirely due to global sea level acceleration. As for the issue at hand — is there acceleration (statistically significant) in tide gauge records after the “late 1920s”? — at San Diego, yes there is.
The same is true for the data from Key West, FL:
Likewise for Boston:
Likewise for St. Petersburg, FL:
Of course, not all tide gauge data sets show acceleration. You might think Honolulu does if you only look at the graphs:
But not so. The very recent rate might seem to be “significantly” higher than the rate before, but that’s an illusion caused by the fact that we get to choose the “changepoint time” to give us the biggest before/after rate difference. That’s the root of the “selection bias” problem; a proper statistical test says no, the Honolulu data don’t show statistically significant acceleration after 1950. It might look like it — but the numbers don’t provide real evidence.
As it turns out, it’s easy to find tide gauge stations which show statistically significant acceleration after 1950 (which means, of course, they show it after the “late 1920s”). I didn’t search hard for them; I just looked at the first half-dozen tide gauge records I knew off the top of my head had plenty of data, and there they were. It might be harder to find a record which does not show acceleration than to find one which does.
Why, then, does Dave Burton find it so hard to identify them?
A few possibilities suggest themselves. First: after the “late 1920s” many stations show both acceleration and deceleration (negative acceleration). If you test the post-“late 1920s” data using only a quadratic fit to test for rate change, the negative acceleration early and the positive acceleration late will often cancel each other out in that particular statistical test, giving the incorrect result.
There’s also the fact that the pattern of change at most tide gauge stations does not resemble a parabola. That makes a quadratic fit a weak test for finding acceleration/deceleration. The piecewise linear fit, for the data post-1950, seems to have good statistical power for detecting acceleration — but it does require the “changepoint analysis” approach to overcome the selection bias problem.
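The changepoint itself can be chosen by scanning candidate times and keeping the hinge fit with the smallest residual sum of squares. A sketch (not Tamino's code; synthetic data with a known rate change at 1995):

```python
import numpy as np

def best_changepoint(t, y, candidates):
    """Scan candidate changepoints; return the one whose piecewise
    linear (hinge) fit has the smallest residual sum of squares."""
    best = None
    for t0 in candidates:
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - t0, 0.0)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ coef) ** 2)
        if best is None or rss < best[1]:
            best = (t0, rss, coef)
    return best

# Quarterly series rising at 1.5 mm/yr, speeding up by 3 mm/yr in 1995.
t = np.arange(1950.0, 2020.0, 0.25)
y = 1.5 * (t - 1950.0) + 3.0 * np.maximum(t - 1995.0, 0.0)
t0, rss, coef = best_changepoint(t, y, np.arange(1960.0, 2010.0, 1.0))
# The scan recovers the 1995 changepoint.
```

This maximization step is exactly what creates the selection bias problem discussed below for Honolulu: the best-fitting changepoint always looks impressive, so its significance has to be judged against the distribution of the best fit under a no-change null, not against an ordinary single-test threshold.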
One last possibility: I get the impression that Dave Burton very strongly wants sea level to show no acceleration. This might make him tend to see only what he wants to see (or, not to see what he doesn’t want to see). It’s the age-old problem of “confirmation bias” that we can all fall prey to.
This blog is made possible by readers like you; join others by donating at My Wee Dragon.
On his Web site, Burton finds a single “global” rate of SLR by averaging all the gauges’ simple linear trends regardless of temporal coverage or missing data, which should tell us all we need to know.
“On his Web site, Burton finds a single “global” rate of SLR by averaging all the gauges’ simple linear trends regardless of temporal coverage or missing data, which should tell us all we need to know.”
I used the entire PMSL tide gauge data set, and generated
– (1) two time series out of absolute values, one with direct averaging, one with these values distributed according to the stations’ coordinates into a 2.5 degree grid, out of which a global time series was again generated:
You can clearly see what happens when you average all stations’ data together into the monthly time series without preliminary gridding, which avoids the predominance of grid cells encompassing many more gauges than others.
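The gridding step described above can be sketched in a few lines of Python (a toy illustration, not the commenter's actual code; the 2.5-degree cell size is taken from the comment):

```python
import numpy as np

def grid_average(lats, lons, values, cell=2.5):
    """Average station values within lat/lon cells of `cell` degrees,
    then average the cell means, so a cell crowded with many gauges
    counts no more than a cell holding a single gauge."""
    cells = {}
    for lat, lon, v in zip(lats, lons, values):
        key = (np.floor(lat / cell), np.floor(lon / cell))
        cells.setdefault(key, []).append(v)
    return np.mean([np.mean(v) for v in cells.values()])

# Ten gauges crowded into one cell all reading 5.0 mm/yr,
# plus one lone gauge elsewhere reading 1.0 mm/yr:
lats = [50.1] * 10 + [10.0]
lons = [0.3] * 10 + [120.0]
vals = [5.0] * 10 + [1.0]

naive = np.mean(vals)                      # dominated by the crowded cell
gridded = grid_average(lats, lons, vals)   # each cell weighted equally
```

Here the naive average is pulled almost entirely toward the crowded cell's value, while the gridded average weights the two regions equally, which is the point of gridding before forming a global mean.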
– (2) a time series from all accessible gauges with sufficient data to compute separate anomalies for each, with respect to the mean of 1993–2013.
As with the absolute values, these anomalies were then also gridded according to the stations’ coordinates:
– (3) a time series of the linear trends for periods whose start years are 5 years apart, from 1883–2018 to 1998–2018.
If there was no acceleration, the entire curve would look like a straight line indicating a pure linear trend, wouldn’t it?
Btw, here is, for all stations having had sufficient data for 1993-2013, a sorted list of their respective trends (in mm/decade) for that period:
Burton will still tell you in 10 years that he is right.
I will be surprised if Dave Burton reads through and says, “yeah. I guess there is acceleration” because I doubt that he posts his ideas in good faith, in any meaningful attempt to understand the scientific data. I think DB posts his arguments in bad faith to serve political ends.
I have seen DB push these arguments on other sites and I have tangled with deniers who cite his “work” to support outrageous claims. This is an excellent take down.
Thanks for this analysis, Tamino, it’s very clear and easy to follow. One point I am confused on, though, is the situation in Honolulu. The error bars on the piecewise linear fit are very large after the “jump” but they don’t seem to overlap the error bars before the jump. I’m not clear on why that doesn’t mean there has been an acceleration in Honolulu.
[Response: Suppose we test a rate difference at some change point in time, at 95% significance, so it’s unlikely to hit by accident. But we didn’t just try one change point — I selected the one (of many possible) which gave the biggest before/after difference. That means we had *many* chances to exceed that 5% probability threshold. It’s like buying a lottery ticket with only a 5% chance of winning by accident — but if you buy 10 lottery tickets, your chance of winning at least one of them is a lot higher than 5%.
That, in essence, is the “selection bias” problem. For testing purposes, I found the true distribution (accounting for selection bias) by Monte Carlo simulations.]
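The Monte Carlo idea in that response can be sketched directly: simulate series with no rate change at all, and compare the null distribution of the rate-change statistic when the changepoint is cherry-picked versus fixed in advance. This is a toy version under a white-noise null (real tide gauge noise is autocorrelated, which widens both distributions):

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(100.0)                  # 100 "years" of annual data
candidates = np.arange(20.0, 80.0)    # 60 candidate changepoints

def slope_changes(y):
    """|slope change| of a hinge fit, at every candidate changepoint."""
    out = []
    for t0 in candidates:
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - t0, 0.0)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        out.append(abs(coef[2]))      # coef[2] = change in slope at t0
    return np.array(out)

# Null hypothesis: no rate change at all -- pure white noise.
n_sim = 400
max_stat = np.empty(n_sim)            # changepoint picked to look biggest
fix_stat = np.empty(n_sim)            # one changepoint fixed in advance
for i in range(n_sim):
    y = rng.standard_normal(t.size)
    s = slope_changes(y)
    max_stat[i] = s.max()
    fix_stat[i] = s[30]               # the candidate at t0 = 50.0

# The 95th-percentile threshold for the *maximized* statistic sits well
# above the threshold for a pre-chosen changepoint: picking the most
# impressive changepoint inflates apparent significance.
thresh_max = np.quantile(max_stat, 0.95)
thresh_fix = np.quantile(fix_stat, 0.95)
```

A naive test that compares the cherry-picked statistic against the fixed-changepoint threshold will call "significant" far more than 5% of pure-noise series, which is exactly the trap the Honolulu graphs illustrate.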
SLR is not spatially uniform, because of complicated reasons having to do with topography, current flows, Earth’s rotation, etc. Because the oceans haven’t had time to complete their cycle since serious warming from forcing has occurred, all that heat is in the top 1000 meters, and most of it in the top 500 meters. Indeed, even ENSO doesn’t involve thermal stratification below about 300 meters:
This means thermal energy flows with currents. With mesoscale eddies and the like, this steric SLR can vary, not only with time, but strongly with locale.
The effect of averaging all gauges together is to wash a lot of this out, and to understate it. That’s because it’s based upon a bathtub conceptual model of SLR and, as I’ve described, that’s just wrong.
“The effect of averaging all gauges together is to wash a lot of this out, and to understate it.”
That is, from my layman’s point of view, a bit too simple.
It is the same as pretending that local temperatures (or sea ice extent and area amounts in grid cells) can’t be averaged to regional or global ones.
The best way to understand how wrong such an assumption is, is to look at the amazing relation between
– an average of tens of thousands of weather station temperature anomalies, and
– a global averaging of O2 microwave emissions from the lower atmospheric layers.
Looks like this:
(Sources: GHCN daily, UAH6.0 LT over land)
Your opinion reminds me of a paper written over a decade ago:
It was pretty good.
(Feel free to search for an existing pdf file.)
People processing averages of data do not understate anything. They just want to see what these averages look like, and how they behave when compared with other averages.
All around the world, thousands of engineers interpolate and average, without anybody noticing that. With one exception: a corner I like to name ‘climate pseudoskepticism’.
ecoquant | October 12, 2019 at 10:54 pm
And? What’s your point?
Do you think I’m the guy you might impress with a few mudo pictures?
Look at this:
That is what we might talk about.
Those aren’t mine, those are from papers by people who know something, the first being in a major peer reviewed journal. Moreover, it’s a result that’s pretty impressive, fusing together several data sources, and then correcting for the things you always need to correct, producing a consensus, and then throwing stones at your own work to be sure there’s a there there. It’s called science.
You just have an uninterpreted table of numbers. If that’s all you’ve got, Wolfram Alpha is better than you are.
My unscientific take. As I understand it, the uptake of heat in the oceans has been accelerating, i.e. the oceans have been warming more rapidly. In this case, isn’t it impossible for sea level rise not to be accelerating? When we add increased runoff from melting glaciers and ice-sheets, that just adds to the effect.
However, Dave Burton at least acknowledges that sea level has been rising for a long time. What is his explanation of that rise?
Given the response of sea level to comparatively minor heating, for example,
I’d say it’s impossible to dump as much heat into the oceans as we are and not see a pronounced effect. Accordingly, if heating is accelerating, so is SLR.
One less appreciated effect which Professor Susan Solomon addressed in her keynote at MIT last week
The relevant portion begins at 15:30 in the talk where the link starts, she has slide at 16:50 which is pertinent, and she delivers the whammy beginning at about 17:00, which is
By “stabilize” Professor Solomon means you stop increasing atmospheric CO2 concentrations. So, given that the instantaneous partial derivative of SLR (or, for that matter, of temperature) with respect to CO2 concentration at that point is infinite, you’ve got some serious acceleration there.
“So, roughly, twice as much warming … is in the pipeline, even after you stabilize.”
Won’t that warming just maintain the current temperature instead of increase it?
It’ll plateau, as long as there are no additional CO2 inputs, once the pipeline is exhausted, that is, after the 50%–100% increase. The points are (a) there is a lag before equilibration, and (b), more importantly, Earth won’t cool for centuries even if GHG emissions are stopped.
Stopping GHG emissions keeps the equilibrium from getting worse.
As Prof Solomon noted in her talk, based upon her own work with colleagues, and upon independent work by Prof David Archer and colleagues, many people who know about climate change don’t know about this. It’s been known since before 2010.
This is why, in part, we need to stop emitting so urgently.
The “stabilisation” of CO2 levels is hopefully not where we are going.
Zero net emissions will see CO2 levels begin to drop so Susan Solomon’s extra warming on a millennial time-scale for constant CO2 will be replaced by roughly constant temperature with falling CO2 levels.
But even at constant temperature, SLR remains a multi-millennial problem.
IPCC AR5 Fig 13.14 provides estimates of SLR against global temperature rise for 2,000 years (RHS) and 10,000 years (LHS). (The various graphs show contributions from (a,f) Thermal expansion, (b,g) Glaciers, (c,h) Greenland, (d,i) Antarctica, & (e,j) All sources.) The big difference between the 10K and the 2K is the melt-down of Greenland, which kicks off with a level of warming somewhere between +1ºC and +2ºC – a very good reason for reducing the upper limit of AGW to +1.5ºC.
Incredibly, given the subject being addressed by this OP (the delusions of Dave Burton), the man himself professes only to have recently learned of the existence of this OP, and I see he’s still trolling it up-and-down the previous thread.
If this crazyman daveburton were to appear in this thread, I thought to welcome him with something of a challenge: the acceleration in SLR sits patiently within all that data which he so loudly professes shows no acceleration.
The data section of his website provides tabulated statistics for various collections of SLR data. The top collection is NOAA’s 2015/2016 list of 375 Long Term Trend (LTT) tide stations, this data including values for ‘Trend’ and ‘Accel’. While there is no explanation of the calculation used to arrive at ‘Accel’, presumably it is in mm/yr/yr and calculated over the full record for each of the 375 stations. Helpfully, the table provides an average of these 375 values: +0.0524. If this is indeed mm/yr/yr, that is quite an acceleration for such lengthy records, given that estimates of today’s acceleration in SLR (which daveburton tries so hard to deny) are just twice that value, at 0.1 mm/yr/yr.
And we can burrow a little deeper as these 375 accelerations can be averaged for differing length of record, yielding the following table:-
So, while these results do use data of unknown provenance, how can anybody but a crazyman blithely ignore the signs of accelerating SLR through recent decades shown (albeit shown naively) within the data presented by daveburton’s website?
Al asks, “how can anybody but a crazyman blithely ignore the signs of accelerating SLR through recent decades shown (albeit shown naively) within the data presented by daveburton’s website?” I don’t think it’s hard to ignore the data if the ideology, and the commitment to same, is sufficient. I don’t bother to engage much with the ideologues, for the same reason I don’t wrestle with pigs: you both get dirty, and the pig likes it.
Feed trolls and wrestle with pigs if that is your thing.
I will say that when I do engage with the ideologues, I work to engage respectfully, no name-calling, etc. Sarcasm? yes. Snide humor? sometimes. Name-calling? Nah, has no appeal to me.
Tamino wrote, I’ll use only data “since 1950.” … I’ll start with San Diego…”
Why 1950? We’ll come back to that.
First of all, here’s San Diego’s sea-level record:
As you can see, there’ve been >112 years of continuous measurements, and still no detectable acceleration.
To the eye the trend looks straight as an arrow. As any engineer could tell you, that means there’s been no practically significant acceleration.
Linear regression finds a linear trend of 2.176 ±0.184 mm/yr, and quadratic regression finds an acceleration of 0.00879 ±0.01259 mm/yr², which is neither statistically nor practically significant.
If you ignore the CI (which you should never do!), then a century of continuous acceleration at 0.00879 mm/yr² would raise sea-level by a completely negligible 1.7 inches.
If you discard the first 44 years of data, as Tamino did, and start the regression in 1950, you find a linear trend of 2.331 ±0.463 mm/yr, which is nearly identical to the full record (except with a larger uncertainty, of course), and an acceleration of 0.0259 ±0.0517 mm/yr², which is again insignificant.
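It is worth making the quadratic-fit convention explicit, since "acceleration" is sometimes reported as the quadratic coefficient c and sometimes as the second derivative 2c. A minimal numpy sketch on synthetic data (not the San Diego record, and I'm assuming the second-derivative convention; the quoted confidence intervals would additionally require the regression covariance and an autocorrelation correction):

```python
import numpy as np

def quadratic_acceleration(t, y):
    """Fit y = a + b*t + c*t**2 and return the acceleration 2*c
    (the second derivative of the fit), e.g. in mm/yr^2."""
    tc = t - t.mean()             # center time to reduce collinearity
    coef = np.polyfit(tc, y, 2)   # polyfit returns [c, b, a]
    return 2.0 * coef[0]

# Synthetic record: 2 mm/yr rise plus 0.02 mm/yr^2 acceleration, no noise.
t = np.arange(1906.0, 2019.0)     # ~113 years of annual data
x = t - 1906.0
y = 2.0 * x + 0.01 * x**2         # second derivative = 0.02 mm/yr^2
acc = quadratic_acceleration(t, y)
```

Over a century, a constant acceleration a contributes (1/2)·a·t² of extra rise, which is how one converts a small mm/yr² figure into the "inches per century" comparisons quoted above.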
So, you might wonder, what’s wrong with Tamino’s approach?
To answer that, first we need to understand what drives short-term sea-level trends at San Diego (and many other places). Here’s a little composite graph that I made a few years ago, which shows San Diego (eastern Pacific) vs. Kwajalein (western Pacific) vs an ENSO index:
Do you see it? During El Niño, easterly trade winds diminish and the entire Pacific Ocean sloshes east, raising sea-level at San Diego, and lowering sea-level at Kwajalein. During La Niña and ENSO-neutral conditions, especially of long duration, the opposite happens.
So, let’s compare the measured sea-level plot to what Tamino did, in his effort to find evidence of acceleration. I’ve circled some of the El Niño peaks:
Do you see it? Tamino didn’t measure changes in long-term trend, he just measured ENSO slosh.
That big run-up at the end of his graph corresponds to the transition from the strong La Niña of 1999-2000, to the big 2015-16 El Niño.
The “LO” in “LOESS” stands for “LOcal,” and the LOESS “span” parameter controls just how “local” it is (i.e., how many years of history are used in the computation). If you use too short an interval, e.g., 15 years, you’ll find spurious trends due to things like ENSO.
So, how much data should we use? Fortunately, there’s a pretty good consensus about that in the literature. The answer is that it takes about sixty years of tide gauge data to calculate a robust trend, which is not overly distorted by cyclical and quasi-cyclical factors like ENSO and AMO.
For example, here’s Zervas (2009), NOAA Technical Report NOS CO-OPS 053, Sea Level Variations of the United States, 1854 – 2006, p. xiii:
Likewise, here’s Wenzel & Schröter (2014). Global and regional sea level change during the 20th century. J. Geophys. Res. Oceans. (See the Concluding remarks.) doi:10.1002/2014JC009900.
Trying to calculate trends from short records, which is effectively what Tamino did with his short-span LOESS, is simply a mistake. Here’s AR5 Chapter 3:
I bolded part of that quote to point out another problem with what Tamino did. Starting with 1950 is essentially a cherry-pick, which helps create the illusion of acceleration at some locations.
Even though CO2 levels have increased by 33% (more than 100 ppmv), and CH4 levels have increased by about 79%, the best long measurement records show that there’s still been no detectable, significant, sustained acceleration in rates of coastal sea-level rise since the 1920s.
[RESPONSE: You will have my answer soon.]
Just a small correction: my understanding is that El Niño has nothing to do with winds but, rather, with lunar tidal forcing. @WebHubTel (Paul Pukite), who comments occasionally at ATTP and sometimes here, is an expert on this, and the above is a link to some of his great work on the relevant oscillations. He also cites the 2019 paper by Lin and Qian, which makes the same argument he has made for a long time.
As far as the rest of it goes, I’ll leave that to Tamino, except to say that for responses of dynamic systems, lines are rarely appropriate, simply because their residuals are correlated.
Thanks Jan. The cause-vs-correlation issue with wind is that a wind will also result from the enormous pressure differential set up by the ENSO dipole (see the SOI, which is the pressure component of ENSO). Lin & Qian are showing that the first signs of the dipole emerge from subsurface ocean waves which are likely driven by tidal forcing. The paper is open access:
Lin, J. & Qian, T. “Switch Between El Nino and La Nina is Caused by Subsurface Ocean Waves Likely Driven by Lunar Tidal Forcing.” Sci Rep 9, 1–10 (2019).
Of course the monotonic sea-level rise is a different mechanism
A “slosh” is by definition a cycle (though not necessarily regular and neatly periodic) with no trend. That is, if the water is higher in one part of a basin due to a slosh, it must be correspondingly lower elsewhere to make up for that fact.
How, then, do higher tides in, say, San Diego have any effect on the linear or higher-order trends in the basin as a whole?
“How, then, do higher tides in, say, San Diego have any effect on the linear or higher-order trends in the basin as a whole?”
None, really, on the trends, since El Niños revert to a mean of zero (similar to tides in that respect).
There’s an interesting aspect to the El Niño “slosh”, which is about 150 cm in amplitude according to the San Diego tidal records. Associated with El Niño is a pressure differential which can be over 100 hPa, and through a behavior known as the inverted barometer effect, every 1 hPa change in atmospheric pressure causes about a 1 cm sea level change. So most of the apparent sloshing is associated with the El Niño atmospheric pressure differential.
Most of the actual sloshing in an El Niño event takes place at the subsurface thermocline, where the thermocline moves by tens or hundreds of meters; this is the mechanism whereby cold waters are drawn closer to the surface.
I repeat: here are the trends from 1883-2019 till 1998-2019:
1883: 1.40 ± 0.02
1888: 1.45 ± 0.02
1893: 1.49 ± 0.02
1898: 1.54 ± 0.02
1903: 1.59 ± 0.02
1908: 1.60 ± 0.02
1913: 1.68 ± 0.02
1918: 1.75 ± 0.02
1923: 1.78 ± 0.02
1928: 1.79 ± 0.02
1933: 1.78 ± 0.02
1938: 1.74 ± 0.02
1943: 1.69 ± 0.03
1948: 1.67 ± 0.03
1953: 1.76 ± 0.03
1958: 1.88 ± 0.03
1963: 2.05 ± 0.04
1968: 2.15 ± 0.04
1973: 2.33 ± 0.05
1978: 2.55 ± 0.05
1983: 2.78 ± 0.06
1988: 3.03 ± 0.07
1993: 3.07 ± 0.09
1998: 2.96 ± 0.12
You won’t change anything about that.
People should know that Mr Burton is a member of the CO2 Coalition (search for his name), which not only promulgates climate denial and opposes measures to respond to SLR in places like North Carolina, but is also funded by Koch Charities and others like the Sarah Scaife Foundation.
Accordingly, my prior on his trustworthiness in communicating anything accurate about science, let alone climate, is very low. He hangs out with 45’s climate toady Happer and others.
In addition to responding to what Tamino will offer, scientifically speaking, you also need to respond in detail to the results of two additional and recent papers,
(1) S. Yi, K. Heki, A. Qian, “Acceleration in the global mean sea level rise: 2005–2015”, 2017, GRL.
(2) J. Feng, D. Li, T. Wang, Q. Liu, L. Deng, L. Zhao, “Acceleration of the Extreme Sea Level Rise Along the Chinese Coast”, 2019, Earth and Space Science.
This figure is from the first:
And this figure is from the second:
Thanks for these links to valuable information.
I’m not at all fixated on info exclusively showing sea level rise!
But since the publication of Church & White
Sea-Level Rise from the Late 19th to the Early 21st Century
nobody has managed to scientifically and accurately contradict them.
I suppose that people like Burton have never really read their paper with due attention, and thus comfortably continue to ignore the paper’s very first sentence:
“We estimate the rise in global average sea level from satellite altimeter data for 1993–2009 and from coastal and island sea-level measurements from 1880 to 2009. For 1993–2009 and after correcting for glacial isostatic adjustment, the estimated rate of rise is 3.2 ± 0.4 mm year−1 from the satellite data and 2.8 ± 0.8 mm year−1 from the in situ data.”
What I don’t understand is this:
If GIA is, as so many ‘skeptics’ explain, such a major factor, how then is it possible that a raw layman’s evaluation of raw PMSL data, deliberately ignoring this factor, stays so close to C&W’s professional evaluation, which takes GIA (and lots of other factors) very well into account?
Here is a comparison of the linear estimates for periods whose start years are 5 years apart, from 1883–2013 to 1993–2013:
My guess: the differences in the two charts, between the red and black plots, are more due to the layman’s raw data processing than to GIA etc having been left outside.
If the ‘layman’s evaluation’ included a reasonable number of tide gauges and was globally representative, you would expect the data to run close to the GIA-adjusted data. I’d rather speculate about why his trace turned out more wobbly.
The GIA adjustments for tide gauges are almost all a constant rate but the rate does vary greatly geographically.
However, one way the non-GIA-corrected graph can gain wobbles absent from the GIA-corrected graph is a change in the number of tide gauges within the global sample through the period of analysis, a change which alters the average GIA correction. So if you add in a bunch of tide gauges in low GIA-correction areas, your non-GIA-corrected trace would drop relatively; if you then added yet more tide gauges, generally from high GIA-correction areas, your trace would rise.
You can do running means if you want, but that leaves choice of window size as an arbitrary parameter and neglects the question of what if different window sizes are appropriate for different times. I’ve in fact just recommended that he use a different approach and let the data dictate.
I have myself looked at Church and White-style data elsewhere, although not with the smoothing spline approach just recommended to Burton. And if you look there, you’ll see that on average there is an acceleration, although it comes in fits and spurts.
I’d say, and this should be familiar from this blog, merely fitting and judging by eye that something looks close is innumerate: You need to quantify how close the fit and why. As I keep saying, looking at the character of residuals is what it’s all about.
Fitting a smoothing spline using something like generalized cross validation or the RTS filter takes care of all that by construction.
Al Rodger | October 14, 2019 at 2:25 pm
Thanks for your very interesting reply.
1. “If the ‘layman’s evaluation’ included a reasonable number of tide gauges and was representative of global, you would expect the data to run close to GIA-adjusted data.”
The major aspect was a raw evaluation, and that required all PMSL tide gauges to be considered, regardless of their level of completeness. There were 1513 when I collected absolute PMSL data in May 2019.
But for the anomaly construction, I used a scheme of 4 reference periods, within which only those stations were considered which had sufficient data for that construction. For the period 1993–2013, for example, you have about 670 stations. All stations with a shorter lifetime, or too many holes in the record leading to empty months in the baseline, were excluded.
2. “So if you add in a bunch of tide gauges in lower GIA-correction areas, your non-GIA-corrected trace would drop relatively and if then you added in yet more tide gauges but generally from high GIA-correction areas, your trace would rise.”
Maybe! But again: I think that my very simple, raw evaluation, lacking for example any search for homogeneity, had a far greater influence on the output than including (or not) factors like GIA or subsidence.
Please have a look at the bindidon/C&W plots again:
You can see that their historic parts differ greatly, because the paucity of data in that period led to much higher deviations in the monthly anomalies of my evaluation than in C&W’s professional work.
Perfect anomaly computation is hard work :-)
This old article from Real Climate also concludes that sea level rise has accelerated when using just about any starting date since records began.
Mike Roberts | October 14, 2019 at 4:26 am
But even more interesting is that at the time Stefan Rahmstorf published the head post about
-an article written by Houston / Dean:
– and his / Vermeer’s answer:
rahmstorf_vermeer_2011.pdf
he was notified of yet another answer by Houston / Dean:
The sequence is interesting, as it shows us that, as in the context of temperatures, people doubting rising sea levels always select measurement points that have existed for quite a long time, and therefore automatically exclude a considerable number of stations worldwide, thus keeping (intentionally or not) only historic sites mostly located in the US, Europe, or Japan.