For years, the Dave Burtons and Judith Currys of this world have shown a graph (from NOAA) of sea level measured by a single tide gauge at one location, followed by proclamations of “no acceleration” and/or “sea level rise has been steady.” They choose a gauge for which the visual impression given by the graph supports that idea, whether the numbers do or not. It helps that NOAA conveniently adds a best-fit straight line to their graphs of tide-gauge sea level, and putting a straight line on the graph plants the idea of a straight-line trend (i.e. a constant rate of sea level rise).
If you do the math, you often find that “steady rise” just ain’t so; demonstrably, the evidence contradicts the visual impression. That’s the way statistics is, and I’ve often emphasized that “visual impression” can err in either direction, suggesting a false positive or obscuring a true one. As for the straight line plotted on their graphs, NOAA makes it clear that they do not do this to claim or imply that sea level is following a straight line; it’s just to show what the best-fit straight line is. Still, it’s hard not to get that idea anyway.
Then there’s the fact that the noise level in the data from a single tide gauge is very high, compared to the noise level in global mean sea level (GMSL). That makes it harder to detect and confirm acceleration even when present; the signal (in this case, change in the rate) has to be strong enough to achieve a sufficient signal-to-noise ratio for “statistical significance,” and that usually requires either a very big rate change, or persistence for a long time, for the accumulated signal level to compete with such a big noise level.
The noise itself is not the simplest kind (white noise). It’s strongly autocorrelated, which must be compensated for when testing for statistical significance and computing the uncertainty of estimated parameters. This is an issue that features prominently on this blog, and while NOAA uses an AR(1) model for the noise (it’s the usual approach in science today), I’ve championed the use of a more complex ARMA(1,1) noise model, because I think even the AR(1) model makes it too easy to think you’ve got statistical significance when you don’t, and gives you too much confidence in your parameter estimates.
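A minimal sketch of why this matters (the parameters are made up for illustration, not fitted to any station, and this is not the analysis code behind this post): simulate a steady trend plus ARMA(1,1) noise, then compare the naive white-noise standard error of the trend with a HAC (Newey–West) standard error, one off-the-shelf way to allow for autocorrelated noise.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Simulate 50 years of monthly "sea level": a steady 2 mm/yr trend
# plus ARMA(1,1) noise (all parameters here are purely illustrative).
n = 600
t = np.arange(n) / 12.0                  # time in years
phi, theta, sigma = 0.6, 0.3, 8.0        # AR, MA, and innovation scale
eps = rng.normal(0.0, sigma, n)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = phi * noise[i - 1] + eps[i] + theta * eps[i - 1]
y = 2.0 * t + noise

X = sm.add_constant(t)
naive = sm.OLS(y, X).fit()               # assumes white noise
hac = sm.OLS(y, X).fit(cov_type="HAC",   # allows autocorrelated noise
                       cov_kwds={"maxlags": 24})

print(f"estimated trend: {naive.params[1]:.2f} mm/yr")
print(f"naive s.e.: {naive.bse[1]:.3f}   HAC s.e.: {hac.bse[1]:.3f}")
```

With these parameters the autocorrelation-aware standard error comes out substantially larger than the naive one, which is the whole point: treat autocorrelated noise as white and “significance” gets far too easy to reach.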
Some statistical models, including some of my favorites, involve a change which happens at a particular moment. That complicates the statistics rather severely: with so many moments to choose from, the chance of finding one that makes your sought-after pattern look good (even to statistical tests) increases dramatically, yet it’s only dumb luck because you tried (or could have tried) so many times. In fact, the “moment” is usually chosen deliberately because it suggests itself, which is truly begging the question. The issue is sometimes called “selection bias,” sometimes the “multiple testing problem,” and it figured prominently in the debate over the non-existent “pause” in global temperature. It can be compensated for; that’s the essence of change-point analysis. And if it’s too messy to compute what you need theoretically, celebrate living in the high-speed-computer era, when Monte Carlo simulations make that easy.
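Here is a minimal Monte Carlo sketch of that compensation (all parameters illustrative): simulate the null hypothesis of no change point many times, search over candidate break points in each simulation, and take the 95th percentile of the resulting maximum test statistics as the corrected significance threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_f_stat(y, t, breaks):
    """Largest F statistic over all candidate change points, comparing a
    continuous piecewise-linear fit against a plain straight line."""
    X0 = np.column_stack([np.ones_like(t), t])
    beta0, *_ = np.linalg.lstsq(X0, y, rcond=None)
    rss0 = np.sum((y - X0 @ beta0) ** 2)
    best = 0.0
    for tc in breaks:
        X1 = np.column_stack([np.ones_like(t), t, np.maximum(t - tc, 0.0)])
        beta1, *_ = np.linalg.lstsq(X1, y, rcond=None)
        rss1 = np.sum((y - X1 @ beta1) ** 2)
        best = max(best, (rss0 - rss1) / (rss1 / (len(y) - 3)))
    return best

n = 200
t = np.linspace(0.0, 50.0, n)
candidates = t[20:-20:5]       # candidate break points, kept away from the ends
phi, sigma = 0.5, 5.0          # AR(1) noise, parameters purely illustrative

# Simulate the null: no change point at all, just AR(1) noise.  (Any true
# straight-line trend is absorbed by both models, so none need be added.)
max_f = []
for _ in range(2000):
    eps = rng.normal(0.0, sigma, n)
    noise = np.zeros(n)
    for i in range(1, n):
        noise[i] = phi * noise[i - 1] + eps[i]
    max_f.append(max_f_stat(noise, t, candidates))

# The 95th percentile of the max-F distribution is the corrected threshold.
print(f"selection-bias-corrected threshold: {np.quantile(max_f, 0.95):.1f}")
```

The corrected threshold comes out well above the single-test F critical value (about 3.9 for these degrees of freedom); that gap is exactly the penalty for having had so many moments to choose from.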
And let’s not forget that the way the data are graphed, and the scale chosen, has a profound effect on the visual impression. It’s usually straightforward to choose the view which gives the impression you want to give, or confounds the one you don’t like.
So why, some might wonder, did I choose in my last post to show the data from a single tide gauge station (Wilmington, NC), do no analysis at all, but manipulate the graph to give a distinct visual impression? It wasn’t much of a manipulation — all I did was plot the pre-2012 data in black and the post-2012 data in red — but it did the job.
True answer: because I thought it had the best chance to make Dave Burton say (to himself), “Ouch.”
I had already “done the math.”
The blue line shows the best-fit parabola, the red line the best-fit PLF (piecewise linear fit). I tested both models as the “alternate hypothesis” against the “null hypothesis” that sea level follows a straight line (constant rate of sea level rise). Yes, I compensated for autocorrelation and selection bias. Both tests return a clear result: statistically significant.
By no means does this demonstrate that the data are following either of these models, parabola or piece-wise linear. Statistical significance doesn’t confirm the alternate hypothesis, it rejects the null hypothesis. The real result of both tests is: sea level is not rising at a constant rate, rather it has accelerated, at Wilmington NC.
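For readers who want the bare mechanics of such a test, here is a sketch. It is not the actual analysis code, it runs on placeholder data, and it omits the autocorrelation and selection-bias corrections just described (which matter): fit the straight-line null and each alternate by least squares, then compare nested models with an F test.

```python
import numpy as np
from scipy import stats

def f_test(y, X_null, X_alt):
    """F test of a nested alternate model against the null (white-noise
    errors assumed; the real analysis additionally handles autocorrelation)."""
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    df1 = X_alt.shape[1] - X_null.shape[1]
    df2 = len(y) - X_alt.shape[1]
    f = ((rss(X_null) - rss(X_alt)) / df1) / (rss(X_alt) / df2)
    return f, stats.f.sf(f, df1, df2)

# Placeholder series standing in for monthly mean sea level at the station.
t = np.linspace(1936.0, 2021.0, 1020)
y = 2.0 * (t - 1936.0) + np.random.default_rng(1).normal(0.0, 30.0, t.size)

line = np.column_stack([np.ones_like(t), t])              # null: straight line
parabola = np.column_stack([line, t ** 2])                # alternate 1
plf = np.column_stack([line, np.maximum(t - 2012.0, 0)])  # alternate 2, hinge at 2012

for name, X in [("parabola", parabola), ("PLF", plf)]:
    f, p = f_test(y, line, X)
    print(f"{name}: F = {f:.2f}, p = {p:.4f}")
```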
I’ll address some other issues that came up in comments to my previous post.
First of all, I didn’t “hand-pick” the very limited time span to show in red (rather than blue) in my graph of sea level at Wilmington, NC. It was dictated by the North Carolina state legislature passing HB-819 in 2012. The station was hand-picked — but not by me, by Dave Burton.
The nearby tide gauge station at Beaufort, NC, shows the same response to the same statistical tests:
As for satellite data, I used it to estimate the straight-line trend using only the data prior to 2011, then I subtracted that trend from all the data, leaving these deviations from the pre-2011 trend:
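A minimal sketch of that detrending step, assuming arrays `t` (decimal years) and `y` (GMSL in mm) hold the satellite record; the data themselves aren’t reproduced here.

```python
import numpy as np

def deviations_from_early_trend(t, y, cutoff=2011.0):
    """Fit a straight line to the data before `cutoff` only, then subtract
    that line from ALL the data, leaving deviations from the early trend."""
    early = t < cutoff
    slope, intercept = np.polyfit(t[early], y[early], 1)
    return y - (slope * t + intercept)
```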
I don’t think that increase post-2011 is just fluctuation caused by El Niño. I do think that some people, especially Dave Burton, try to blame El Niño for most things they can’t explain.
This blog is made possible by readers like you; join others by donating at My Wee Dragon.
Good example of how to test for specification error.
According to one clown named James Pace, “a good portion of the sea level rise is due to irrigation”! https://www.quora.com/How-will-climate-change-shape-the-world-in-the-centuries-to-come/answer/Erikassimo?comment_id=233652574&comment_type=2 SMH.
There is an old precomputer trick for testing whether a straight line is the best fit. Take a piece of glass with a straight edge and make the best eyeball fit to the middle of the series. If the beginning and the end of the data series both fall on the same side of the straight edge (both above, or both below), you have something other than a straight line.
Pretty much the same story when someone forces a straight-line fit: look at the ends of the series, and ask whether they lie above or below the straight line.
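That straight-edge check is easy to automate; a minimal sketch (the 10% end-fraction is an arbitrary choice):

```python
import numpy as np

def ends_same_side(t, y, frac=0.1):
    """The straight-edge check: fit a line, then ask whether the residuals
    at the two ends of the series average out on the same side of it.
    True suggests the data bend away from a straight line."""
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    k = max(1, int(frac * len(y)))   # how many points count as an "end"
    return np.sign(resid[:k].mean()) == np.sign(resid[-k:].mean())
```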
Eli, what happened to your blog? There’s nothing since the post on Afghanistan.
I think Eli is enjoying the benefits of retirement from academic life. He “finally handed in the keys”.
Excellent Eli!
Ah, the signs of years of experience.
Another “precomputer” trick for straight line fits is knowing that the regression line must pass through the point (mean X, mean Y). So, if you know your mean X and Y values, plot that point on your graph, and then place your straight edge through that point and rotate the straight edge (keeping it on that point) to your “best fit by eye”.
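That property follows directly from the least-squares normal equations, since the fitted intercept is ȳ − (slope)·x̄. A quick numerical check:

```python
import numpy as np

# Quick check that the OLS line passes through (mean x, mean y):
rng = np.random.default_rng(7)
x = rng.uniform(0.0, 10.0, 50)
y = 3.0 * x + rng.normal(0.0, 2.0, 50)

slope, intercept = np.polyfit(x, y, 1)
# intercept = mean(y) - slope * mean(x), so the line hits the centroid exactly
print(slope * x.mean() + intercept, y.mean())   # equal up to roundoff
```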
Exactly, and that also shows how the error is always worse at the ends. (Plotted properly, the error limits should look like a dog bone.)
It’s always fun to teach Tamino how to suck 1000 year old eggs. . . :)
Keep in mind that Wilmington is about 25 miles from open water. Precipitation has increased there in recent years. In 2016 alone, total rainfall at Wilmington was 60-70% above the historic mean. Increased flow in the Cape Fear River would raise tide measurements.
Try plotting the 3-yr mean tide-gauge reading at Wilmington versus year and the 3-yr mean rainfall versus year. You will see that the deviations from the tide trend are highly correlated with deviations from mean rainfall. Tamino, can you try to remove the rainfall effect from the tide data?
Tor Ole Klemsdal made a good point when he asked for a change that might have caused the recent increase in mean tide. Of course, the rainfall change could result from AGW, but the recent rate of increase in rainfall won’t persist and neither will the rate of change in tide level.
Why not?
Beaufort shows a similar pattern and is right next to open water. How do you know what the rainfall is going to do in the future? Are you referring to any model with that kind of regional precision?
Indeed, Beaufort is closer to open water, but not on it. Measuring Tamino’s plots, I get that Beaufort has departed from the early linear fit by 101 mm since 2010. Wilmington has departed from the early linear fit by 147 mm since 2012. The departure is about 50% greater for Wilmington in about 4/5 the time. I don’t claim that there has been no acceleration, just that rainfall seems to explain a lot of it at Wilmington. Have you charted the 3-yr averages I suggested earlier?
All other things being equal, as the surface warms due to forcing, there will be more water vapor aloft, per Clausius–Clapeyron. Accordingly, in places with wet weather, it’s a good bet there will be more rain. It is also likely the rains will come in big bursts rather than steadily, but why that happens is a more complicated matter.
@ecoquant: To keep the tide rising at the current rate, rainfall can’t just stay at its recent level; it must continue increasing. The average annual rainfall at Wilmington was 48.5″ for 2011–2013 and 78.9″ for 2016–2018. If you want to bet that the average for 2021–2023 will be at least 109″, I will gladly take the opposite side. I have a head start: the total for 2021 will probably be less than 60″.
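(The 109″ figure appears to come from extrapolating that five-year jump linearly; the arithmetic:)

```python
# Extrapolate the five-year jump in average annual rainfall one more step
# (this reconstructs where the 109-inch figure appears to come from).
early, late = 48.5, 78.9      # 3-yr averages, 2011-2013 and 2016-2018
jump = late - early           # +30.4 inches in five years
print(late + jump)            # 109.3 -> "at least 109 inches" by 2021-2023
```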
Ah, I see. Rainfall will probably increase but it cannot increase at a pace sufficient to explain the rising tide.
Here’s a plot that shows the strong correlation between annual rainfall and tide level at Wilmington, NC. I used data from 1940-2012 to detrend both rainfall and tide data. To reduce noise, I averaged over three years.
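A sketch of that procedure (and of the 3-yr-average comparison suggested upthread), assuming `rain` and `tide` are annual pandas Series indexed by year; the actual Wilmington data aren’t reproduced here.

```python
import numpy as np
import pandas as pd

def detrended_3yr(series, start=1940, end=2012):
    """Detrend an annual series using a line fit to `start`..`end` only,
    then smooth with centered 3-yr means."""
    yrs = series.index.to_numpy(dtype=float)
    mask = (yrs >= start) & (yrs <= end)
    slope, intercept = np.polyfit(yrs[mask], series.to_numpy()[mask], 1)
    resid = series - (slope * yrs + intercept)
    return resid.rolling(3, center=True).mean()

# With `rain` and `tide` as annual pandas Series indexed by year:
# corr = detrended_3yr(rain).corr(detrended_3yr(tide))
```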
