What is sea level up to lately?

Since 1993 we’ve been monitoring sea level with satellites. The overall rise is obvious (it was obvious even before 1993), and we now know that the rate of rise is faster than it has been in a very very long time. But one might wonder, what has it done lately?

I retrieved global sea level data from NASA, and if you’re a regular reader you know I love graphs:


A lot of the wiggling around (and there is a lot of that!) is due to the annual seasonal cycle, and there’s also a cycle related to the satellites’ orbits. If we want a clearer picture of how the sea level trend is progressing, we should remove those; fortunately NASA also provides smoothed sea level anomalies. I’ll even include a straight-line fit, which estimates the average rate over these last 24 years or so: rising at 3.4 mm/yr.
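
For anyone who wants to reproduce the basic trend fit, here’s a minimal sketch in Python. The file name and two-column layout are placeholders; the real NASA file is formatted differently (header lines, extra columns), so adjust the loading step to match whatever you download.

    # Minimal sketch: straight-line trend for satellite-era global mean sea level.
    # Assumes a plain two-column file (decimal year, anomaly in mm); the actual
    # NASA data file has a header and extra columns, so adapt the loading step.
    import numpy as np

    t, gmsl = np.loadtxt("gmsl.txt", unpack=True)     # hypothetical file name
    rate, intercept = np.polyfit(t, gmsl, 1)          # ordinary least squares
    print(f"average rate of rise: {rate:.2f} mm/yr")  # the post quotes about 3.4 mm/yr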

It’s good to know the overall average rate, and it’s obvious that even with the seasonal and orbital cycles removed it still wiggles around — a lot. But has it done anything else trend-wise? Is there any departure from that simple straight line which we can have some confidence in, some valid evidence that it’s not just random wiggling around that looks like something meaningful but isn’t really?


Some look at these data in an attempt to find something, anything, they can cherry-pick to claim that either global warming’s effect on sea level isn’t happening, or that we should look at it as “no problem.” A classic example happened nearly 10 years ago, when Danish climate “skeptic” Bjorn Lomborg wrote this in the U.K. newspaper The Guardian:


“Over the past two years, sea levels have not increased at all — actually, they show a slight drop. Should we not be told that this is much better than expected?”

Let me answer that question: No, Bjorn, we should not be told a lie.

I say that because sea level is so “noisy” (meaning, it wiggles around so much) that even with a steady trend, a 2-year period can easily show no rise, simply because the noise happens to be going down enough to cancel the trend continuing up. No, Bjorn, this was not “much better than expected” — episodes like this are expected from time to time. Roll the dice often enough, and sooner or later you’re gonna get snake-eyes.
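
To make that concrete, here’s a small simulation sketch. The 3.4 mm/yr trend comes from the data above; the noise is a purely illustrative AR(1) process standing in for ENSO-like persistence, so the exact fraction it prints means nothing beyond “this happens a non-trivial amount of the time.”

    # Sketch: a steady 3.4 mm/yr rise plus autocorrelated noise can still yield
    # 2-year stretches whose least-squares slope is flat or negative.
    import numpy as np

    rng = np.random.default_rng(0)
    dt = 10 / 365.25                       # roughly 10-day sampling, in years
    t = np.arange(0, 25, dt)
    noise = np.zeros(t.size)
    for i in range(1, t.size):             # AR(1) noise; parameters are illustrative
        noise[i] = 0.98 * noise[i - 1] + rng.normal(0, 1.0)
    y = 3.4 * t + noise

    window = int(round(2 / dt))            # points in a 2-year window
    slopes = [np.polyfit(t[i:i + window], y[i:i + window], 1)[0]
              for i in range(0, t.size - window, 5)]
    print("fraction of 2-year windows with no apparent rise:",
          round(float(np.mean(np.array(slopes) <= 0)), 2))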

Poor Bjorn. He was roundly (and rightly) ridiculed for his comment, perhaps most pointedly years later in a graphic from Greg Laden’s blog when Lomborg spouted more nonsense about sea level:

You’d think that would be the end of it. Who would be dumb enough to try the same trick?

Anthony Watts, that’s who. The identical strategy was “Bjorn-again” when Watts gave us a wonderful post revealing what he calls a “pause” — at least as far as global warming is concerned. He declares a “pause” in sea level rise by doing exactly the same thing Bjorn Lomborg did.


Exposing nonsense, like that from Lomborg and Watts, can be amusing. But it doesn’t answer the question: what has sea level rise been up to, on the global scale, lately?

I prefer not to use “smoothed” data for analysis; if you’re into statistics you might know that it dramatically increases the autocorrelation of the data, which makes analysis a whole helluva lot trickier. So, I removed the seasonal and orbital cycles myself in order to generate anomaly values without that extra autocorrelation. And here it is:

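If you want to do the same kind of cycle removal yourself, one simple approach (a sketch, not necessarily the method used for the graph above) is to regress the data on sinusoids with the relevant periods and subtract the fitted cycles. The roughly 59-day period used below for the orbit-related cycle is an assumption on my part; check the satellite documentation for the exact alias period.

    # Sketch: remove the annual cycle and an orbit-related cycle by multiple
    # regression on sinusoids, fitted jointly with a linear trend.
    import numpy as np

    t, gmsl = np.loadtxt("gmsl.txt", unpack=True)   # hypothetical file name

    def design(t, periods):
        cols = [np.ones_like(t), t]                 # intercept and trend
        for p in periods:
            cols += [np.cos(2 * np.pi * t / p), np.sin(2 * np.pi * t / p)]
        return np.column_stack(cols)

    X = design(t, periods=(1.0, 59 / 365.25))       # 1 yr and ~59 days (assumed)
    beta = np.linalg.lstsq(X, gmsl, rcond=None)[0]
    anomaly = gmsl - X[:, 2:] @ beta[2:]            # subtract only the cycles
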
Then I went looking for patterns (other than the straight-line increase) that could be claimed to mean something, not just suggestive-looking noise. I started by fitting a smooth to these data, not to analyze the result but just to get ideas by illustrating what changes might have occurred. Here’s the smooth in a form I find rather attractive:

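If you want to play along at home, a lowess fit makes a serviceable stand-in for a smooth like this (a sketch only: the smoothing span is an arbitrary choice, and I’m assuming the de-cycled anomalies are saved in a simple two-column file).

    # Sketch: lowess smooth of the de-cycled anomalies, purely to suggest where
    # the rate may have changed; the span (frac) is an arbitrary choice.
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    t, anomaly = np.loadtxt("gmsl_anomaly.txt", unpack=True)  # hypothetical file
    smooth = lowess(anomaly, t, frac=0.3, return_sorted=False)
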
It looks like maybe, just maybe, it slowed down a wee bit around 2005 and sped up again about 2011. So I returned to the original data and tried modelling it with not one, but three straight line segments. I found the “turning point times” which fit best, and tested their statistical significance. Lo and behold, they turn out to be real and the model fit looks like this:

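For readers who want to experiment, here is one way to do something like this (a sketch, not the exact procedure behind the graph above): a continuous three-segment “broken line” fit found by grid search over the two turning points, compared to a single straight line with a Chow-style F-test. Note the test below ignores both autocorrelation and the fact that the turning points were chosen to fit best, so it will overstate significance.

    # Sketch: continuous piecewise-linear fit with two turning points, plus a
    # naive F-test against a single line (optimistic: ignores autocorrelation).
    import numpy as np
    from itertools import combinations
    from scipy import stats

    t, y = np.loadtxt("gmsl_anomaly.txt", unpack=True)   # hypothetical file

    def rss_broken(t1, t2):
        # "hinge" terms max(t - t_k, 0) keep the three segments continuous
        X = np.column_stack([np.ones_like(t), t,
                             np.clip(t - t1, 0, None),
                             np.clip(t - t2, 0, None)])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        return np.sum((y - X @ beta) ** 2)

    # candidate turning points, kept at least 3 years from the ends
    grid = np.arange(t.min() + 3, t.max() - 3, 0.25)
    rss3, t1, t2 = min((rss_broken(a, b), a, b) for a, b in combinations(grid, 2))

    X1 = np.column_stack([np.ones_like(t), t])           # single straight line
    b1 = np.linalg.lstsq(X1, y, rcond=None)[0]
    rss1 = np.sum((y - X1 @ b1) ** 2)

    n, extra, p_full = t.size, 4, 6       # 2 extra slopes + 2 turning points
    F = ((rss1 - rss3) / extra) / (rss3 / (n - p_full))
    print("turning points:", round(t1, 2), round(t2, 2),
          " p-value:", stats.f.sf(F, extra, n - p_full))
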
How much did it slow down, and how much did it speed up? We can use both the three-line model and the original smooth I applied to estimate the rate of sea level change and its uncertainty (three-line model in blue, smooth in red):

Bear in mind that these estimates aren’t of instantaneous rate; the blue lines represent the average over many years for the time segments they cover, and the red line also represents an average over many years as our “window” on the data moves through time.

One final note: I will admit doubt that these results indicate actual trend changes, for several reasons. First, the autocorrelation in the original data (smoothed or not) is fierce, so the analysis is far from easy. Second, there are other factors which can cause temporary ups and downs that last for years, such as the El Niño oscillation and the movement of water from ocean to land and back (as when extreme floods inundate the land, with the water eventually finding its way back to the sea).

To sum up: I regard the changes indicated by the three-line model as real, but I’m not certain they reflect the long-term trend; they could be a manifestation of the kind of noise that lasts a few years, looking like a trend, but never really gets anywhere because it isn’t really part of the trend.

Perhaps we’ve already seen recent strong acceleration in sea level rise, but the aforementioned caveats mean we should wait for further data before stating firm conclusions. But the danger — the extreme danger and cost from sea level rise — means that we should absolutely not wait for further data to begin getting ready for what’s to come, and taking steps to make it as painless as possible.


This blog is made possible by readers like you; join others by donating at Peaseblossom’s Closet.


46 responses to “What is sea level up to lately?”

  1. It seemed to me that the pattern was a rise with each El Nino, and then a “pause” on the downhill side. There is 2013/4 which doesn’t quite fit, but may be related to the fitful start to the 2015/6 event.

    • Doc, Morano has been too busy interviewing dead philosophers of late to bother with data at all.

      • russellseitz,
        (Your reply to Doc Snow appears a little out-of-place on the thread.)
        You link to a bit of fun aimed at denialist Marc Morano, whose poor writing can be interpreted as saying that the late-&-great Karl Popper is speaking from beyond the grave to support climate denial. But do you & VatsUpMitThat both miss a more substantive point?
        The substance of Morano’s post is entirely based on an article by a Milan Bharadwaj. Indeed, denialist Morano is simply re-posting Bharadwaj. So what is Bharadwaj saying?
        Firstly, Bharadwaj shows he neither understands the philosophy of Karl Popper (whose quotes include “No rational argument will have a rational effect on a man who does not want to adopt a rational attitude,” which explains AGW denial rather well) nor understands why astrology is non-scientific. And then he shows annoyance at the plethora of accounts he meets blaming so many bad things on AGW, and suggests this shows this aspect of AGW cannot be disproved by “counterexamples” of “weather patterns” (although Bharadwaj makes clear he agrees that rising CO2 is warming the planet).
        And then Bharadwaj demonstrates he is a conspiracy theorist linking to contrarian Telegraph columnist Christopher Booker as evidence that “modern day climate science has incredible amounts of data tampering.” And if that wasn’t evidence enough to ensure that only a moron should be re-posting the words of Bharadwaj, a conspiratorial rant by a Dane Wigington is also presented as evidence of this dreadful data tampering which apparently involves secret geoengineers who are already at work fiddling with the climate and trying to play down the real level of AGW, apparently, according to the shape-shifting lizards.
        So this is all highly entertaining and anybody trying to present this Milan Bharadwaj as somebody with a serious message on AGW would have to be knuckle-draggingly moronic. But after many years, I am still undecided – is Climate Depot somebody’s idea of a serious message on AGW? Or is it actually a spoof?

  2. Deniers deny. Bjorn Lomborg is sometimes represented as ‘moderate’ on AGW, i.e. neither a denier nor an alarmist. If he disputes any of the following statements:

    1. GMST has risen by at least 1.0 degree C since the industrial age began, and is now rising at around 0.18 degrees C per decade;
    2. The trend of GMST since WWII is entirely anthropogenic, the result of a drama of the global commons;
    3. The drama is already a tragedy for tens of thousands of individuals, and the higher GMST rises, the more tragic it will be, measured in homes, livelihoods and lives;
    4. The tragedy of AGW will fall most heavily on those least buffered by prosperity, although virtually everyone will pay the cost one way or another;

    why then by dog, he’s an AGW-denier. If all he’s got to support his case is cherry-picked short-term ‘random’ (i.e. internal) variation around the observed long-term trend of SLR, he’s a simple-minded AGW-denier. That hardly requires a college degree! Why is this guy so well-known?

  3. I believe that big dip in 2010-11 can be largely accounted for by the series of downpours that flooded large parts of Eastern Australia. http://bit.ly/2gHuKIp
    One farmer interviewed on TV said he could handle once-in-a-hundred-year floods; it’s just that there had been three so far that year.

  4. The 2008 Guardian article by Bjorn Lomborg is here for those who’d like to be reminded of what climate denial looked like a decade ago.
    Even back in 2008 the OHC “dropping for the past four years” was a pretty obvious cherry-pick of data (also suffering calibration woes) and I like the Arctic comments made just a year on from the big 2007 melt season. Apparently things were going to be all okay. Scare stories like “the Northwest Passage was open for the first time in recorded history” are entirely exaggerated because the BBC reported it open back in 2000 (presumably for the first time in recorded history although the BBC say only that people have been looking for 400 years).
    And the good-old ‘pause’ wasn’t so well established back then so we get (complete with editing correction) a rather nebulous “Temperatures in this decade have not been worse than expected; in fact, they have not even been increasing. They have actually decreased by between 0.01 and 0.1C per (year, crossed out) decade,” this presumably originally the measured rate at the bar between sunset and pub closing time on the day in question. Or some other cherry-pick. (The RSS & UAH of the day Jan 1998-Jun 2008 would have yielded the two numbers printed, although Goddard/Heller wasn’t brave enough to put the numbers on it.)
    And with the item titled ‘Let the data speak for itself’, the absence of graphs, and thus the absence of speaking data, is pretty telling.

    • Climate denial ten years ago looks pretty much like climate denial today, from what I’ve seen lately. I spotted my first instance of ‘no warming for 20 years’ on a FaceBook thread yesterday. Like the first Blue Wren of spring appearing in our hedge.

      • crispy2058,
        As you say, the zombie denialist meme lives on. Interestingly, it was entirely untrue when it all began twelve years ago. Except for blatant cherry-picking, there was no reason for the likes of Bob Carter to set out their nonsense in 2006. The El Niño of 1997/98 aside, global temperatures were still accelerating up to 2007. (For instance, see this graphic – usually 2 clicks to ‘download your attachment’, or here – ditto.) So if denialists like Carter were not oblivious to the ignominy of being forehead-slappingly wrong, the meme of the “period of temperature stasis” (as it was called then) would never have been established ready for the years of La Niña 2008-12.

        Yet there is one thing we can be sure of in all this – climate change deniers have no problem whatever in being pretty-much continually forehead-slappingly wrong.

      • Al, you mean to say that Marc hasn’t updated his coverage of global temperatures “in freefall” lately? I’m shocked–shocked, I tell you!

        Apparently, the ‘freefall’ hit a powerful updraft… as I know you are well-aware.

        Maybe I need to update my “When Did Global Warming Stop?” article.

      • In fact, I did incorporate the Morano silliness in a new update. In case anyone cares–or is unfamiliar with the history–the updated article is here:

        https://hubpages.com/politics/When-Did-Global-Warming-Stop

  5. I seem to recall analysis suggesting that the early 1990s rate of sea level rise was “artificially” inflated by the recovery from Pinatubo… if we correct for that, I wonder what that would do to your blue/red analysis?

  6. Good to see another post. Interesting that the real estate site Zillow–not known for political action–is assessing projected SLR losses. (I’ll try to post a link later.) It suggests that awareness is beginning to permeate the mainstream.

  7. Tamino – what happens to your trends if you (attempt to) remove the MEI influence first? It seems rather well-correlated to the SLR record according to:
    http://sealevel.colorado.edu/content/2016rel4-gmsl-and-multivariate-enso-index
    and whilst I’m not sure if anyone properly understands the mechanisms it doesn’t seem implausible that the two are physically related and that the MEI is part of the long-term noise. Does that make sense?

  8. Both the current NASA graph and the current AVISO graph show sea level remaining above the satellite-era trend line, 3.4 mm/yr for NASA and 3.9 mm/yr for AVISO, for the longest number of months in a row in the satellite era. A lightweight La Niña may end that, though the last one did not, but it seems to me that the current situation is about what one would expect while being on the cusp of an acceleration in the rate of sea level rise, which is what the recent Fasullo-Trenberth paper claims.

  9. Everett F Sargent

    Tamino,

    Your 1st NASA link points to this (your current) blog post. I think that the correct NASA link should be to this directory …
    ftp://podaac.jpl.nasa.gov/allData/merged_alt/L2/TP_J1_OSTM/global_mean_sea_level/

    I also use this directory for SMB …
    ftp://podaac-ftp.jpl.nasa.gov/allData/tellus/L3/mascon/RL05/JPL/CRI/mass_variability_time_series/
    (it lags the GMSL by several months though; I have been unable to find anything more in sync with the above NASA GMSL record).

    Not sure what is up with the CU GMSL record though, it’s now ~one year since any updates (rumored Nerem paper and/or Jason-2/Jason-3 calibration issues, but I really have no idea).

    BTW, good post and glad to see you back.

    [Response: Thanks for the correction.]

  10. I’m always interested in what physical phenomena are causing the “noise”, because they must be there. More and less water on land (and in the atmosphere) would mean actual amount of water in oceans changing. Do I correctly recall Grace satellite data showing (some of) that? Ocean heat content also comes to mind; ocean volume change due to average ocean temperature changes. Heat transfer from ocean to atmosphere to land and back should all add up, but variations in cloud cover would cause variations in the total heat content. Together, do they account for the wiggles or are there other physical processes?

    • Ken Fabian,
      You ask “Together, do they account for the wiggles or are there other physical processes?” See IPCC AR5 Fig 13.06.

    • There was a paper published a few years ago, “The rate of sea level rise”, which broke things down a bit. They fed weather reanalysis data into a hydrological model to get land-sea exchange mass estimates + used the reanalysis data to find mass change due to atmospheric water vapor storage. They also used ocean temperature measurements to determine thermosteric variations.

      Together those seem to explain the main noise features, particularly since 2007 when the global ARGO network became fully realised for measurements down to 1500m.

      In the supplementary information they also show a breakdown with a GRACE estimate for the mass component from 2003, which is very similar to the modeled expectation.

  11. I prefer not to use “smoothed” data for analysis; if you’re into statistics you might know that it dramatically increases the autocorrelation of the data, which makes analysis a whole helluva lot trickier.

    Well …, to my mind, that’s a bit of a red herring. It makes smoothing out to be some kind of bad practice, and that’s not at all the case, for it often reduces mean squared error while accepting a certain predictable and calculable (and, so, removable) bias. There are many reasons to want to introduce autocorrelation or to remove it. For example, spatially distributed temperatures over time show less correlation than do temperature changes, and that’s one reason climate studies look at temperature changes.

    But, also, there are a variety of useful techniques for doing regression which assume at the outset that a response is some smooth function of known predictors, and the smoothing of the predictors’ signal is used as the coupling to the response function. I am principally thinking of generalized additive models, but there are others, too, like the kriging (or, the spatial version of Best Linear Unbiased Estimation) method which BEST’s incredibly productive Zeke Hausfather used to good results there.

    Facts are, many indices of scientific interest are latent variables. If they are highly impulsive, it isn’t likely one has the fortune of having a dataset which permits their point estimation at high fidelity. More likely, estimating their time-varying mean is going to be a reasonable thing, along with, of course, an estimate of their time-varying variability. If they are not impulsive, well, then, a smoothed estimate is exactly the ticket.

    I also think, after living with them for a while, the general notion of trends is oversold. The reason is that using time as a predictor variable tempts all kinds of explanations and analyses involving endogeneity, and There Dragons Lie. I almost would rather regress Sea Level Rise against lags of atmospheric CO2 concentrations, teleconnection indices in weather (e.g., NAO), and mean measured oceanic temperatures at 2000 meters depth than time itself.

    [Response: As often happens, we’re in far less disagreement than it might seem to some observers. In fact, we might not be in disagreement at all. I would suggest that we should add Robert Rohde’s name to Zeke Hausfather’s when crediting the application of Kriging to global temperature estimates.

    You specifically might be interested to know that I’ve been reading a lot of Jim Berger’s work lately … and this old frequentist is on the verge of becoming a full-fledged Bayesian. It’s remarkable sometimes how you can go from “that can’t possibly be right” to “that is obviously right” after immersing oneself. I will vow not to become anti-frequentist (at least, never to make it personal).]

    • skeptictmac57

      “It’s remarkable sometimes how you can go from “that can’t possibly be right” to “that is obviously right” after immersing oneself.”
      That reminds me of the Monty Hall Problem. Almost everyone thinks the former until they finally ‘get it’, then they can’t understand why no one else ‘gets it’.

    • Tamino: …this old frequentist is on the verge of becoming a full-fledged Bayesian. It’s remarkable sometimes how you can go from “that can’t possibly be right” to “that is obviously right” after immersing oneself. I will vow not to become anti-frequentist (at least, never to make it personal).

      Heh. I took Biometry 501 from Robert Sokal in 1985. Our text was Sokal & Rohlf, 2nd Edition. Bayes isn’t mentioned until the 3rd edition, published in 1995. I changed careers soon after getting an ‘A’ in the course (post hoc, sed non propter hoc), so I can’t say I was much invested in frequentist approaches, but I didn’t exactly have a Bayesian moment either. I became a ‘full-fledged’ Bayesian in the last couple of years, when I saw how much more appropriate Bayesian methods are for analyzing climate data, where prior probability distributions are well-informed.

      Statisticians, like mathematicians in general, should feel free to adopt new tools that work better than the old ones, but I know it doesn’t always work that way 8^}.

      [Response: I strongly urge my colleagues to try some new tricks. It’s hard, but I’m one old dog who managed.]

    • I can attest to the influence of colleagues. It was a full-fledged Bayesian down the hall who lately harangued me into paying attention to Bayesian methods for climate data.

      So, as an armchair, once-wannabe-professional ecologist, I think Mann, Lloyd and Oreskes 2017 is persuasive. OTOH, the ‘Springboard Commentary’ by Stott, Karoly and Zwiers in the same issue of Climatic Change takes an opposing view. What’s a retired non-expert to do but wait for a consensus of actual experts to emerge? I’m quite happy to leave that to the professionals 8^}!

    • @Tamino,

      Regarding

      I would suggest that we should add Robert Rohde’s name to Zeke Hausfather’s when crediting the application of Kriging to global temperature estimates.

      Very much agree on Robert Rohde. His name slipped my mind.

      It’s odd how techniques get stonewalled away from others even in fields like geophysics. I mean, not only did kriging originate in mining work, but Glover, Jenkins, and Doney devote a full chapter (7) to these methods in their Modeling Methods for Marine Science, Cambridge University Press, 2011.

      Very glad to see the Bayesian methods being useful! But, as you see, Tamino, from Berger’s work and that of Christian Robert, these things don’t stay still, either. The other guy who’s applied a lot of Bayesian methods to geophysics is Mark Berliner (there with colleagues Milliff and Wikle) and they have a lecture on YouTube titled “Bayesian approaches to the analysis of computer model output” dating from 2014.

  12. KF…

    Sometimes there is more predictability to be gained, true. But sometimes there is not. The noise in a system which has many chaotic inputs may well be unpredictable in practice once the major variation has been extracted. The illusion that 100% predictability is even available is, to my mind, a holdover from 19th-century deterministic physics. Physicists sensibly gave up that notion long ago.

    Along those lines, I’m even a bit uncomfortable with tamino’s examination of the data and then performing a 3 line fit. He properly caveats the procedure, and it certainly can be justified on the basis of apparent increasing variance in recent decades which he has mentioned in the past. However, the possibility of cherrypicking errors from post hoc analysis is certainly there (as he points out).

    That said, in its favor, it does give one a reason to look for factors which might explain that apparent variation in future studies. It’s certainly suggestive.

    • I wasn’t chasing predictability so much as a breakdown of which processes contribute, and how much – after the fact. That can still tell us stuff about what we can expect in the future. And I’m interested in how much is left over after estimating the contributions of known phenomena.

      Statistics seems likely to tell us a lot about the range, strength and frequency of those phenomena.

  13. I thoroughly enjoyed this post and the comments. One thing that interests me but wasn’t brought up: might there be more variation around the trend as time moves forward? Not sure if a proper analysis would show it historically, but I think it makes sense — more water in the atmosphere which, with longer droughts and bigger rainstorms, could come out more episodically onto either land or ocean (as either rain or snow). If sea level rises faster but with more noise in the future, that has implications for (i) early detection and (ii) preparedness. And I think the ‘trend’ is that consequences of such an outcome will tend to be worse for people (easier to deny accelerating increase, and more likely to be unprepared if the eventual horrible event wasn’t foreshadowed in a previous year).

    • While these are sea level anomalies, and, therefore, are not constrained to be positive, and my comment is not based upon anything having to do with physics, it is interesting to note that observations of a positively constrained quantity drawn from a theoretical distribution generally can have greater variability as the mean increases, without any physics. It depends upon the specific distribution and, to some extent, its parameters, but consider a Gamma. (The Erlang, exponential, and \chi^{2} are special cases of the Gamma.) That’s a distribution positively constrained, typically parameterized with a shape \kappa and a scale \theta. The mean of a Gamma is \kappa \theta, and the variance is \kappa \theta^{2}.

      Accordingly, if one has two means, \mu_{1}, \mu_{2}, such that \mu_{1} < \mu_{2}, with, say, \kappa fixed, then \theta_{1} = \mu_{1}/\kappa and \theta_{2} = \mu_{2}/\kappa. So, also, \theta_{1} < \theta_{2}. Consequently, \kappa \theta_{1}^{2} < \kappa \theta_{2}^{2} and so the variance increases as well.
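
      A quick numerical check of this, as a sketch (the shape value of 4 below is arbitrary): for fixed shape, doubling the scale doubles the mean and quadruples the variance.

          # Numerical check: for fixed shape k, mean = k*theta and variance = k*theta**2,
          # so the variance grows as the square of the mean when only theta changes.
          import numpy as np

          rng = np.random.default_rng(1)
          k = 4.0
          for theta in (1.0, 2.0):
              x = rng.gamma(k, theta, size=200_000)
              print(f"mean ~ {x.mean():.2f} (k*theta = {k*theta}), "
                    f"var ~ {x.var():.2f} (k*theta^2 = {k*theta**2})")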

      • So I guess this means that variance in the rate of sea level rise should be standardized by the rate. For example, the coefficient of variation in rate of rise should be used to compare different periods with different mean rates of rise? Or should the data just be transformed (log or square root) before variance around the trend is estimated?

      • (I wish there was a “preview” button. I even tried this at my blog first, but, then, I copied the wrong version back.)

        @Steve Latham,
        Well, as tech types are wont to say, “It depends.” Consider the case of the Gamma again. The c.v. is defined as \frac{\sigma}{\mu} times some positive constant (often 1 or 100). So, for the Gamma, that would be \frac{\sqrt{\kappa \theta^{2}}}{\kappa \theta} = \frac{\sqrt{\kappa}\,\theta}{\kappa \theta} = \frac{1}{\sqrt{\kappa}}, which is a constant if we assume constant shape. I don’t think that’s what’s intended.

  14. Great post! I have a Facebook friend who is an engineer for a major avionics company and specializes in radar. He says that the resolution for the radar band being used is only 1cm and thus all of the mm radar altimeter data is worthless. I can not find material that specifically addresses this. Can someone point me in the right direction?

    P.S. I do understand that the laser altimeter and tide gauges also agree with the radar measurements, but he is hung up on the 1cm resolution argument.

    • Okay. I am no radar expert, although I have done calculations on and about them as a signals processing engineer back in the day. I’m sure today’s radars are better.

      In any case, there are direct explanatory references available here, here, here, and here. The latter gives technical details of the TOPEX/Poseidon altimeter. Radars can be run in many ways, depending upon modulation and what’s needed. In addition to radar resolution, there are other uncertainties which need to be calibrated out, such as platform position (in this case, relatively easy) and tides (not so easy). To summarize a complicated thing simply, I’d say that long-term averaging — meaning, collecting repeated measurements and, so, getting better estimates of the mean value via the Central Limit Theorem — is what’s at work here.

      However, I’m knowledgeable enough of the field that I could, for instance, read the details of the TOPEX/Poseidon paper and provide a summary, either here or at my own blog, if it’s wanted. I’ll only do it on specific request, however, since it will take time which I would normally devote to other non-work studies I wanted to do.

      Let me know.

  15. @Trent1492,

    You kid, right?

    • I am not kidding. I am a novice at this stuff myself so I know I can not give a competent answer.

      [Response: That was my impression.

      To others: I know that deniers will show up and feign ignorance so as to derail discussion or argue over reality. I suggest that we always give the benefit of the doubt — real trolls cannot keep it a secret, we find out soon enough. And as valuable as it is to rebuff trolls quickly, there is far more to be gained from helping the sincere.

      My readership tends to be exceptionally knowledgeable; you should be proud. If this were to become known as a refuge for those with confusion and honest questions, however ignorant, that’s something to be even more proud of.]

      • Trent1492,
        You ask for “material that specifically addresses this” (i.e. the accuracy of TOPEX/Jason SL measurements/calculations). I am not aware of such “material”. But suffice to say that your “Facebook friend who is an engineer” is correct and actually under-estimates the inaccuracy of each TOPEX/Jason measurement. When TOPEX first started operating, the ‘requirement’ for such accuracy was 13.7cm and they were very happy that they achieved 4.7cm. (See here.) Since that time the measurement errors have been halved (Jason3 accuracy described here as “about an inch”) but that is still greater than the 1cm quoted by your Facebook friend, although she/he is likely unaware of the many corrections required to obtain that accuracy.
        That said, I know of no “material” that describes (in the context of TOPEX/Jason) the conversion of a single SL measurement into a 10-day average with sub-cm accuracy. The best I have seen is less-than-well-phrased assertions such as this from Steven Nerem: “The accuracy of the altimeter measurements after applying these corrections and models is about 1-2 cm for a point measurement along the satellite groundtrack, but 10-day averages of these measurements to compute global mean sea level are generally accurate to about 4-5 mm due to the reduction of errors in the averaging.” But your Facebook engineer should be able to understand that if you measure the same thing for 10 days to +/-2.5cm, you will have a very good idea of where that +/-2.5cm range is centred, and thus why it is entirely sensible (and in no way “worthless”) to accept that a value can be derived with mm rather than cm accuracy.

      • In addition to the benefits of averaging when errors are random, you also need to keep in mind that a measurement can have both random and systematic errors. With a fixed bias, differences can be calculated more accurately than the basic accuracy of a single reading.

        Even if the raw output of the radar has limited resolution, averaging noisy data can give results with greater accuracy than the limited resolution.

      • Michael Sweet

        This question is similar to measurements of temperature. In the US, temperature is measured to the nearest 1°F, yet the measured anomaly is reported to 0.01°F. My understanding is that when they average thousands of data points they obtain much more accurate averages than the value of any single point.

        Don’t physicists do the same thing with their measurements of elementary particles? They get many measurements and the average is the accepted value. The uncertainty of the accepted value is much lower than that of any single measurement.

      • I want to thank everyone who has given an answer. I have been given a lot of material to read. I intend to have at least perused it all by late Tuesday.

        [Response: You might enjoy the illustration in this post.]

      • As others have said, your engineer friend is correct about any one point measurement at any one instant but is apparently unaware of statistics.

        Have your friend consider the results of taking repeated measurements of some particular distance with his radar system that are accurate to 1 cm. What will he find? He will find a distribution of values. Next ask him where in that distribution the “real” value (as measured by some more precise system) lies if he had to bet. He really ought to answer at the mean.
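
        A tiny simulation makes the point (a sketch only: the number of measurements per 10-day average and the assumption of independent errors are purely illustrative, since the real error budget has correlated terms, which is why the quoted 10-day accuracy is mm rather than sub-mm):

            # Sketch: repeated measurements with ~1 cm random error average down to
            # mm-level (or better) uncertainty in the mean when errors are independent.
            import numpy as np

            rng = np.random.default_rng(2)
            n = 5000                                       # illustrative samples per 10-day cycle
            meas = rng.normal(0.0, 1.0, size=(1000, n))    # 1 cm point noise, true value 0
            means = meas.mean(axis=1)
            print("std of one measurement: 1.0 cm")
            print(f"std of the 10-day mean: {means.std() * 10:.2f} mm")  # ~ (1 cm)/sqrt(n)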

  16. Thanks to @Michael Sweet, @Bob Loblaw, @Al Rodger, in reverse order of priority. The only other thing I’d note is that if one goes to the other extreme, and averages too much, aligning the series of data at a given geographic point (an “exact repeat mission”), you may “correct out” things like tides and waves, but you are left with the residual effects of gravity on the ocean surface, an effect which permitted GEOSAT to be used to recover ocean seafloor topography, because the ocean floor leaves a consistent gravitational signature at the ocean surface above. (Details.) In other words, not only was the ocean surface height measured, but it was measured with sufficient accuracy that the gravitational effects of ocean floor topographic features upon the ocean above could be obtained through inversion.

    • hypergeometric,
      The TOPEX/Jason data is certainly impressive. Yet there is still the potential for a bit of a shake-up on the SLR front.
      The proposal set out a few months back that the early years of TOPEX over-estimate the SLR would, if confirmed, result in the appearance of acceleration within this data at about 1mm/yr^2. (Without this adjustment, you could perhaps do a simplistic back-of-the-envelope calculation, comparing the 1998 & 2016 El Niño years to suggest an extra 6mm of SLR over the period above the linear, suggesting an acceleration of perhaps 0.3mm/yr^2 hiding in the data-as-graphed-in-the-OP.) I’ve not read the papers proposing this adjustment to TOPEX data but the graph from Steven Nerem’s webpage suggests the acceleration is plainly evident.