Sea Level: Gridded Average

I’ve formed a gridded composite sea level estimate and I’d like to share it. It’s quite crude, but some compensation is necessary because a simple average of stations is dominated by Europe and North America, where tide gauge stations are so numerous. My regional breakdown was a first step; this is the next.

I divided the world into 30° × 30° latitude-longitude grid cells and, within each, aligned all the tide gauge stations with at least 360 monthly values since the year 1900. Then I aligned the 47 grid series that actually had data and combined them, area-weighted, to get a global estimate:
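The post doesn’t show code, but the procedure can be sketched roughly as follows. The function names, the crude mean-offset alignment, and the handling of missing values are my assumptions for illustration, not the post’s actual method:

```python
import numpy as np

def cell_index(lat, lon, size=30.0):
    """Map a station's coordinates to a (row, col) cell in a size-degree grid."""
    return (int((lat + 90) // size), int((lon + 180) // size))

def area_weight(row, size=30.0):
    """Relative area of a latitude-band cell: proportional to the
    difference of sines of its bounding latitudes."""
    lat0 = -90 + row * size
    lat1 = lat0 + size
    return np.sin(np.radians(lat1)) - np.sin(np.radians(lat0))

def align(series_list):
    """Crude alignment: remove each series' mean, then average
    across series, ignoring missing values."""
    stacked = np.vstack([s - np.nanmean(s) for s in series_list])
    return np.nanmean(stacked, axis=0)

def global_estimate(cell_series):
    """Area-weighted average of per-cell composite series.
    cell_series: dict mapping (row, col) -> 1-D numpy array."""
    weights = np.array([area_weight(r) for (r, c) in cell_series])
    data = np.vstack(list(cell_series.values()))
    # weight only where data are present
    w = weights[:, None] * ~np.isnan(data)
    return np.nansum(data * weights[:, None], axis=0) / w.sum(axis=0)
```

Two cells straddling the equator get equal weight, so their average is a plain mean; cells nearer the poles contribute less.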

My new estimate shows some of the fluctuations seen in the Church & White data, but its long-term change is more like that from Dangendorf et al. Here are yearly averages of my new estimate (red x’s), of Dangendorf et al. (blue triangles), and of Church & White (green plus signs):

Here’s how the rate of sea level rise has changed (again, my estimate in red, Dangendorf et al. in blue, Church & White in green):
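The post doesn’t say how the time-varying rate was computed; a common and simple choice (purely my assumption here) is a sliding-window linear fit, with the slope in each window taken as the rate at the window’s midpoint:

```python
import numpy as np

def sliding_rate(years, sealevel, window=30):
    """Rate of sea level rise from a sliding linear fit.

    Returns window midpoints and slopes (units of sealevel per year)."""
    mids, rates = [], []
    for i in range(len(years) - window + 1):
        t = years[i:i + window]
        y = sealevel[i:i + window]
        slope = np.polyfit(t, y, 1)[0]  # degree-1 fit; slope is first coef
        mids.append(t.mean())
        rates.append(slope)
    return np.array(mids), np.array(rates)
```

For perfectly linear input the recovered rate is constant, which is a quick sanity check on the fit.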

Of course I applied PCA to the residual series from the grids; the first PC isolates the Arctic Ocean boundary, and the second PC is again proportional to the El Niño pattern (El Niño index in red):
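A minimal version of that PCA step, applied to a (time × grid-cell) residual matrix, might look like this. This is an SVD-based sketch of my own; the post doesn’t show its implementation:

```python
import numpy as np

def pca(residuals):
    """PCA via SVD on a (time x cells) residual matrix.

    Columns are centered first. Returns PC time series, spatial
    loadings, and the fraction of variance each PC explains."""
    X = residuals - residuals.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = U * s                      # principal component time series
    loadings = Vt                    # spatial pattern for each PC
    explained = s**2 / (s**2).sum()  # variance fractions
    return pcs, loadings, explained
```

A matrix whose columns are all multiples of one time series should put essentially all variance in the first PC.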

I’m pleased with the new gridded composite. It has some similarities to the Church & White data and some to Dangendorf et al.; certainly all of them show an interesting pattern of rate changes throughout the 20th century, with the fastest rates of sea level rise being the most recent.

My new estimate is a toy, not a credible addition to the current research. But the fact that such an utterly simple approach agrees with the best published results flatters both, especially my one possibly useful contribution: how to compensate for vertical land movement. And it suggests that perhaps we’re converging on a more reliable history of sea level over the past century and more.

Thanks to everyone for very kind donations to the blog. If you’d like to help, please visit the donation link below.

This blog is made possible by readers like you; join others by donating at My Wee Dragon.


4 responses to “Sea Level: Gridded Average”

  1. A thought here, which I now have a few moments to write about. In active seismic work for mapping geostrata, receiving transducers are placed as close to uniformly spaced as is typically possible, but the spacing is never perfect. Receptors outnumber producers of sonic energy by a lot, and the producers are widely spaced as well.

    It’s easier to form a coherent image when the receivers are uniformly spaced, so an intermediate step called migration is performed, which systematically transforms the multivariate series of receipts at the kth receiver into a set of receivers with similar characteristics but fictional, uniformly spaced placements. The analysis is then done assuming uniform spacing. The migration is essentially a kind of interpolation, and splines are often used, typically B-splines or penalized splines (“p-splines”). Alternatively, it’s possible to use straight physics to calculate the perturbations of signal needed to estimate the reception at an offset point, although that depends upon how well the character of the local rock is known.
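    As a toy illustration of that migration-as-interpolation idea (my own sketch, not actual seismic software): resample a signal recorded at irregular receiver positions onto a fictional uniformly spaced array with a B-spline fit.

    ```python
    import numpy as np
    from scipy.interpolate import make_interp_spline

    def resample_uniform(x_actual, signal, n_uniform, k=3):
        """Resample a signal from irregular receiver positions onto
        a fictional uniformly spaced array, via a B-spline fit."""
        spline = make_interp_spline(x_actual, signal, k=k)
        x_uniform = np.linspace(x_actual[0], x_actual[-1], n_uniform)
        return x_uniform, spline(x_uniform)
    ```

    A cubic interpolating spline reproduces low-degree polynomial signals exactly, so a smooth wavefield is recovered faithfully at the fictional positions.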

    This idea can be used in many other situations. I once re-created the effects of RF energy illuminating a target-under-test with moving illuminators, using a simulator which was otherwise constrained to having either moving illuminators or a moving target, essentially by perturbing, in a systematic way, the place-in-space navigational time series fed to the computing subunit simulating each. This generalized to orientations. Such manipulations can also be used to ascertain the variogram range of a response due to perturbations in estimates of location, and by coloring the perturbations along all the covariates appropriately, the performance of the response in this multidimensional space can be characterized.

    This often reveals imperfections in response which would otherwise be missed by local Taylor linearizations.

  2. I really like this piece of work. So often in global warming models and analyses, the major features show up in the most basic analysis/model. One of the great strengths of climate science is the fact that sophisticated models/analyses differ very little from the basic.
    As a relative layperson, this gives me great confidence in the underlying science/statistics.

  3. It is really interesting for me that Tamino decided to use this gridding concept.

    Because the predominance of mainly US and European stations in the historical part of measurement data sets imho distorts nearly any time series.

    If you consider for example the GHCN daily data set, with currently about 40,000 temperature measurement stations all around the world, you immediately see upon a look at the file containing the station list that nearly half of them are located in CONUS.

    And if you distribute them over a 2.5-degree grid, in a run over all stations active at some time, you furthermore see that of the 100 grid cells containing the most GHCN daily stations, 96 are in the US, and together they total nearly 16,000 stations! The top grid cell alone has 364.

    It is somewhat evident that if you don’t perform any gridding, 22,000 stations outside of the US have to compete, in the global average, with 18,000 US stations, which makes the globe look like the US’s backyard.

    No wonder: over 1,700 grid cells contain fewer than 5 stations, and over 800 (i.e. about 33%) contain only one.

    Applying this gridding then lets about 200 US grid cells compete with some 2,500 cells worldwide, which is fair enough.
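    The station-per-cell binning described above can be sketched like so (a simple counter over station coordinates; parsing the GHCN station file is omitted, and the function name is mine):

    ```python
    from collections import Counter

    def cell_counts(lats, lons, size=2.5):
        """Count stations per size-degree latitude-longitude cell.

        Cells are indexed by (row, col) from the south pole and the
        antimeridian, matching a simple floor-division binning."""
        cells = Counter()
        for lat, lon in zip(lats, lons):
            cells[(int((lat + 90) // size), int((lon + 180) // size))] += 1
        return cells
    ```

    Two stations a fraction of a degree apart land in the same 2.5° cell, while a station on another continent gets its own, which is exactly the imbalance the gridding corrects.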

    The same is valid for the PSMSL data set: the topmost grid cell, near Vancouver and Seattle, encompasses 19 tide gauges, and the 20 topmost cells encompass 256, most of them located in the US, Europe, Japan, and Australia.