Foolish Line

Anthony Watts is on a very long list of those who ridicule the threat of sea level rise. As many others have done, he shows a tide gauge record (for Sewell’s Point, near Norfolk, Virginia):

[Figure: Sewell’s Point tide gauge record (8638610)]

He also shows the data from Portsmouth, VA, although it is of shorter duration and doesn’t go past 1990. Then he claims that there is no discernible acceleration in either data set and declares that the rise is linear:


The most important thing to note is that unlike the steeply vertical graph in the WaPo article showing up to 8 feet of projected sea level rise, there is no acceleration visible in either of these two tide gauge graphs. They illustrate the slow, linear, subsidence that Nature has been doing for thousands of years.

Note that he also blames the rise on subsidence — the sinking of the land — which is a real factor in this area but is not the only reason sea level is rising so fast there. That’s “Uncle Willard” for you.

Then Watts declares that he’ll “do the math”:


So, let’s do the math to see if the data and claims match. We’ll use the worst case value from Sewell’s Point tide gauge of 4.44mm/year, which over the last century measured the actual “business as usual” history of sea level in concert with rising greenhouse gas emissions in the atmosphere, with no “mitigation” done in the last century of measurements.

Their claim is for the “business as usual” scenario: “by the end of this century, the sea in Norfolk would rise by 5½ feet or more.”

1. At the year 2014, there are 86 years left in this century.
2. 86 years x 4.44 mm/year = 381.84 mm
3. 381.84 mm = 15.03 inches (conversion here)

Apparently Uncle Willard’s idea of “doing the math” is: arithmetic. I’d say it’s rather revealing that one of the things he felt the need to “document” is how to convert mm to inches. Such mad math skillz!
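
For what it’s worth, the entire calculation fits in a few lines. Here is a minimal Python sketch of the same arithmetic (the 4.44 mm/yr rate and the 86 remaining years come straight from the quote above; nothing else is assumed):

```python
# Watts' "math": a straight-line extrapolation of the quoted trend.
rate_mm_per_yr = 4.44            # linear trend quoted for Sewell's Point
years_left = 2100 - 2014         # 86 years remaining in the century
rise_mm = rate_mm_per_yr * years_left
rise_inches = rise_mm / 25.4     # millimetres to inches
print(f"{rise_mm:.2f} mm = {rise_inches:.2f} inches")
# -> 381.84 mm = 15.03 inches
```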

One of the real shams behind his extrapolate-the-linear-trend-to-the-year-2100 method is that there is acceleration in this tide gauge record. But far too many people, including a lot of researchers, have missed it because there’s also deceleration.

Here’s the monthly average data for Sewell’s Point from the Permanent Service for Mean Sea Level, with the seasonal cycle removed by a 4th-order Fourier series:

[Figure: monthly mean sea level at Sewell’s Point, seasonal cycle removed]
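
The code behind that step isn’t shown here, but removing a seasonal cycle with a 4th-order Fourier series is straightforward. Below is a minimal sketch (not necessarily the procedure actually used; the array names t and y are assumptions, standing for the PSMSL decimal-year dates and monthly means in mm):

```python
import numpy as np

def remove_seasonal_cycle(t, y, n_harmonics=4):
    """Fit a constant, a linear trend, and a 4th-order Fourier series for the
    annual cycle, then subtract only the Fourier (seasonal) part from the data.

    t : decimal year of each monthly value (e.g. 1928.042, 1928.125, ...)
    y : monthly mean sea level in mm (missing months dropped beforehand)
    """
    # Design matrix: constant, trend, and sine/cosine pairs at 1..n cycles per year.
    cols = [np.ones_like(t), t - t.mean()]
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t))
        cols.append(np.cos(2 * np.pi * k * t))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    seasonal = X[:, 2:] @ beta[2:]   # the Fourier terms only
    return y - seasonal
```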

We’ll start by converting monthly data to annual averages. This too will remove the seasonal cycle, and it will greatly reduce the autocorrelation of the data, making our statistical tests much more accurate.

[Figure: annual average sea level at Sewell’s Point with best-fit straight line]

I’ve included the best-fit straight line (by linear regression), which suggests an overall rise rate of 4.56 mm/yr. But — is the trend really linear?
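
Both of those steps (the annual averaging and the straight-line fit) take only a few lines. A minimal sketch, with t and y as above; this is an illustration, not the exact code behind the figure:

```python
import numpy as np

def annual_means_and_trend(t, y):
    """Average the deseasonalized monthly values into calendar-year means,
    then fit a straight line by ordinary least squares.

    Returns (years, annual_means, slope) with the slope in mm per year.
    """
    yr = np.floor(t).astype(int)
    years = np.unique(yr)
    annual = np.array([y[yr == v].mean() for v in years])
    slope, _intercept = np.polyfit(years, annual, 1)
    return years, annual, slope
```

Run on the Sewell’s Point record, the slope should come out near the 4.56 mm/yr quoted above.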

Many have tried to find acceleration by fitting, not a straight line, but a parabola (a 2nd-degree polynomial). That suggests a very slight acceleration which is not statistically significant. Does that mean Watts is correct, that there’s no discernible acceleration?

The problem with a parabolic model is that it is a constant-acceleration model. Maybe the acceleration isn’t constant. Maybe we need a more complex model to find it with statistical significance. Let’s try all polynomial degrees from 1 (straight line) to 10 (10th-degree polynomial) and use AIC (Akaike Information Criterion) to estimate which is giving the best fit when compensated for the extra degrees of freedom:

[Figure: AIC values for polynomial fits of degree 1 through 10]
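
The exact AIC convention isn’t spelled out here; for least-squares fits with Gaussian residuals a common form is AIC = n·ln(RSS/n) + 2k, up to an additive constant that cancels when comparing models. A minimal sketch of the degree-1-through-10 comparison (illustrative only, using the years/annual arrays from the sketch above):

```python
import numpy as np

def aic_by_degree(years, annual, max_degree=10):
    """AIC for polynomial fits of degree 1..max_degree, assuming Gaussian residuals.

    Only differences in AIC between degrees matter, so the additive constant
    common to all the models is dropped.
    """
    n = len(annual)
    # Center and scale the predictor so high-degree fits stay well-conditioned.
    x = (years - years.mean()) / years.std()
    aic = {}
    for deg in range(1, max_degree + 1):
        coeffs = np.polyfit(x, annual, deg)
        rss = np.sum((annual - np.polyval(coeffs, x)) ** 2)
        k = deg + 2                  # polynomial coefficients plus the residual variance
        aic[deg] = n * np.log(rss / n) + 2 * k
    return aic
```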

The “winner” is the model with lowest AIC, which in this case is the 3rd-degree (cubic) polynomial. Here’s what it suggests is the trend in this tide gauge record:

[Figure: cubic (3rd-degree polynomial) fit to the annual averages]

Interesting! This model (which, by the way, is statistically significant even after correcting for autocorrelation) suggests deceleration early and acceleration late. Of course it’s only a model and maybe (almost certainly in fact) not the best one, but it does prove (in the statistical sense) one thing: that the trend is not a straight line. It’s not. Claiming that it is, is foolish.

What’s far more foolish is using such a model to extrapolate, not just to next year or the next few years, but all the way to the end of the century. Foolish.

Suppose I used the cubic model (demonstrably better than the linear one!) to extrapolate to the end of the century? That model predicts that sea level will rise between now and the end of the century by over 2.6 meters. Yes, that’s meters. Over 2600 mm. Over 100 inches. Over eight and a half feet.
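
For anyone who wants to see where that number comes from, here is the same kind of extrapolation applied to the cubic instead of the straight line. It is a sketch only; as the next paragraph says, extrapolating either statistical model to 2100 is not a valid forecast.

```python
import numpy as np

def cubic_rise_by(years, annual, to_year=2100):
    """Fit a cubic to the annual means and (unwisely) extend it to `to_year`.

    Returns the implied rise in mm between the last year of data and `to_year`,
    purely as an illustration of where the fitted curvature points.
    """
    x0 = years.mean()
    coeffs = np.polyfit(years - x0, annual, 3)
    now = np.polyval(coeffs, years[-1] - x0)
    then = np.polyval(coeffs, to_year - x0)
    return then - now
```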

But, honestly, it’s not valid to extrapolate this statistical model to the end of the century. Prediction is hard — especially about the future — and extrapolating simple statistical models far into the future is a very poor way to go about it.

But it seems to be the favorite way to forecast future sea level rise by those who deny the reality, human causation, and/or danger of global warming. Not just Anthony Watts, but the North Carolina state legislature. It’s no surprise that when they do, they choose a statistical model which gives a low forecast: a straight line.

It’s the basis for the “line” that future sea level rise is not going to be much of a problem. I suspect that despite scientific evidence to the contrary, despite the best efforts of actual experts, they will continue to toe the line. A foolish line.

56 responses to “Foolish Line”

  1. Reblogged this on Hypergeometric and commented:
    Very fine.

  2. Having read Lori Montgomery’s WaPo piece, and the dismissive screed by Uncle Willard, I was hoping that Tamino would wield his sword to slice the Gordian knot of WUWTisms. And I am not disappointed. Skillz indeed.

  3. I’ve been having running battles about this with people who quote much more sophisticated literature, for example, Trends and acceleration in global and regional sea levels since 1807, S. Jevrejeva, J.C. Moore, A. Grinsted, A.P. Matthews, G. Spada (Global and Planetary Change 113 (2014) 11–22)

    They use the abstract (but the conclusion isn’t much better) to make the claim that there’s no acceleration, and that this falsifies just about everything to do with AGW.

    BTW, your old friend Scafetta is up to his tricks with ACRIM, this time asserting that TSI actually increased from the 60s to 2000.

    http://link.springer.com/article/10.1007/s10509-013-1775-9

    Your 2007 posts are hard to get at from the wayback machine, so I have no idea what your approach to this was.

    • There are two separate issues:

      1) Has acceleration occurred?
      2) Should we expect to see acceleration in observational records if AGW has occurred?

      Answering the second question, Gregory et al. 2013 provides some good insight.

      Focusing on the contribution from ocean thermal expansion first, models do suggest anthropogenic-only forcing should have warmed the oceans at a noticeably faster rate in the second half of the 20th Century than the first, with thermosteric SLR averaging about 3x greater for 1950-2000 compared to 1900-1950. However, the timing of large volcanic eruptions – busy at end of 19thC/beginning of 20thC, lull until latter part of 20thC – means our expectation of actual acceleration of this component should be substantially reduced – their calculations indicating a rate about 1.5x greater for 1950-2000 compared to 1900-1950 when factoring in historical natural + anthropogenic forcing.

      Marzeion et al. 2012 provide an idea of expected contribution from glaciers by creating a model linking regional temperature and precipitation changes to mass balance. Figure 13.5 in AR5 chapter 13 shows the result of the Marzeion et al. model applied to GCM data. Expected glacier losses under this model are found to be slightly larger overall in the first half of the 20thC than in the second half.

      These two factors – thermal expansion and glacier mass loss – are expected to have contributed the majority of climatically-relevant historical SLR. In summary there is a relatively small expected acceleration in thermal expansion, and a small expected deceleration in glacier loss, according to these results. Overall there is only a very small expected acceleration – on average around 20% greater trend in the second half of the century – with zero acceleration a plausible outcome.

      Once you factor in the low standard of the observational network prior to about 1960 the likelihood of being unable to conclusively detect acceleration over the 20th Century becomes quite large.

      • Paul, I’ve made that exact argument (without being able to substantiate it) that when all factors are considered, the amount of acceleration is at the limits of detection, and that to make a good test, first you have to have an estimate of what the observed acceleration should be noise free, and then throw the noise on top. I swear, I’ll take the MOOC on R one of these days.

        Thanks for the Gregory reference. I didn’t have it, and as soon as the server clears up, I’ll grab it.

  4. N.B. The phrase is actually “toe the line”, from the early days in boxing when lines were drawn on the floor that the boxers had to “toe” (place their toes on) or be ruled unable to continue and thus lose.

    [Response: Thanks!]

  5. Never mind the impending melting of Antarctic and Greenland glaciers which will alter WHATEVER trend it was for the worse.

  6. I don’t understand how anyone falls for Watts’s argument. I think you could show that essay to kids in junior high and they could readily point out how ridiculous it is to assume that the rate will remain flat.

  7. Can I ask a question unrelated to sea level rise, but clearly related to auto-correlated time series?

    Please consider the idea of an hourly probability of precipitation, as displayed by Weather Underground, http://www.wunderground.com/weather-forecast/zmw:55450.3.99999

    What does that mean? If there was an uncorrelated 50% probability of precipitation each hour for 24 hours, then the odds of some rain that day would be essentially 100%. But we know that’s not what they mean. Anyway, if this is something you (Tamino) or anyone else here knows, I’d like to know. Thanks!

    [Response: Let me think about that one. Anybody have some insight to offer?]

    • The graph makes no sense on the face of it, regardless of auto-correlation. The weather icon annotation “chance of rain” appears against plotted hourly probabilities that would make rain almost certain. Presumably the graph is of daily precipitation probability distributed by hour, so the y-axis unit should be “% per day” or “% per 24 hours”. You should refer WU to xkcd on labeling your axes.

      BTW, Global warming is not linear anyway, so linear extrapolation of an outcome is a dopey choice….

      • That was my “best guess” because I figured they’d have to be consistent with the way people are used to previous forecasts. That is, if the graph showed a 20% chance of precipitation straight across the day, I’d figure that was the same as an older-style forecast that simply said “20% today”. But then I’m not sure the absolute number in any hour really tells you anything without reference to the other 23 hours in the same day. Is a 30% hour the same in a day that averages 20% as it is in a day that averages 50%?

    • The odds of some rain that day would be essentially 100%, but over a large area, not at one particular location. The forecast model could predict with very high confidence that there would be light showers that day, and be proven correct, despite the fact that only 10% of locations actually got wet. So presumably, their forecast to the public would be a 10% chance of rain, because that’s your probability of getting wet where you happen to live, even though there’s virtually a 100% chance of rain somewhere in the area for which they’re forecasting.

      • Certainly. I think that’s always been the convention for probability of precipitation – the “standing still” odds. (Your mileage may vary if you’re a storm chaser.)

    • I would venture to suggest that it means the weather forecasters were using an ensemble of model runs for that day and there was rain in that location during that hour in 50% of the model runs.

  8. Bernd Palmer

    “they choose a statistical model which gives a low forecast: a straight line.” The last figure above (3rd degree) doesn’t show a forecast, or does it?

    And it doesn’t say anything about what’s causing the swings. Let’s say you were an observer in the year 1960. Would the 3rd-degree line have enabled you to forecast the reversal of the line in the following years?

  9. John Garland

    I’m not familiar with wunderground’s specific methodology, but I don’t think it is odd to say that the probability of getting a head if you flip a coin at 1:00 is .5 and the probability of a die turning up a 2 at 2:00 is .1667, and that if you do this at enough 1 and 2 o’clocks, you will see at least one head and one 2 100% of the time.

    I suspect that is what they are reporting: The instantaneous probability of rain at any particular moment (hourly interval).

  10. Is Watts consistently foolish? A foolish consistency is the hobgoblin of being paid to be consistently foolish. Which, if he is consistently foolish, would make him a paid propagandist, not a skeptic, I would think.

  11. I am a firm believer in the K.I.S.S. principle – Keep It Simple, Stupid…

    While you’ve shown that the 3rd order poly fits best using AIC, the 3rd order AIC value is only slightly better than the 1st order – 845 to 854 (less than 1% difference). So is such a small difference in AIC really sufficient to say that a linear fit is a bad fit, as this article claims? The eyeball test seems to say that linear is plenty good to match the data, especially given the fluctuations in the raw data. To me, there’s not nearly enough justification to overrule linear as being a good fit, at least when looking at the annual variations in the data.

    [Response: It’s great to “keep it simple” — until you make it *too* simple. The likelihood of one model over another doesn’t depend on the *fractional* difference in AIC, but on the absolute difference.

    And, one can forget about AIC and simply test the cubic fit (compared to the linear) with standard statistics. One can also include an autocorrelation correction (the annual averages have much less autocorrelation than monthly, but it’s still there). Result: the cubic fit is demonstrably better than the linear.

    And both models are not appropriate to be extrapolated a century into the future.]
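
    (For readers who want to reproduce that comparison, one standard way to test a cubic against a nested linear model is a partial F-test. The sketch below is illustrative only; it omits the autocorrelation correction mentioned above, which would reduce the effective sample size, and is not necessarily the exact test used.)

    ```python
    import numpy as np
    from scipy import stats

    def nested_f_test(years, annual, deg_small=1, deg_big=3):
        """Partial F-test: does the cubic (deg_big) fit significantly better
        than the straight line (deg_small)?  No autocorrelation correction."""
        x = years - years.mean()
        n = len(annual)

        def rss(deg):
            resid = annual - np.polyval(np.polyfit(x, annual, deg), x)
            return np.sum(resid ** 2)

        rss_small, rss_big = rss(deg_small), rss(deg_big)
        df_extra = deg_big - deg_small       # extra coefficients in the bigger model
        df_resid = n - (deg_big + 1)         # residual degrees of freedom
        F = ((rss_small - rss_big) / df_extra) / (rss_big / df_resid)
        p_value = stats.f.sf(F, df_extra, df_resid)
        return F, p_value
    ```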

    • Dean1230,
      Unfortunately, this is the wrong way to look at AIC differences. The AIC is an unbiased estimator of the Kullback–Leibler divergence between a model and the “correct” model (which Akaike assumed was included in the set of models under consideration). The thing is that there will be a finite AIC even for the correct model (which we do not know) due to noise in the data, and that will be a constant term (which again, we will not know). Thus, the absolute AIC is meaningless. All that matters are the differences.

      Also, the AIC depends logarithmically on the goodness of fit (that is, on the log-likelihood) and linearly on the number of parameters in the model. So, in reality a difference of 10 in AIC between two models is huge–it’s like getting a likelihood e^10 times better.

      Note that the SIC/BIC has a more punitive cost for each additional parameter in the model, but even here, it’s pretty clear that a cubic will win.

      [Response: Technical note: I think the relative likelihood is proportional to e^(diff(AIC)/2), not e^(diff(AIC)). It’s still substantial. And without AIC at all, the result is still clear.]
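
      (In numbers, with the AIC values of 845 and 854 quoted above, the relative likelihood of the linear model is roughly:)

      ```python
      import math

      delta_aic = 854 - 845              # linear minus cubic, from the comment above
      print(math.exp(-delta_aic / 2))    # ~0.011, i.e. the linear model is ~90x less likely
      ```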

      • Ok, I investigated AIC more – and see the point, but I still don’t agree that the practical implication in this case between a cubic and a linear curve fit is in any way significant. And if I had to calculate this by hand, I’d be more than satisfied with a linear fit of the data.

        The two lines – for all practical purposes in the period being fit – are identical! Any differences between the two only show up in extrapolations of the curves and are therefore questionable.

      • Tamino, that is correct. I was being sloppy.

        Dean1230,
        No, the cubic fit is not the same as the linear. The cubic fit comes much closer to the actual data points (look at the residuals), especially near the endpoints. This is precisely what you would expect if the underlying series is nonlinear. This is probably not something you can “fit by eye”.

        [Response: Two pieces of advice regarding data analysis which I try to emphasize:

        1. Don’t forget to plot the data, look at the graph, and apply what is referred to scientifically as “visual inspection.” It gives you great ideas.

        2. Don’t trust those great ideas based on visual inspection. Apply numerical analysis.]

    • I still don’t see a linear fit as being “too simple”. The difference between the two methods, when compared directly to each other, is negligible. Using the curvefit algorithms in Excel shows that there’s no difference between the two (both have R^2 of 0.30 – the cubic is 0.3082, the linear .307). Only when you extrapolate do the two differ, and that then brings in the need to heavily caveat the extrapolated data.

      And it’s not about extrapolating out 100 years, it is extrapolating at any timeframe. Extrapolating curve fits is dangerous work and fraught with misleading conclusions.

      [Response: First: your numbers are wrong. Whether using annual averages, monthly averages, or monthly anomaly, your numbers are wrong. Second: R^2 is NOT (I repeat, NOT) a proper test of statistical significance. Third: the difference between the two models is not negligible, which can be shown in multiple ways. Fourth: extrapolating to the very near future is legitimate. Fifth: I suspect you won’t believe me in spite of the fact (or perhaps because of the fact) that you’re wrong on all counts.]

      • I was mistakenly using the Norfolk data instead of the Sewell Point data, hence the error.

        That said, even using the Sewell Point data, there’s still no practical difference between these two curvefits given the overall trend. The cubic may better catch the slight variations of the data, but there’s no way I’d say it’s “clearly better”. The biggest direct difference between the two fits is at the endpoints and even then it is only a fraction of the natural variation of the data. And the endpoints have to be taken with caution due to the inherent properties of the curvefits.

      • I am not saying that the analysis you provide is mathematically incorrect, I am saying that the difference you show is irrelevant in the real world. The differences between the two curve fits are negligible when compared to the inherent noise of the data set. Comparing the differences between the two curve fits, the maximum difference is about 20mm (if I’m reading the data right), and it oscillates – the 3rd order poly is over the linear by about 10mm early, under the linear by up to 20mm midway through the data, then over again late, with a clear turn-up at the end due in my opinion to the characteristic of 3rd order polys and not due to the data. The standard deviation of the raw data itself is over 100mm, so any curve fit differential is only a fraction of the overall noise of the data.

        [Response: I hope you don’t take this the wrong way … but I suspect you will.

        The proper “take home” from our discussion is: you really don’t know what you’re talking about. You should stop speaking as though you do know. I’m happy to help people understand when they’re receptive to learning, but I’m not interested in arguing with people who won’t believe me when I tell them.]

    • I was going to ask that question. Thanks for getting in first.

  12. I’m obviously not seeing your point… and would like to better understand it. That said, your lack of interest in explaining things to people who won’t simply take your word for it is disconcerting.

    [Response: Your failure to comprehend the difference between those who “won’t simply take my word for it” and those who won’t listen, is disconcerting.]

    Why is what you claim so much better than linear? The difference between a linear fit and a cubic fit is never more than 30% of the inherent noise in the system, and the largest variation is only at the end, where cubic endpoint effects seem to dominate. In the middle of the fit, the difference between the two is seldom more than 10mm out of 6800mm. The percent difference between a linear model and a 3rd order poly is never more than half a percent.

    [Response: Why not, instead of using 6800 mm as the absolute to make things look small (or bothering to find out what the zero point means), just call “sea level” the height above the bottom of the Marianas Trench. That should make the percentage difference *really* tiny.

    And while we’re at it, why worry about global warming of a few degrees when the difference between daytime high and nighttime low can be 30 deg.C or more?

    It’s not my job to educate the unwilling.]

    • Question–Tamino, are you assuming normal errors on the data?

      Dean1230,
      OK, look – likelihood measures goodness of fit. AIC is 2k – 2 log-lik, so it, too, is measuring goodness of fit. If the difference in goodness of fit between quadratic and cubic is ~10, that means that the difference in likelihood for the two fits is ~e^6; that is, the cubic fits the data about 400x better than the quadratic in a likelihood sense. That is significant.

  13. Snarkrates,

    Thanks. I can see your point from the calculation of AIC, and I’ll admit that if you want the best possible match of the data, then a cubic does do that better than a linear model.

    But given that the linear and cubic models give almost identical answers (less than 0.5% difference – given the data as shown), why is the linear trend not sufficient in describing the overall situation? Is that 0.5% difference really important?

    By the way, I also am not claiming that the linear trend is a proper prognostication of the future. In fact, these are purely mathematical models and ANY prognostication is extremely speculative! I wouldn’t use either of these mathematical models to predict future values as they have no physical meaning.

    Or said another way, if you gave the plotted data to a statistician and asked what they’d suggest I use as a curve fit, how many would say something other than linear? The eyeball method sure seems to say this thing is pretty damn linear.

    [Response: I AM a statistician.

    Given only the plot, most of my colleagues would tell you that it is approximately linear, but without analyzing the data they can’t really say whether that’s good enough. Because you can’t. I even pointed out (in a recent reply) that the “eyeball” method shouldn’t be trusted. I am constantly struggling both with those who are too lazy to look at a graph (and get some real insight and great ideas from it), and those who are too willing to go no further than looking at a graph, making claims based only on the eyeball. Weren’t you listening?

    As for extrapolating (beyond the briefest of time spans), nobody has claimed that’s valid. I didn’t — for either model, in fact I closed by saying “But, honestly, it’s not valid to extrapolate this statistical model to the end of the century. Prediction is hard — especially about the future — and extrapolating simple statistical models far into the future is a very poor way to go about it.” As a matter of fact, the folly of extrapolating statistical models was the point.]

    • Dean1230,
      No. The AIC is not about getting the “best fit” to the data. If that were true, you’d simply use an n parameter fit for n data points! AIC is about maximizing predictive power of the model – hence the penalty term in the number of parameters of the model. AIC is not an ad hoc quantity – it is related to the Kullback–Leibler divergence, which is a pretty fundamental way of measuring the difference between two distributions. Once the AIC for a higher-order model is significantly less than that for a simpler model (and significant is usually taken as around 3 or so), you are justified in using the higher-order model. Look into the book by Burnham and Anderson:

      Burnham, K. P.; Anderson, D. R. (2002), Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach (2nd ed.), Springer-Verlag, ISBN 0-387-95364-7.

  14. By trade, I’m an engineer. I have always had to struggle with how much information is necessary to describe the situation. In taking numerical analysis classes during school and in practical applications for 20+ years, I always want to look at the error introduced by the assumption. In this case, the error introduced due to assuming a linear trend is minimal, and not much different than the error introduced assuming a 3rd order polynomial.

    I also live by the credo that the enemy of “good” is “better”. Once you find something is “good enough”, stop! And yes, that’s totally due to my being an engineer.

    [Response: I think there’s an old saying, “Don’t let the perfect be the enemy of the good.” It’s well worth heeding.

    But consider this: the real issue (as defined by Watts himself) is whether or not there’s acceleration in the data. The linear model necessarily has zero acceleration. Hence for answering the question at hand, it’s not even “good.”]

    • By training, I am an engineer as well, and most engineers are trained to use the right tool for the right job, sure you can bang in a screw with a hammer (your “good”), but that doesn’t mean there is no justification in using a screwdriver (your “better”). You say that the error of the linear model is minimal, the AIC says that the reduction in the error for the cubic model justifies the additional complexity. The worst kind of engineer is the sort that sticks with his or her intuition/eyecrometer, when well established practice shows their intuition to be faulty.

      Time series analysis is Tamino’s territory, do try to learn from him.

  15. I’d like to try to add to this discussion. Regarding whether or not one model is better than another in practical terms over the observed time frame, there is a particularly important consideration with respect to cause. Watts claims the cause of the increase in sea level is a simple (linear) continuation of land subsidence. Question: why would land subsidence be fast, then slow down, and then speed up again, as the cubic shows? It shouldn’t. But if the cause is global temperature, then you’ll tend to see more melting and ocean expansion during the faster-warming periods, and less when the earth isn’t warming so much. The cubic fit of sea level rise fits much better with the latter hypothesized cause (here I’m fitting by memory rather than by eye, so take with several grains of salt).

    • Steve L.,
      That is indeed an important point. The time dependence cannot necessarily be extrapolated into the future. However, it can preclude–or at least make it difficult to rationalize–some causes.

  16. I am perhaps treading where I should not with respect to the conversation between Tamino and dean1230. I’m a regular reader but only post here very occasionally. Consider me a layperson with a deep interest in climate change and its consequences. I had one introductory course in stats a long time ago.

    Here’s my take away. If you just have a linear fit, that doesn’t tell you much. When you perform the 3rd order poly, and can justify doing so (as Tamino has done and even dean1230 admits, it seems), you get a curve that potentially tells you a lot more or at least points you to some potentially important considerations: why was there a flattening of the curve between the late 50s and 1980? And why did the curve turn upward after that? What’s possibly going on? How might we compare these data with other indicators of sea level rise or with factors possibly associated with that sea level rise? With just the linear fit, none of this is evident. I’m sold (so far) that in this case better is better than good.

  17. First, I won’t concede that the linear fit doesn’t tell you much. It tells you a lot! It tells you that the primary and only significant trend is linear, and that any other effect is very minor when compared to it.

    [edit]

    [Response: Who said the linear trend “doesn’t tell you much”? That’s just a straw man. I did say that it tells you *nothing* about acceleration. If you want to make yourself look like an idiot, dispute that.

    As for “significant,” this post illustrates that over the time span extrapolated by Anthony Watts, the acceleration already observed would have a gigantic effect. Or do you regard eight and a half feet by century’s end as not “significant”?

    We tend to use the word “significant” in its statistical sense, in which its significance is indisputable. Yet you didn’t just “ask” about that, you pronounced judgement on it (mistakenly), by calling the change in AIC too small to matter because you hadn’t a clue what it meant, and by using the wrong statistic (R^2) to evaluate significance (getting the numbers wrong in the process).

    Since you’re unwilling to learn, the least you could do is stop distracting those who are.]

    • Tamino writes: ‘Who said the linear trend “doesn’t tell you much”?’

      Dean1230 might have been responding to Charles Scott, two comments up-thread, who said more or less those words.

      BTW, thanks for the post that started this thread. Very interesting.

  18. John Garland

    I don’t mean this as a personal comment, but rather as a more general one: In my experience, engineers or people trained as engineers (e.g., my own brother) seem to populate the denier side of the climate equation far more than any other numerate group. Certainly some famous deniers with their own sites are engineers, which matches my own observations.

    Are my observations correct? If they are, is there any reason? Don’t post this if you think it will lead to name-calling, etc. That is not my purpose. The purpose is pure curiosity.

    [Response: It may well lead to name-calling and pointless argument, but that’s the nature of the internet. It may also lead to enlightening discussion. It will be my task to send the useless stuff to the trash bin.]

    • I am an engineer by training, and I see no reason to fundamentally doubt the basic findings of the IPCC. The funny thing is that many of the basic ideas can be explained in terms that will be straightforward for engineers to appreciate, such as how the mass balance argument shows us that the rise in atmospheric CO2 is anthropogenic.

      I suspect it is more the case that skeptics tend to use their engineering background as evidence that they speak from a position of greater mathematical/scientific authority than their intended audience. Those arguing for the mainstream scientific position don’t need to do this as we are in agreement with the scientific research community, the IPCC, most of the world’s scientific and engineering institutions, 97% of journal papers that take a position on whether climate change is mostly anthropogenic, etc.

    • John, you’re not the first person to notice this. Google “Salem hypothesis” to learn a bit about the history of this idea.

      My personal experience supports the hypothesis. I think the problem is that engineers are not as numerically and scientifically literate (or sophisticated) as they think they are. In fact most of them have no understanding of the differences between science and engineering. And most of them have practically zero understanding of statistics and how to analyse data.

      As somebody with a science/maths/stats background working in an organisation chock full of engineers I am appalled by the basic mistakes they make on a daily basis. On the plus side, they make me look far more skilled than I really am :)

      As for climate science denial – the head engineer for the whole business is a vocal denier. This is quite sad because we are a business that is completely dependent on rainfall (a hydro electric company) and we have *already seen* significant climate changes affecting our business. Luckily the CEO and the rest of the business accepts the science and runs the business accordingly, ignoring the regular anti-science rants from the head engineer.

      • I notice similar things (at least in software, which is certainly more related to engineering than science..)

        My hypothesis is along the lines of this..

        As someone who trained as a scientist first and then went over to programming computers on the grounds of money, my default approach to a problem is to look at it, try a few different approaches, gather evidence based on these approaches to move towards a solution. Basically an experimental approach.

        And I often find that people who have trained as software engineers will misunderstand this, preferring to try to design the best solution up-front and then implement that. This does seem more of an ‘engineering’ approach.. and certainly if you are building a road bridge you don’t want a few experiments to collapse into the river first.

        (As an aside, software falls into a grey area.. it’s not constrained by physics in the manner of physical or electronic engineering, so although it should be possible to deduce the best solution for a programming problem up front, in practice it’s almost impossible to get right for any non-trivial problem. Hence, all software is wrong.)

        And the real problem with up-front thinking is that it discourages thinking after the solution is arrived at. So if our stereotype engineer comes up with the solution that climate sensitivity is low for whatever reason, they are less inclined to revise that because of additional evidence.

        Bear in mind that the above is chock-full of poorly-justified generalizations and should not be applied to any individuals…

        • “talk.origins”… I feel like Obi-Wan: “Now there’s a name I haven’t heard in a long time.” Haven’t been there since 2400 baud dialup and Usenet.

  19. Given the elaborate three steps of math conducted by the people from WTFWT, it’s no wonder they found no acceleration in the data.

  20. John Garland

    Some examples: Sununu, Gavriel Avital (Israel), Rutan, Simmons/Hoffman (Resilient Earth), the NASA engineers (http://sppiblog.org/news/former-nasa-scientists-astronauts-admonish-agency-on-climate-change-position), the recent qualitative research study of Canadian petroleum engineers noted in Forbes (http://oss.sagepub.com/content/33/11/1477.full) and many others.

  21. Under current warming (BAU) we see 3-9 feet of sea level rise. Not curve fitting. Just extrapolation based on paleoclimate evidence and rates of glacial destabilization.

  22. Horatio Algeranon

    Foolish Lyin’ works too.

  23. I want to thank Paul S for his response, in case it gets lost, and point out that the question raised in Gregory 2013 about the data being good enough to observe acceleration is important. Maybe a discussion of this is more worthwhile than the linear/cubic approximation.

  24. Horatio Algeranon

    “Foolish Lyin'”
    — by Horatio Algeranon

    Extrapolatin’ lines of fits
    Is truly Foolish Lyin’
    About as stupid as it gits
    And somethin’ we ain’t buyin’

  25. You can’t judge the VIMS report by its cover… The lumpy trend map comes from http://sealevel.colorado.edu/content/map-sea-level-trends and is based on satellite data since 1993. On that time scale, variations in the Gulf Stream current have significant effects on the water levels in Norfolk. (See http://tidesandcurrents.noaa.gov/publications/EastCoastSeaLevelAnomaly_2009.pdf)

    Over the longer term time scales discussed in the VIMS report, the glacial isostatic rebound is an important factor, one that peaks along the east coast in the Maryland/Virginia/North Carolina region (per about page 22 of http://co-ops.nos.noaa.gov/publications/Tech_rpt_53.pdf). It accounts for about 50% of the un-accelerated bottom line shown on Darla Cameron’s/Lori Montgomery’s graphic. The steady long term subsidence in Norfolk would be only an additive term, not a 50% multiplicative term on any of the climate-change projected lines on that chart.

    And 4.44mm/year isn’t insignificant–over the course of a 30 year mortgage, that’s more than 5 inches more water for a storm to work with, which isn’t small in a place with lots of waterfront properties very close to sea level. Will a waterfront retirement home be worth giving to your children?

  26. The foolishness seems to be to draw any kind of conclusion from a single, or even two, tide gauges. They are all notoriously influenced by land heave, subsidence, glacial rebound etc. Are any tide gauges linked to GPS satellites? Surely that would enable one to remove any land movement effect?

    Anyway, why bother when there is perfectly good and very well researched global satellite data from the University of Colorado at Boulder. It would be much more interesting to know if an analysis of that data shows acceleration or not.

    [Response: Some tide gauges have recent GPS data available, but only for a very brief time span, as far as I know it’s not long enough to establish the impact of other factors with sufficient precision. The satellite data also cover a very limited time span, one which is not sufficient to establish acceleration or deceleration beyond the influence of known factors of variation (like ENSO). Only tide gauge data provide a long enough time baseline. But using only one or two locations seems foolish.]