Still getting comments from *Dean1230* about this post. He insists that the cubic fit is not significantly better than the linear model (for the tide gauge data from Sewell’s Point), and that the cubic model shouldn’t be extrapolated (which I said in the first place).

I suspect he thinks he now has ironclad proof of his point. It’s based on extrapolating curve fits from limited data that ends before the present.

There’s this one:

Snarkrates,

In another post, I suggested doing curve-fits up to 2000, and then projecting them from that point forward. What happens with AIC if you only use the data to 2000? Does it still exhibit the much higher predictive power?

There’s also this:

No, if you limited your information to ending in 1960, then by definition the end of that line in a cubic would be up-turned. It would not be a predictor of the future. But it still has a better AIC than the linear model.

If I did this right (using annual data for Sewell’s Point and the linear and cubic fit models in R), the AIC for the linear model for the data up to 1960 is 316.7091, the AIC for the cubic fit is 311.7673, hence the cubic fit is “better”. And yet, it still had no predictive ability. If we extrapolated the cubic fit from 1960 to 2010, the predicted rise would be almost 2 meters! The actual rise is about 200 mm.

Dean, you don’t know what you’re doing.

First of all, you’re mistaken that “by definition the end of that line in a cubic would be up-turned.” It depends on when the inflection point occurs. In this case a cubic does end up-turned, but not “by definition.” That’s a minor point.

It’s true that a cubic model has lower AIC than a linear one for this time span. We could even call the difference “significant.” Here’s the data, up to and including 1960, together with a linear fit (in blue) and cubic fit (in red):

But it’s not true that AIC suggests the cubic model is best. In fact it specifically suggests it isn’t.

I didn’t pick “cubic” out of a hat. I picked it because it had the best AIC of all polynomial models from 1st to 10th degree. Let’s see what happens when we do that using data only up to the year 1960:

It turns out that it’s not the cubic model which gives the lowest AIC, it’s the quadratic model. Adding a cubic term isn’t justified by this analysis. But Dean, you didn’t think about that. Because you really don’t know what you’re doing.
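The degree-selection step described here, fitting polynomials of degree 1 through 10 and keeping the one with the lowest AIC, can be sketched in a few lines. This is a toy version on synthetic data (not the Sewell’s Point record), using the Gaussian least-squares form of AIC; the series and noise level are made up for illustration:

```python
import numpy as np

def poly_aic(t, y, degree):
    # AIC (up to an additive constant) for a least-squares polynomial fit,
    # assuming Gaussian residuals; k counts the polynomial coefficients
    # plus the residual variance.
    n = len(y)
    resid = y - np.polyval(np.polyfit(t, y, degree), t)
    k = degree + 2
    return n * np.log(np.sum(resid**2) / n) + 2 * k

# Synthetic stand-in for a short annual series (NOT the real tide-gauge
# data): a quadratic trend plus noise, on a scaled time axis.
rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 33)
y = 50.0 * t + 60.0 * t**2 + rng.normal(0.0, 20.0, 33)

aics = {d: poly_aic(t, y, d) for d in range(1, 11)}
best = min(aics, key=aics.get)
print("lowest AIC at degree", best)
```

With a genuinely quadratic underlying trend, the quadratic model beats the linear one decisively, while higher degrees gain little and pay the 2k penalty.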

If we extrapolate the quadratic model up to 2010, we would **not** conclude that “*the predicted rise would be almost 2 meters!*” We’d get this:

This further illustrates **my point**: that extrapolating statistical models to make predictions is fraught with danger at best, and a fool’s errand at worst. Yes, that was the point.
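A deterministic toy example of why that danger is real: two polynomial fits can agree closely everywhere inside the data window and still disagree wildly outside it. Nothing here is tide-gauge data; the curve is just a gently rising function chosen for illustration:

```python
import numpy as np

# Fit degree-1 and degree-3 polynomials to a gently rising curve on [0, 10],
# then compare the two fits far outside the fitting window.
t = np.linspace(0.0, 10.0, 50)
y = np.sqrt(t + 1.0)  # the "observations": slowly rising, no noise

p1 = np.polyfit(t, y, 1)
p3 = np.polyfit(t, y, 3)

inside = np.max(np.abs(np.polyval(p1, t) - np.polyval(p3, t)))
outside = abs(np.polyval(p1, 30.0) - np.polyval(p3, 30.0))
print(f"max disagreement inside the data: {inside:.2f}")
print(f"disagreement at t = 30:          {outside:.2f}")
```

Inside the data both fits describe the curve about equally well; three window-lengths past the data, the cubic term dominates and the two “equally good” models no longer agree.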

Now let’s consider what it was that I actually said about the cubic model in the original post:

Of course it’s only a model and maybe (almost certainly in fact) not the best one, but it does prove (in the statistical sense) one thing: that the trend is not a straight line. It’s not. Claiming that it is, is foolish.

Note that I said the cubic model was almost certainly not the best one. I certainly didn’t flatter it. The only useful result I ascribed to it was to prove (in the statistical sense) that the trend is not a straight line.

I went on to extrapolate the cubic model, not to suggest that such was a good idea but to show how different it was from a linear extrapolation.

Then, I stated explicitly that it’s *not* valid to extrapolate this to the end of the century. Same goes for the linear model. **That was the point**.

Dean, you have two choices. First: you can repeat this exercise using data up to and including 2000. That will enable you to come back here and continue to argue with those of us who know what we’re talking about. You don’t. Second: you might admit to yourself that you don’t know what you’re talking about and actually *learn* something.

It seems to me that those two choices are mutually exclusive.

To continue your pedagogical work, you should produce the projected uncertainties of the extrapolation for each model. I suspect that future error bars grow much faster for higher-order polynomials.

[Response: Indeed it is, and I’ve often warned that polynomials are a terrible choice for extrapolation because they explode to infinity very rapidly. But such projected uncertainties are pretty much useless because they assume that the model is correct; in this case it’s a safe bet that it’s not.]

Given the small sample sizes involved, you should probably be using AICc, which will penalize additional parameters more.
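For reference, the small-sample correction mentioned here (Hurvich & Tsai’s AICc) adds a penalty of 2k(k+1)/(n-k-1) on top of AIC, a term that vanishes as n grows. A minimal sketch, with arbitrary numbers just to show how the correction scales:

```python
def aicc(aic, k, n):
    # Small-sample corrected AIC: the extra term vanishes as n -> infinity.
    return aic + 2.0 * k * (k + 1) / (n - k - 1)

# Extra penalty a cubic fit (k = 5: four coefficients plus the variance)
# pays at two sample sizes; the correction bites hard when n is small.
for n in (30, 300):
    print(n, aicc(100.0, 5, n) - 100.0)
```

At n = 30 the cubic pays an extra 2.5 AIC units; at n = 300 the correction is nearly negligible, which is why AICc matters most for short records.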

Ok, let me say this as carefully as I can.

[edit]

[Response: Let me say this as plainly as I can: you had a real opportunity to learn something, but you’re so interested in clinging to your mistake that you won’t take it.]

6) If your point is to not extrapolate the data – which I fully agree with – then why attack the choice of a linear model at all?

[edit]

[Response: Because the linear model has been proposed, not just as the right way to prepare for the danger of future sea level rise, but as the only way allowed by law to do so. Yes, that’s what the North Carolina state legislature proposed.

Perhaps we can agree on one thing: this blog is not for you.]

Another poor model that is often proposed is that the systems aren’t changing… that Y_{t+n} = u_y + e_{t+n}, with e coming from some unsurprising distribution. I think the statistical models are good at disproving those sorts of models as well.
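The “nothing is changing” model is exactly the kind of null that an AIC comparison can knock down. A toy illustration on synthetic trending data (not any real series), again using the Gaussian least-squares form of AIC:

```python
import numpy as np

def gaussian_aic(resid, k):
    # AIC (up to an additive constant) from least-squares residuals.
    n = len(resid)
    return n * np.log(np.sum(resid**2) / n) + 2 * k

rng = np.random.default_rng(3)
t = np.arange(50)
y = 1.5 * t + rng.normal(0, 10, 50)  # trending series plus noise

aic_const = gaussian_aic(y - y.mean(), k=2)  # Y_t = mu + e_t
trend = np.polyval(np.polyfit(t, y, 1), t)
aic_trend = gaussian_aic(y - trend, k=3)     # slope, intercept, variance
print(aic_const - aic_trend)  # large and positive: the no-change model loses
```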

For good extrapolation, I think you need a physics model. But the statistical models could then examine the residuals and see if the physics model is missing anything significant.

Dean,

The original post linked to Watts, who proclaimed no evidence of acceleration, and based on that used his simple linear model to project into the future, which he compared to a projection from a WaPo article. Tamino said multiple times that projecting that far is not a good idea. How many times must he say it?

Tamino’s point was explicitly NOT to extrapolate, but to show with some analysis that there are better models. Not a best model, just better models. His point was just to show that Watts didn’t do enough work to show that his assertion regarding a lack of evidence of acceleration is true. Tamino did the work, and found evidence.

I use models too, when evaluating the efficacy of a vaccine in development. Not really the same thing as the topic at hand here, but when I evaluate the efficacy of a vaccine on some population of animals, there are more appropriate models to use when assessing if one vaccine provides superior protection to another. That’s the point. Some models are more appropriate than others.

That was the purpose of my question. AIC is a very useful tool for choosing the best description of the data you have in hand. But outside the observation range, that’s another question. Extrapolation is a mathematical form of inductivism, which is dangerous.

The problem here is that in policy you have to extrapolate from what you know. Maybe the appropriate way of extrapolating from the current observations, in the absence of a physical model, is to present the error bars of the extrapolation for a few statistical representations.

I am thinking out loud. This is an epistemological question with practical application. Maybe someone versed in decision theory under uncertainty could help.

[Response: Even presenting the error range for the model projections is perilous — because if the statistical model doesn’t subsume the real behavior (and it almost never does) then that only represents the possible error due to model misfit. For example: if our model is “no change at all,” then the error bar on the estimate will be easy to compute and will also be constant, but if that model is wrong, both the forecast and its error bar are useless.

I suggest a much better way to project the possible range of sea level rise is to involve some physics. For Watts and friends, the problem with that is that they don’t like the answer.]

@[Response]

The need to ‘involve some physics’ in SLR projections is of course evident with the AIC analysis yielding a cubic result. This results in a profile akin to the global temperature profile, even suggesting what that physical relationship could look like at Sewell’s Point. The physics itself tells us that projections of sea level will need to account for future global temperature, a relationship investigated directly by, for instance, Vermeer & Rahmstorf (2009). (Of course Cap’n Watts and his motley crew also make much of disputing projections of global temperature.)

I’m wondering if dean1230 is tripping over the difference in meaning between “significance” in statistician’s speak and “significance” in Every Man’s speak.

All I can say is it’s a good thing Sewell’s Point sea level peaked in about 1965, as shown on your last graph with the fat red line (the only line that matters on graphs).

Otherwise, I’d be worried about all this talk of rising seas.

“Premature Extrapolation”

— by Horatio Algeranon

Sea-level peaked in ’65

By quad extrapolation

After that, it took a dive

Thank God for mathturbation!

:)

“The Fat Red Line”

— by Horatio Algeranon

On graphs, we see a fat red line

Which means “Look here, this is a sign”

That tells us what is yet to pass

The future’s red, in crystal glass

Is this the time to mention that if you calculate the slope of the first 43 of 86 years in the annually aggregated distribution and project it forward, it clearly (though not quite significantly at the .05 level) underpredicts the slope found in the points for the next 43 years (4.4 mm/year +/- .4 versus 5.2 mm/year +/- .5)?
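The split-sample comparison described above (slope of the first 43 years versus the next 43, each with a standard error) is easy to reproduce in outline. This sketch uses synthetic data with mild built-in acceleration, not the actual gauge record, so the numbers will differ from those quoted:

```python
import numpy as np

def slope_with_se(t, y):
    # OLS slope and its standard error.
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    tc = t - t.mean()
    slope = np.sum(tc * (y - y.mean())) / np.sum(tc**2)
    resid = y - y.mean() - slope * tc
    se = np.sqrt(np.sum(resid**2) / (len(t) - 2) / np.sum(tc**2))
    return slope, se

rng = np.random.default_rng(1)
years = np.arange(86)
level = 4.0 * years + 0.01 * years**2 + rng.normal(0, 30, 86)  # mm, made up

s1, se1 = slope_with_se(years[:43], level[:43])
s2, se2 = slope_with_se(years[43:], level[43:])
print(f"first half : {s1:.2f} +/- {se1:.2f} mm/yr")
print(f"second half: {s2:.2f} +/- {se2:.2f} mm/yr")
```

With any acceleration present, the second-half slope tends to exceed the first-half projection, though whether the gap clears the standard errors depends on the noise level.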

As well, the variance appears to be increasing in recent decades, but again, the increase does not reach the standard level of significance.

Tamino, you are a good teacher. Thank you for seeing the moment.

Here’s my take on this.

Fitting a polynomial is an exercise in describing what happened. Choosing between a linear fit and a cubic fit allows you to distinguish between the “acceleration” and “no acceleration” cases. This is an interesting question in and of itself.

The fits show an interesting pattern — sea level was not increasing at a constant rate. In fact, the rate was decreasing at one point (though always positive) — but now the rate is increasing.

You can’t use this cubic fit to extrapolate because it has no physics behind the math. It’s just a description of what happened. If you want to know why the curve has the shape it has, you should go back to sea level models and look at what could be driving the change in sea level.

Isn’t that a pretty good description of what climate modeling usually does? That is, it simulates the physics in order to see what factors can account for a given set of observations (or sometimes posited outcomes or boundary conditions).

That, at any rate, is what I’ve taken away from various readings on the topic, including snarkrates’s comment that the real point of modeling is not so much prediction as understanding (though predictions/projections get made simply because we need them so badly).

I thought the real point of climate modeling (and climate research in general) was to get more grant money (and hockey sticks get made simply because mann needs them so badly )

My only point was simply a restatement of what I think Tamino is trying to say — this (cubic vs. linear) is not modeling. It’s using math to make a quantitative description of what the data looks like.

Dean seems to be confusing the two.

Ice warming under mechanical stress (gravity) is inherently discontinuous, while our ice dynamics models assume continuous domains. Statistical data collected on colder ice does not predict how warmer ice will behave.

We know that there are many deep fjords under big ice. This opens the door to calving events as seen in Chasing Ice at minute 64. Starting May 7, 2014 there was another significant calving event. http://neven1.typepad.com/blog/2014/06/jakobshavn-calves-another-big-one.html

The published ice driven sea level rise estimates may be optimistic. We may lose a lot of ice into the sea via calving.

At this point, ice modeling is still mostly about teaching.

do AIC scores have error bars?

thanks jacob l