Global Temperature: Why So Many Straight Lines?

Probably the most commonly used way to estimate a trend in something is a mathematical process called linear regression. Basically, it means fitting a straight line [for those who must be pedantic: a flat hyperplane if we have multiple predictor variables]. In the case of time series, we use time as the predictor variable and look for a linear relationship. If we find one, we declare “Trend!” and might even posit how big it is.

Why linear? Does anybody really believe that global average temperature since, say, 1970 has followed a straight line? Couldn’t it have wiggled around a little, just a little maybe — not noise, mind you, but genuine signal, real climate change rather than random fluctuation? Might it actually have accelerated, or even decelerated, or — heavens forbid! — taken a “hiatus”? Hell, mightn’t there have been brief episodes of all three, just not strong enough to be detected statistically (for a stickler like me)?

Of course. To my mind, the idea that as far as global temperature goes the climate — the signal, not the noise — followed a perfect straight line, is ludicrous.


So why the hell do I fit so many straight lines? I do it all the time, and when it’s so obvious that the signal is not a straight line that nobody could get away with pretending otherwise, as often as not I’ll resort to making a model out of straight-line pieces.

Then I can wax philosophic about the trend rate during each episode, i.e. along each straight-line piece, and estimate not only how fast it’s going, but how uncertain we are about how fast it’s going.

And I’m not the only straight-line maven. Far from it. Very, very, very far from it. Straight-line models (and that means linear regression) are everywhere. Global temperature, local temperature, rain, drought, snow, ice, rate of CO2 growth, the rate of growth of the rate of CO2 growth, … they’re everywhere.

For all those physical variables, the idea of perfectly linear trend is ludicrous. In many cases, just looking at a graph makes one question whether or not the linear-trend model is even useful, let alone “correct.” Yet linear regression persists. I think a lot of people, including many scientists, don’t fully appreciate what linear regression is useful for, and in some cases damn good at.


I’ll offer my opinion: the most fundamental use of linear regression is to confirm or deny that the trend is doing something nontrivial.

By “trend” I mean the signal, the expected value apart from the noise, and by “doing something” I mean anything except lying there flat as a pancake going nowhere.

Whether or not things are changing is one of the most common and important questions in all of science. That’s the same as whether or not the trend is doing something other than going nowhere. Note that according to my terminology, there’s a trend even when it’s going nowhere and doing nothing — it’s just a flat trend. Others would say that if it’s not doing anything, there’s no trend at all. Po-tay-to, To-mah-to.

For answering this most fundamental question, is there something or nothing, linear regression is terrific! It’s one of the most powerful methods, in many cases the most powerful. I believe that the source of its power is the fact that the null hypothesis — that the trend is doing nothing — is exactly the question of greatest importance. The fact that the “alternate hypothesis” (so says the statistician; the climate scientist might say “model”) is a straight line does not (repeat: not) mean that the real signal is a straight line. Did I emphasize that strongly enough?

If linear regression confirms that something is going on, we can generally rely on its answer to the additional question: is it heading generally up or down? But don’t forget that the estimated rate of change we get out of linear regression is really an estimate of the average rate over the entire interval. The true rate might not follow that straight-line model.
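As an illustration only (made-up numbers, not real temperature data, and not anyone’s published code), here is roughly what that basic test looks like in Python: the slope estimates the average rate over the interval, and the p-value addresses the null hypothesis that the trend is doing nothing.

```python
import numpy as np
from scipy.stats import linregress

# Made-up example: 50 years of an annual "temperature anomaly" series,
# a slow upward signal plus random noise (purely illustrative numbers).
rng = np.random.default_rng(0)
years = np.arange(1970, 2020)
y = 0.018 * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

res = linregress(years, y)

# res.slope estimates the AVERAGE rate of change over the whole interval,
# in units per year; res.pvalue tests the null hypothesis that the trend
# is doing nothing (slope = 0).
print(f"slope  = {res.slope:.4f} +/- {res.stderr:.4f} per year")
print(f"p-value (null: no trend) = {res.pvalue:.3g}")

# Caveat: this simple p-value assumes uncorrelated noise; real climate
# series usually have autocorrelation, which needs extra care.
```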

But hey, we knew that. Everybody knows that! Basic stats, right? Surely nobody would ever take a powerfully significant linear regression (statistically significant, that is) and use that alone to conclude it’s following a straight line? Especially when there’s further evidence that it’s not only doing something, it’s doing something besides just that straight-line stuff.

Alas, too often, even in the scientific literature, I see that basic mistake of extending a linear relationship or interpreting it as physically real (not just a good model, mind you, but physically real) when there’s no justification for taking it that far. I won’t be naming names.

None of which negates the tremendous usefulness of getting good answers to that most basic question. Linear regression has weaknesses (like all methods) and complications (love ’em!), but it remains a powerful, efficient, and effective way to test whether or not change is happening in scientific data.


When the data actually do follow a straight line, not perfectly perhaps but close enough to make a model that’s downright useful, then the rate of increase or decrease will be constant, and it’s very good to know what that rate is.

In many cases we can say that linear regression is the best method, meaning it gives the most precise and accurate answers. In some of the exceptional circumstances that tend to trick the analysis, we have clever methods to avoid the pitfalls (linear regression isn’t just least-squares regression, you know). Here’s another fundamental usefulness of linear regression:

If you want to rely on the idea that the trend is linear, I think you should either have a compelling physical reason to support it, or you should search the data for evidence that there’s something more.


The linear model means a straight line, and that means whether it’s up or down it’s going at a constant rate. What if the rate is changing? Wouldn’t linear regression fail to detect that?

Of course it would. If your model doesn’t include rate change, it’s never going to detect rate change.

That means you need another analysis. A common choice is to fit a quadratic function of time, a model that allows rate change. Then we test the quadratic term (which is responsible for that rate change) for significance. If it passes, we can declare that the rate is not constant, and even give a decent answer as to whether it’s getting faster or slower.
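Here is a minimal sketch of that quadratic test, with toy numbers and using the statsmodels library (one convenient choice among many, not necessarily what was used for this post):

```python
import numpy as np
import statsmodels.api as sm

# Made-up series with a gently accelerating signal plus noise
# (illustrative only; not real data).
rng = np.random.default_rng(1)
years = np.arange(1970, 2020)
t = years - years.mean()                 # center time to reduce collinearity
y = 0.015 * t + 0.0004 * t**2 + rng.normal(0.0, 0.1, size=t.size)

# Design matrix: intercept, t, t^2
X = sm.add_constant(np.column_stack([t, t**2]))
fit = sm.OLS(y, X).fit()

# The third coefficient belongs to t^2; its p-value tests whether the
# rate of change is itself changing (acceleration or deceleration).
quad_coef = fit.params[2]
quad_p = fit.pvalues[2]
print(f"quadratic term = {quad_coef:.2e}, p-value = {quad_p:.3g}")
if quad_p < 0.05:
    print("evidence of rate change:", "speeding up" if quad_coef > 0 else "slowing down")
```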

Again, that doesn’t mean that the signal is actually following a quadratic curve. But it can confirm that it’s not just a straight line, and give us an idea of how large the effect is.

A quadratic curve is only one choice. Another is a function made of two straight-line pieces joined at their endpoints. I call it the continuous (joined at their endpoints) piecewise-linear (made of straight-line pieces) model. It too allows for a rate change, but only a single, sudden rate change. Just when that change happens is one of the parameters of the model.

There are enough possible such models (enough “degrees of freedom”) to make the stats rather complicated, in particular the choice of changepoint time, the moment when the rate changes. But it can be done, and it turns out that the continuous piecewise-linear model is very powerful for detecting rate changes. It’s one of the main weapons in my arsenal when I go looking for that.
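To make the idea concrete, here is one simple way to fit such a two-piece model by brute force: try every candidate changepoint, fit a straight line plus a “hinge” term, and keep the changepoint that fits best. This is a sketch with made-up numbers, not the method behind the figures in this post, and the significance-testing caveat discussed below applies with full force.

```python
import numpy as np

def fit_continuous_pwl(t, y, candidates):
    """Grid-search fit of a two-piece continuous piecewise-linear model:
    y ~ a + b*t + c*max(t - tau, 0), choosing the changepoint tau that
    minimizes the residual sum of squares."""
    best = None
    for tau in candidates:
        hinge = np.clip(t - tau, 0.0, None)
        X = np.column_stack([np.ones_like(t), t, hinge])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ coef) ** 2)
        if best is None or rss < best[0]:
            best = (rss, tau, coef)
    rss, tau, (a, b, c) = best
    # slope is b before tau, and b + c after tau
    return tau, b, b + c, rss

# Illustrative fake data with a rate change around 1995 (not real data).
rng = np.random.default_rng(2)
t = np.arange(1970.0, 2020.0)
y = np.where(t < 1995, 0.005 * (t - 1970), 0.125 + 0.02 * (t - 1995))
y += rng.normal(0.0, 0.08, size=t.size)

tau, rate1, rate2, _ = fit_continuous_pwl(t, y, candidates=t[5:-5])
print(f"estimated changepoint: {tau:.0f}; rates {rate1:.4f} -> {rate2:.4f} per year")

# Caveat: because the changepoint is chosen to fit the data, naive
# significance tests of the rate change are badly biased; proper testing
# must account for that search.
```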

This is, essentially, another fundamental usefulness of linear regression, rooted in the fact that any function [for the pedantic: any continuous function on a finite interval] can be approximated as closely as we want by a continuous piecewise-linear function. For high precision it might require a lot of pieces, but it can be done.

And now we come to a drawback. The statistics of fitting multiple straight lines must be tested with great care; it’s oh so easy to get a result one thinks is significant (rate change!) which really isn’t (sorry!). The drawback is that although the piecewise linear model (continuous or not) is terrific for statistical testing of trend change when done right, it’s also so easily done wrong that it’s the source of far too many mistaken results published in the scientific literature. I’m not naming names.

Do bear in mind that the piecewise linear model is just one choice. There are polynomials, smoothing and averaging filters, splines; you can get exceptionally fancy if you want to (wavelets, singular spectrum analysis). But for reliability and power in testing whether rates have changed or not, those piecewise linear models are among the best.

Perhaps the best use of linear models (such as piecewise linear) is that they enable us to say when there is *no evidence* for a rate change. There are so many claims of rate change, often made unwisely (sometimes nefariously), that this is a very fundamental usefulness.

Sometimes, they even make good models. The piecewise linear model of global average temperature (data from NASA) shown in this post is one example. The model isn’t just useful, it’s competitive with other statistical models including some pretty fancy schmancy stuff. Let’s face it, sometimes things actually do follow a straight line very closely. Damn closely.


This blog is made possible by readers like you; join others by donating at My Wee Dragon.


15 responses to “Global Temperature: Why So Many Straight Lines?”

  1. Thomas Passin

    I recently found a Python module that does piecewise-linear fitting (pwlf.py). You tell it the number of segments to use, and it finds the break points. It seems to work pretty well on the global temperature data. What’s interesting is that up to six segments it does not find any break point between 1980 and the present (with more segments, it seems to get tricked by a large excursion in the early 1900s – HADCRUT data – but still doesn’t find a recent-year hiatus).

    So even with piecewise-linear approximations, no “hiatus”.

    One way to think about the use of linear segments vs. curved ones is this. How could you verify that a curve was a better fit than a linear segment? You’d need a small statistical uncertainty in the data to confirm the fits. But the climate data are so noisy that you’re lucky to get the straight-line fits to come out significant. You’re not likely to reduce the noise enough to justify the small differences between a piecewise-linear fit and various possible curved ones.
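For reference, the kind of pwlf usage described in the comment above looks roughly like this (a sketch with synthetic stand-in data, not the HADCRUT series; pwlf must be installed separately):

```python
import numpy as np
import pwlf

# Synthetic stand-in for a temperature series (illustrative only).
rng = np.random.default_rng(3)
years = np.arange(1950.0, 2020.0)
temps = 0.01 * np.clip(years - 1975, 0, None) + rng.normal(0, 0.1, years.size)

model = pwlf.PiecewiseLinFit(years, temps)
breakpoints = model.fit(3)        # ask for 3 segments; pwlf locates the breaks
print("break points:", breakpoints)
fitted = model.predict(years)     # piecewise-linear fit at each year
```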

  2. Hello Tamino,
    I read your response to my question on another thread and have been working through some things. However, I am stuck on the thing that you again mention above:

    ” A common choice is to fit a quadratic function of time, a model that allows rate change. Then we test the quadratic term (which is responsible for that rate change) for significance.”

    I have the ability to do a t-test. But I do not actually know what is meant by testing the quadratic term. What vector of values do I use?

    Again, any assistance would be appreciated.

    David

  3. Reblogged this on jpratt27 and commented:
    Time for climate action to reverse direction before we’re all cooked.

  4. Always good to step back and consider the conceptual frame and its implications. Thanks!

  5. Hello Tamino,

    I have worked it out. :)

    David

  6. Thank you. You answered my question from a few posts ago :-)

  7. Timothy (likes zebras)

    “To my mind, the idea that as far as global temperature goes the climate — the signal, not the noise — followed a perfect straight line, is ludicrous.”

    Is this ludicrous?

    The first-order approximation for global warming is that you have a relatively weak greenhouse gas that has increased in concentration gradually over a long period of time. Because it is relatively weak the year-to-year fluctuations in how fast its concentration is increasing have little effect.

    Then you have the Earth’s climate warming in response to this but, because the oceans are large and the heat capacity of water is also large, it takes a relatively long time (compared to monthly average measurements) for the climate to warm up in response to the accumulation of the relatively weak greenhouse gas.

    You really wouldn’t expect anything but a linear response. It certainly wouldn’t be ludicrous.

    And, really, isn’t that broadly what we’ve seen? There have been some minor deviations, most plausibly due to changes in the rate of volcanic activity, perhaps due to a change in solar output and also plausibly due to efforts to reduce sulphate emissions (though there may also be observational biases, particularly around WWII that create a spurious deviation from a linear trend).

    [Response: Emphasis on the word “perfect.” Yes there are many cases in which it’s close enough to linear that such encompasses even the essential physics. But perfect?]

  8. If you have some idea of the model the errors on your data follow, you can do a pretty good job comparing models using generalized linear models–which extend Maximum Likelihood methods to model comparison. If the models you are using have different numbers of parameters, you can account for this using information criteria (e.g. AIC, BIC…) to penalize the more complex model. This also allows you to assess the significance of the higher-order terms in your model (e.g. quadratic term, multiple line segments…).

    Finally, you can use the likelihood/AIC/BIC to develop weights for model averaging, if there is no reason to prefer one model over the others.

    As to linear fits vs. higher order – I look at it sort of the same way as a Taylor expansion – the linear term gives the first-order idea of what is happening. If you want higher-order terms, you need much more data.
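A minimal sketch of the information-criterion comparison described in the comment above, with toy numbers and the statsmodels library (one possible choice):

```python
import numpy as np
import statsmodels.api as sm

# Toy data (illustrative only): compare a straight-line model with a
# quadratic model using AIC, which penalizes the extra parameter.
rng = np.random.default_rng(4)
t = np.arange(50.0)
y = 0.02 * t + rng.normal(0, 0.1, t.size)

X1 = sm.add_constant(t)                                  # intercept + slope
X2 = sm.add_constant(np.column_stack([t, t**2]))         # + quadratic term
fit1 = sm.OLS(y, X1).fit()
fit2 = sm.OLS(y, X2).fit()

print(f"AIC linear    = {fit1.aic:.1f}")
print(f"AIC quadratic = {fit2.aic:.1f}")
# The smaller AIC is preferred; a difference of only a few units means the
# data don't clearly favor the more complex model.
```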

  9. Absent other forcings the human-caused warming trend will follow some form of sigmoid curve.

    Unless it runs away, but the physics suggests this is unlikely. Small comfort though, as humanity will perish long, long before there’s a Venus-like climate…

    • As a coda, I suspect that if the cooling trend of the last millennium, and the mid-20th century aerosol cooling effect, were both accounted for there’d be a sigmoid warming response in the post 1880 record sufficiently clear that even a naked eye might perceive it…

  10. Speaking of “hiatuses” I happened to be doing some reading on Super Bowl history and I noticed that there was another “highly significant hiatus” starting in 1998 :-o and running to 2006. In that time period heads only came up once! P-value here is about .006 so it MUST be a real hiatus. Or else there is some conspiracy within the NFL to bias coin flips!

    BTW the overall distribution is nearly even (25H, 27T).

  11. Whoops…I mistyped in my binom formula…the correct probability is p <= .02…of course the same "reasoning" applies.

    • In high school a buddy and I decided on a whim to investigate the laws of probability via prolonged coin tossing. But we didn’t know what to make of the time the coin ended up on edge–except maybe that reality could be nearly as messy as his room was.

      • Interestingly–have been watching the Patriots-Chiefs joust in a background sort of way–it seems very likely that the primary single determinant in that game was the coin toss at the beginning of overtime: that first overtime possession is sudden death for the defending team, and there is a pretty high probability of either Patrick Mahomes or Tom Brady being able to engineer a clutch touchdown drive. But the coin toss decreed that Brady got his chance to do so first.

        Lest that seem as if it’s ‘just football,’ reflect that that coin toss also was a prime determinant in driving patterns of US airline bookings this morning, and patterns of airline travel for the next two weeks, especially for the first Friday and Saturday of next month. Hartsfield-Jackson already turns out to be the busiest airport in the world in many years when they tally the numbers, but it’s going to be a bit extra-busy for a little while, presumably giving 2019 a bit of a boost in that regard.

        What may follow on from that, I can’t say. Chaos theory, anyone?

        Oh, and the winning call on the toss was “heads.” So maybe it’s lucky for him that the New England captain hadn’t read jgnfld’s comment on Super Bowl coin toss history. He might have picked “tails.”

    I agree with what you say, but just can’t resist being pedantic about the terminology. “Linear regression” can be used to fit any model that is linear in the parameters, not just straight lines. So the quadratic model and various other polynomials can also be fit by linear regression (according to the jargon in the fields I’ve been working in). Non-linear regression is required for models with more complex forms, e.g. y = a*exp(bx), unless you can linearize them (by taking logarithms in this example). However, the linearization distorts the underlying error structure in the data, which may affect any significance tests you try to do.

    Thanks for this blog combining two of my favourite topics; climate change and data analysis!
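To make the linearization point in the last comment concrete, here is a small sketch (toy numbers only) comparing a log-transformed linear fit of y = a*exp(bx) with a direct non-linear fit:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy example: fit y = a*exp(b*x) two ways (illustrative numbers).
rng = np.random.default_rng(5)
x = np.linspace(0.0, 10.0, 60)
y = 2.0 * np.exp(0.3 * x) * (1 + rng.normal(0, 0.05, x.size))  # multiplicative noise

# 1) Linearize: log(y) = log(a) + b*x, then ordinary linear regression.
b_lin, loga_lin = np.polyfit(x, np.log(y), 1)

# 2) Direct non-linear least squares on the original scale.
(a_nl, b_nl), _ = curve_fit(lambda x, a, b: a * np.exp(b * x), x, y, p0=(1.0, 0.1))

print(f"linearized fit: a = {np.exp(loga_lin):.3f}, b = {b_lin:.3f}")
print(f"non-linear fit: a = {a_nl:.3f}, b = {b_nl:.3f}")
# The two agree here because the noise is multiplicative; with additive
# noise the log transform distorts the error structure, as noted above.
```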