Any given time series, say July average temperature in the Moscow region since 1881, might exhibit both short-term (fleeting) and long-term (lasting) patterns of change. The longer-term behavior is certainly worth knowing about. It might be increasing, or decreasing, or not changing at all. It might have wiggled around a lot without really going anywhere until some new factor came into play. But whatever its pattern, we usually identify the longer-term pattern of change with the trend.
What we’re really after is the background level against which temperature variations have their sway. By “trend value” I mean exactly that: the background level at a given moment. If it changes while the nature of the fluctuations remains the same, the probability of record-setting extremes will of course change. When the background level is colder we’re more likely to get cold extremes, and when it’s hotter we’ll get more extreme heat. Pretty simple.
Some reserve the word “trend” for the linear trend. There is certainly value in knowing the linear trend; one can’t deny its utility, and it tells us about the longest-term behavior. If we write the book of a time series in polynomials, it is the first chapter, and the one most responsive to the longest time scales. But it is hardly the whole story. Sometimes it’s most of the story, and often it’s the most obvious part, yet there can be more, or less, than meets the eye.
Here for instance is that temperature data for Moscow:
It’s the data used by NASA GISS but “as is,” without the homogeneity adjustment which GISS estimates.
A linear trend gives a small but significant slope of 0.012 deg.C/yr:
If we restrict ourselves to a linear trend, then July temperature in Moscow has gone up by about 1.5 deg.C over the given time span. But, more relevant to the 2010 Moscow heat wave: if the linear trend represents the background level, then the observed 2010 value is fully 3.667 standard deviations above that background. That’s a very big fluctuation indeed. For a normal distribution [note: it doesn't follow the normal distribution], something that large or larger would only happen about once every eight thousand years, which makes it exceedingly rare. In fact it’s such an extreme fluctuation that it makes me doubt the “linear trend represents the background level” theory.
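The arithmetic behind those figures can be sketched as follows. The actual GISS series isn’t reproduced here, so this uses a synthetic stand-in with a similar slope and scatter (the 17.0 deg.C baseline and 1.5 deg.C noise level are assumptions); the z-value of 3.667 is taken from the analysis above, and the “eight thousand years” follows from the normal upper tail:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1881, 2011)

# Synthetic stand-in for the Moscow July series (the real data isn't
# embedded here): a 0.012 deg.C/yr trend plus year-to-year noise.
temps = 17.0 + 0.012 * (years - years[0]) + rng.normal(0.0, 1.5, years.size)

# Ordinary least-squares linear trend and its residual scatter
slope, intercept = np.polyfit(years, temps, 1)
resid = temps - (slope * years + intercept)
resid_sd = resid.std(ddof=2)
print(f"slope = {slope:.4f} deg.C/yr, residual sd = {resid_sd:.2f} deg.C")

# The 2010 value sits z = 3.667 residual standard deviations above the
# linear-trend background.  If fluctuations were normal, the chance of a
# value that large or larger would be P(Z > z) = erfc(z / sqrt(2)) / 2.
z = 3.667
p = 0.5 * math.erfc(z / math.sqrt(2.0))
print(f"P(Z > {z}) = {p:.2e}, i.e. roughly once per {1 / p:,.0f} years")
```

The tail probability comes out near 1.2e-4, i.e. a recurrence time of roughly eight thousand years, consistent with the figure quoted above.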
Statistics also tells us that the linear trend is not the whole story. If we add a quadratic term to our model trend then we find it too is statistically significant:
This quadratic fit might or might not be a good characterization of the “trend,” but regardless, it has revealed the presence of a non-linear trend in the data with statistical significance. It also suggests that most of the warming has happened recently. Because of that, the 2010 value is not quite so extreme: “only” 3.33 standard deviations above the background level. That’s still exceedingly rare, but not so rare as under the linear model.
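The significance test for the quadratic term can be sketched like this, again on a synthetic stand-in series (the coefficients and noise level are assumptions, not the fitted values from the real data). The t-statistic is the estimated quadratic coefficient divided by its standard error:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1881, 2011)
t = years - years.mean()              # center the predictor

# Stand-in series with a genuine quadratic (accelerating) component
temps = 17.0 + 0.008 * t + 0.0005 * t**2 + rng.normal(0.0, 1.0, t.size)

# Design matrix: intercept, linear term, quadratic term
X = np.column_stack([np.ones_like(t), t, t**2])
beta, *_ = np.linalg.lstsq(X, temps, rcond=None)
resid = temps - X @ beta
s2 = resid @ resid / (t.size - X.shape[1])    # residual variance
cov = s2 * np.linalg.inv(X.T @ X)             # coefficient covariance matrix
t_quad = beta[2] / np.sqrt(cov[2, 2])

# |t| well above ~2 means the quadratic term is significant at the 5% level
print(f"quadratic coefficient = {beta[2]:.2e}, t-statistic = {t_quad:.1f}")
```

Centering the year variable before squaring keeps the columns of the design matrix from being nearly collinear, which makes the coefficient estimates much better behaved.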
Of course we could try 3rd-degree (cubic) or 4th-degree (quartic) polynomials, or even higher. Let’s try all degrees from 1st (linear) through 10th, comparing models by AIC (Akaike Information Criterion). Here are the AIC values as a function of polynomial degree:
The winner is the model with lowest AIC, which is the 4th-degree (quartic) polynomial. It looks like this:
This too may or may not be such a good representation of the “trend,” but it is better (with statistical significance) than the linear and quadratic models. It suggests even more strongly that most of the increase has been recent. And, it reaffirms that the trend is not linear. It just isn’t.
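The degree-selection step above can be sketched as follows, using one common form of AIC for least-squares fits (n·log(RSS/n) + 2k). The series here is a synthetic stand-in whose true trend has a strong quartic shape, so that a mid-degree polynomial should win; the trend coefficients and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1881, 2011)
t = (years - years.mean()) / 50.0     # rescaled for numerical stability

# Stand-in series whose true trend has a strong quartic component
trend = 17.0 + 0.3 * t - 0.2 * t**2 + 0.1 * t**3 + 2.0 * t**4
temps = trend + rng.normal(0.0, 0.8, t.size)

def poly_aic(degree):
    """AIC of a least-squares polynomial fit of the given degree."""
    resid = temps - np.polyval(np.polyfit(t, temps, degree), t)
    rss = resid @ resid
    k = degree + 2                    # polynomial coefficients plus noise variance
    return t.size * np.log(rss / t.size) + 2 * k

aics = {d: poly_aic(d) for d in range(1, 11)}
best = min(aics, key=aics.get)
print("lowest AIC at degree", best)
```

Degrees below the true shape pay a large residual penalty; degrees above it reduce the residuals only slightly while paying the 2-per-parameter complexity penalty, which is how AIC steers toward a middle degree.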
Because of that non-linearity, the 2010 heat wave value isn’t as far above the background level at the time as it would have been if the climate hadn’t warmed. Under the quartic model, the 2010 value is “only” 2.89 standard deviations above the background level. That’s quite rare, but perhaps not rare enough to earn the adjective “exceedingly.” When we consider that the distribution of fluctuations is not normal, 2.89 standard deviations might be a notable but not implausible extreme, given the warmer background level observed in the 2000s.
There are other ways to estimate a nonlinear trend; just about any smoothing method can be used. My favorite is a modified lowess smooth, but if I do that some folks might accuse me of using some trick. They might call it some type of “highfalutin smoothing procedure” that “makes history irrelevant.”
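For the curious, here is the flavor of such a smoother. This is not the modified lowess used above, just a bare-bones local linear fit with tricube weights over a ~30-year window, applied to a synthetic stand-in series (the window half-width and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1881, 2011).astype(float)
temps = 17.0 + 0.012 * (years - years[0]) + rng.normal(0.0, 1.5, years.size)

def local_linear(x0, x, y, half_width=15.0):
    """Fitted value at x0 from a weighted linear fit over a ~30-year window,
    with tricube weights that fade smoothly to zero at the window edges."""
    d = np.abs(x - x0) / half_width
    w = np.clip(1.0 - d**3, 0.0, None) ** 3        # tricube weights
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                                  # fitted value at x0

smooth = np.array([local_linear(x, years, temps) for x in years])
print(f"raw sd = {temps.std():.2f}, smoothed sd = {smooth.std():.2f}")
```

Unlike a running mean, a local linear fit produces an estimate at every point of the record, including the endpoints, which is why such a smooth can cover the entire time span.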
So, I’ll also compute 30-year running means. That’s as basic as it gets; it sure can’t be called “highfalutin” (not with a straight face, anyway). We’ll call it the “low-falutin'” smooth. Heck, it ain’t even that smooth. Here it is:
Note that the most recent 30-year moving average is 0.56 deg.C warmer than any of the averages before 1980. The present value of the “trend” (i.e., the background level) is likely even higher now, because the moving averages don’t extend past the year 1999.
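The running mean itself is one line of numpy; the point about not reaching past 1999 falls out of the window geometry. With a centered 30-year window, the smoothed series necessarily loses about 15 years at each end of the record (data through 2013 assumed here, with a made-up stand-in series):

```python
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1881, 2014)                     # through 2013
temps = 17.0 + rng.normal(0.0, 1.5, years.size)   # stand-in series

window = 30
# Plain 30-year moving average; 'valid' keeps only windows that lie
# fully inside the record
running = np.convolve(temps, np.ones(window) / window, mode="valid")

# A centered window loses ~15 years at each end of the series
centers = years[window // 2 : window // 2 + running.size]
print(f"last centered average at {centers[-1]}")
```

With data ending in 2013, the last centered 30-year average lands at 1999, which is exactly why the low-falutin’ smooth stops there.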
Here’s the high-falutin’ smooth (in red), also on a 30-year time scale, which I’ll have the audacity to call the “non-linear trend” and which covers the entire time span, together with the low-falutin’ running means (in blue):
It suggests even more strongly that the trend is non-linear, and that the value over the last decade or so — the background level — has been notably higher than before. The non-linear trend estimate is also similar to the quartic polynomial model which was selected by AIC:
Both models agree that the value now is notably higher than it was before.
In fact “notably higher recently than it was before” seems to be the common thread of non-linear trend models for these data. Let’s try the ultimate in such models, a simple step function. The model which fits best is that with a change starting in 1999, thus:
It too is statistically significant, and estimates the recent increase in temperature to be 2.38 deg.C. This can be considered statistical confirmation that the recent decade-and-a-half really has been warmer than those which came before it, and that the difference is enough to cause a notable increase in the chance of such an extreme as was seen in 2010.
For the coup de grâce: this statistical confirmation of a higher recent background level doesn’t depend on the extreme record-breaking 2010 value. Even if we omit that value from the analysis, the best-fit step-function model still takes its step in 1999, is still (strongly) statistically significant, and indicates a background level 2.0 deg.C warmer than before.
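Fitting a step function is a simple brute-force exercise: for every candidate break year, fit a separate mean before and after, and keep the break that minimizes the residual sum of squares. A sketch on a synthetic stand-in series with a known step (the step size, noise level, and baseline are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1881, 2011)

# Stand-in series: flat background with a +2.4 deg.C step starting in 1999
true_break = 1999
temps = 17.0 + 2.4 * (years >= true_break) + rng.normal(0.0, 0.5, years.size)

def step_rss(brk):
    """Residual sum of squares of a two-level model breaking at `brk`."""
    before, after = temps[years < brk], temps[years >= brk]
    return ((before - before.mean()) ** 2).sum() + \
           ((after - after.mean()) ** 2).sum()

# Try every candidate break year, keeping a few years on each side
candidates = years[5:-5]
best = min(candidates, key=step_rss)
jump = temps[years >= best].mean() - temps[years < best].mean()
print(f"best-fit step at {best}, size {jump:.2f} deg.C")
```

With a step this large relative to the noise, the search recovers the break year and the step size; in real data one would also want a significance test against the no-step model before trusting the result.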
Just for perspective, here are all three nonlinear models of the trend in Moscow July temperature: the non-linear trend from the smooth (in red), the quartic polynomial (in blue), and the step function (in black):
All the non-linear models show strong recent warming. Furthermore, it’s not just the inevitable wiggles of a smoothing method: it’s a statistically significant pattern according to both the quartic-polynomial and step-function models. I suspect some mixture of the smooth and the step-function model is likely the better estimate of the genuine trend, i.e. the background level.
The recent warming makes quite a difference in the extremity of the 2010 heat wave. What was a 3.667-standard-deviation extremity under the linear model is only 3.06 standard deviations above the mean in the non-linear (smoothed) model, only 2.89 according to the quartic model, and only 3.09 by the step-function model. In all three cases the 2010 heat wave is seen to be very unlikely, but far less unlikely than if there had been no recent warming of the background level. Note that this conclusion holds for the non-linear trends; the linear trend model makes 2010 a much more unlikely extreme. If we add the linear model to the previous graph (as a green line), and add error bars to the step-function model, we get this:
All non-linear models agree on higher post-1999 temperatures, but the linear model estimates a value for that time span which is clearly too low (note that it does the same thing for the pre-1890 temperatures).
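The mechanics behind all of those “N standard deviations above background” comparisons are the same: subtract each model’s trend value at 2010 from the observed value, then divide by the residual standard deviation. The numbers below are placeholders chosen only to land near the quoted z-scores, not the actual fitted values from the analysis:

```python
# Hypothetical numbers for illustration only -- NOT the actual fitted
# values from the analysis above.  The mechanics are what matter:
# z = (observed - background) / residual_sd for each trend model.
observed_2010 = 24.5      # assumed observed July 2010 mean, deg.C
residual_sd = 1.6         # assumed residual standard deviation, deg.C

backgrounds = {           # assumed 2010 trend value under each model
    "linear": 18.6,
    "quartic": 19.9,
    "step": 19.6,
}

for name, bg in backgrounds.items():
    z = (observed_2010 - bg) / residual_sd
    print(f"{name:8s}: {z:.2f} standard deviations above background")
```

The ordering is the whole point: the higher a model places the 2010 background level, the less extreme the observed value looks relative to it.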
One could argue that for all the nonlinear models, what happened early in the observed time span (say, prior to 1940) makes almost no difference to the estimated trend value in 2010, so using a non-linear model for that estimate “makes history irrelevant.” One would be right. And that is exactly as it should be. To get it right, one cannot ignore the non-linearity of the trend, by which I mean the very background level we’re seeking to understand. As for history, the temperature in Moscow one million years ago just doesn’t help us estimate the background level in 2010, and frankly, neither does the temperature in 1940.
Given the statistical soundness of the nonlinear trends, and the failure of the linear model to capture the recent warming, I would say that if one wishes to know the background level of temperature in the Moscow region in July, in order to estimate how likely or unlikely the 2010 extreme was, it would behoove one to use a non-linear model.