The subject came up, yet again, of whether or not there has been a recent “pause” in global warming. Specifically:
Re looking at global warming over the last 15 years. I know it breaks all the “statistical” rules, but just simply eyeballing the graph of (annual average) temperatures shows a distinct change in the trend of global temperatures after 1998. In order to avoid a charge of “cherry picking” I could have said 10 years since 2002 – but to me the break in the trend is pretty clear.
We’ve often dealt with this subject, but since it’s so common, and seems to come from real skeptics as well as fake ones, we’ll address it once again. In depth.
Scientifically the best answer is to look at the other factors which are known to affect temperature and remove an estimate of their influence, the better to isolate the trend from the fluctuations. This was done here. Frankly, it’s just plain impossible (really — it’s impossible) that these known factors do not affect global temperature. Their influence over the last decade was examined here. If you haven’t already read that post, it’s a good idea to do so.
But the best scientific answer isn’t always the most persuasive for the nonscientist. So for the moment, let’s ignore all that stuff, not try to account for other (known) factors, just look at the temperature data au naturel.
We’ll examine three global temperature records: GISS (from NASA), because I believe it does the best job estimating the Arctic (which is just about the fastest-warming region on earth); HadCRUT3v, because the CRU record was specifically mentioned and unfortunately the new, improved HadCRUT4 doesn’t yet go beyond 2010; and UAH, because even though it doesn’t start until 1979, it’s a satellite record, and since it’s the work of Christy & Spencer at UAH, nobody can even suspect that they’ve “doctored” the record to inflate global warming.
We’ll start with GISS. Here’s the whole thing:
Some things are easy to see and are actually correct — like the overall rise, and the consistent increase since 1975. But to many eyes there seems to be (at least possibly) a levelling off recently. This is more visually apparent if we just plot the post-1975 data:
Of course, there are a lot of ups and downs that nobody takes seriously as a trend reversal. The speedy decline from 1998 to 1999, for instance, covers about 0.4 deg.C, but it’s such a short-term fluctuation that everybody sees it for what it is — a short-term fluctuation. But what about all that post-1998 behavior: is it one of those natural fluctuations that can so easily fool the eye, or is it a genuine sign of trend reversal?
Let’s see how the behavior pre-1998 really compares to that post-1998. We’ll use just the data from 1975 to 1998 to estimate the trend, then we’ll extrapolate that trend up to the present. Here’s the result (estimated trend in red, extrapolated trend as a dashed line in blue):
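The fit-and-extrapolate step is simple enough to sketch in code. Everything numerical below is fabricated for illustration (an assumed trend and noise level standing in for the actual GISS series); only the mechanics matter:

```python
import numpy as np

# Fabricated stand-in for the GISS annual anomalies: an assumed
# 0.017 deg C/yr trend plus white noise (these numbers are made up).
rng = np.random.default_rng(0)
years = np.arange(1975, 2013)
anomaly = 0.017 * (years - 1975) + rng.normal(0.0, 0.1, years.size)

# Fit a straight line to the pre-1998 data only...
pre = years < 1998
slope, intercept = np.polyfit(years[pre], anomaly[pre], 1)

# ...then extend that trend line across the whole record for comparison.
extrapolated = slope * years + intercept
print(f"pre-1998 trend: {slope * 10:.3f} deg C per decade")
```

Plotting `anomaly` together with `extrapolated` reproduces the red-line/dashed-line comparison in the figures.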
Now let’s use the data from 1998 to now to estimate the more recent trend, and see how it compares to the extrapolation (recent trend estimate also in red):
Interesting! The estimated trend rate post-1998 is less than the estimate for pre-1998, but then we already knew there’d be random fluctuation. But the mean value of the post-1998 stuff is well above what the pre-1998 trend would have predicted. Even if the trend rate decreased, there’s no basis to say that it actually cooled, not even relative to what was expected from the pre-existing trend. In fact, if I estimate the trend for the entire time span 1975 to now, it has a higher warming rate than either the pre-1998 or post-1998 sections (solid blue line):
Ok — so it sure didn’t cool relative to what was expected. But did the trend rate actually decrease after 1998, or not? I honestly don’t know how to give major visual impact to the answer — I can just crunch the numbers. I did so, computing the trend 1975-now, 1975-1998, and 1998-now, as well as 95% confidence intervals for each. Here they are:
Answer: there is no evidence that the trend rate was any different after 1998 than before.
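For the curious, here’s roughly how such a comparison goes. This is a deliberately simplified sketch: it applies only an AR(1) correction for autocorrelation, inflating the naive standard error by sqrt((1+r)/(1-r)) where r is the lag-1 autocorrelation of the residuals, whereas the analysis above uses an ARMA(1,1) model. The demo data are fabricated.

```python
import numpy as np

def trend_with_ci(years, temp):
    """OLS trend with a 95% CI widened for AR(1) autocorrelation.

    Simplified sketch: inflates the naive standard error by
    sqrt((1+r)/(1-r)), with r the lag-1 autocorrelation of the
    residuals.  (A fuller analysis would use ARMA(1,1).)
    """
    x = years - years.mean()
    slope = np.sum(x * (temp - temp.mean())) / np.sum(x**2)
    resid = temp - temp.mean() - slope * x
    # naive (white-noise) standard error of the slope
    se = np.sqrt(np.sum(resid**2) / (len(x) - 2) / np.sum(x**2))
    # lag-1 autocorrelation of the residuals
    r = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    se *= np.sqrt((1 + r) / (1 - r))  # effective-sample-size inflation
    return slope, slope - 1.96 * se, slope + 1.96 * se

# toy usage with fabricated data (assumed trend and noise level)
rng = np.random.default_rng(1)
yrs = np.arange(1975, 2013).astype(float)
tmp = 0.017 * (yrs - 1975) + rng.normal(0.0, 0.1, yrs.size)
b, b_lo, b_hi = trend_with_ci(yrs, tmp)
print(f"trend: {b:.4f} deg C/yr, 95% CI [{b_lo:.4f}, {b_hi:.4f}]")
```

Comparing the resulting intervals for 1975–1998, 1998–now, and 1975–now is the “do the error bars overlap?” test behind the figure above.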
OK that’s GISS. How about CRU? Here’s the HadCRUT3v data since 1975:
Let’s do the same thing we did with GISS. Here’s the pre-1998 trend (in red) extrapolated to now (blue dashed):
And here’s the estimated post-1998 trend (also red):
The trend rate sure looks different — but looks can be deceiving. But notice that once again, the mean value isn’t lower than was predicted by extrapolating the trend, it’s higher (but not by a statistically significant amount). And once again, the trend over the whole time span is faster than either segment separately (solid blue):
In the mean, it sure didn’t cool off or even slow down. But what about the rates? Here they are:
Once again there’s no statistically significant disagreement. It’s “flirting” with significance, but not as much as it might look — remember there’s uncertainty in all three estimates. And don’t forget: CRU is known to underestimate the global trend — especially recently — because it leaves out the fastest-warming region on the planet, the Arctic.
Lest we neglect satellite data, here’s the record from UAH:
We’ll do the same thing again. Here’s the pre-1998 trend (red) extrapolated to the present (dashed blue):
Here’s the post-1998 trend superimposed (also red):
What a difference! The trend rates are about the same, but the mean value is a lot higher. Really — no cooling there. And again, the overall trend is faster than either subsection (solid blue):
As for comparing trend rates, here ya go:
No evidence of a difference.
What’s going on here? All three records share these properties:
- No significant difference in trend rate pre-1998 to post-1998
- Mean value post-1998 at least as high as predicted by extrapolating pre-1998 trend
- Higher rate 1975-now than either subsection separately
Why is that, really? It’s because the separation time, 1998, was chosen precisely because it gives the visual impression of a change — and it gives that impression because of the extreme warmth of 1998, which in turn was due to the monster el Niño that year (one of those “known factors”). That makes it worth investigating, but it also means we should expect lower trend rates both before and after: before, because that interval ends just prior to a high point, and after, because that interval starts with a high point. A monster of a high point.
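The effect of splitting at a high point is easy to demonstrate with a toy Monte Carlo. Everything here is fabricated — a steady assumed trend plus white noise, with a one-year spike standing in for the 1998 el Niño — and the point is only that the post-spike segment’s estimated trend comes out low on average even though the underlying trend never changes:

```python
import numpy as np

# Toy Monte Carlo: 38 "years" of a steady 0.017 deg/yr trend plus
# white noise, with a large one-year spike injected at year index 23
# (a stand-in for 1998).  Split the record at the spike and compare
# the post-spike segment's trend to the full-record trend.
rng = np.random.default_rng(0)
true_slope, n, k, trials = 0.017, 38, 23, 2000
t = np.arange(n, dtype=float)
fit_slope = lambda tt, yy: np.polyfit(tt, yy, 1)[0]

full_slopes, after_slopes = [], []
for _ in range(trials):
    y = true_slope * t + rng.normal(0.0, 0.1, n)
    y[k] += 0.4                               # the monster high point
    full_slopes.append(fit_slope(t, y))
    after_slopes.append(fit_slope(t[k:], y[k:]))  # segment starts at the spike

print("mean full-record trend:", np.mean(full_slopes))
print("mean post-spike trend: ", np.mean(after_slopes))
```

On average the post-spike trend estimate is biased well below the full-record estimate, purely because the segment begins at an unusually warm point.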
But seriously, can random fluctuations really create a 15-year time span with a negative (albeit not significantly so) trend estimate? I generated some artificial data to mimic the behavior of the CRU data. This is complicated because the data show strong autocorrelation, and it’s not even the simplest kind of autocorrelation (AR(1) noise) — it’s a lot closer to ARMA(1,1) noise. So I ran up 100 years of ARMA(1,1) noise with the same autocorrelation as HadCRUT3v data, gave it the same standard deviation as HadCRUT3v data, and added the same trend as HadCRUT3v data. I didn’t do lots and lots of data sets until I got what I wanted — I just did a single run of 100 years. Lo and behold, it had a time span with a negative trend estimate, not just of 15 years, but 18 years long:
Yes. Random noise really can create 15-year (or longer) stretches with negative (albeit not significantly so) estimated trend rate.
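Here’s a sketch of that kind of experiment. The ARMA(1,1) parameters, noise level, and trend below are illustrative guesses, not the values actually fitted to HadCRUT3v, and which stretches come out negative depends on the random realization:

```python
import numpy as np

# Assumed (not fitted) monthly ARMA(1,1) parameters and trend.
rng = np.random.default_rng(7)
n_months = 100 * 12
phi, theta, sigma = 0.85, -0.4, 0.1   # illustrative ARMA(1,1) parameters
trend = 0.017 / 12                    # deg C per month

# Generate ARMA(1,1) noise: x[t] = phi*x[t-1] + e[t] + theta*e[t-1]
e = rng.normal(0.0, sigma, n_months)
x = np.zeros(n_months)
for i in range(1, n_months):
    x[i] = phi * x[i - 1] + e[i] + theta * e[i - 1]
series = trend * np.arange(n_months) + x

# Scan for the longest window with a negative OLS trend estimate.
months = np.arange(n_months, dtype=float)
longest = 0
for length in range(24, n_months + 1, 12):        # 2, 3, ... year windows
    for start in range(0, n_months - length + 1, 12):
        s = np.polyfit(months[start:start + length],
                       series[start:start + length], 1)[0]
        if s < 0:
            longest = max(longest, length)
print("longest negative-trend stretch:", longest // 12, "years")
```

With strong autocorrelation, long spans of flat-or-negative estimated trend can appear even though the underlying trend is steadily positive throughout.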
I hope you’ve enjoyed the free meal, but now it’s time to pay for the soup, meaning, endure my little “lecture” (get the Star Trek reference?).
This analysis isn’t simple. Far from it. We compared what would have been expected to what happened. We estimated the uncertainty in estimates. Even that wasn’t simple — we had to compensate for autocorrelation, and not the simplest kind of that. We even briefly contemplated the reasons for the results, both statistical (how does splitting the data at a monster high point affect things?) and physical (what’s the impact of leaving out the fastest-warming region on earth?). But hey, if you really want to get at the truth, if you aspire to deeper understanding than can be had just scratching the surface, that’s what you have to do. It’s called “science.” It works, bitches.
It also takes work.
We didn’t just say “Hey look — it sure looks like the trend changed!” That’s a very natural thing to do. It’s an extremely persuasive argument with non-scientists! A vast amount of scientific experience has shown that it’s also a great way to get the wrong answer.
But for some people, it’s the only approach they’ll accept. That’s not because they couldn’t understand the science if it were properly explained in layman’s terms. It’s because they’re not willing. Do you really think that James Inhofe will invest the effort required to plumb this issue to its depth? Even if he did, do you really think he’d believe it? Or would he refuse to budge from “Hey look!”?