A new paper by Fyfe et al. speaks with apparent certainty of a “slowdown” in the rise of global mean surface temperature (GMST). What it doesn’t give is any real evidence of it.
The thrust of the analysis is to compute trends of overlapping 15-year intervals, and compare them to trends of overlapping 30-year and 50-year intervals. The extent of statistical evidence for a slowdown seems to be limited to this:
“In all three observational datasets the most recent 15-year trend (ending in 2014) is lower than both the latest 30-year and 50-year trends. This divergence occurs at a time of rapid increase in greenhouse gases (GHGs). A warming slowdown is thus clear in observations; …”
The three observational data sets are from HadCRU, NOAA, and NASA.
There’s practically no actual analysis of the trends that are computed. For instance, there’s no mention of their estimated uncertainty. But such estimates are crucial to determining whether observed differences in trends over different time intervals have any meaning beyond inevitable random fluctuation.
I computed the trend (by linear regression) of overlapping 15-year time intervals, and computed their uncertainty using a white-noise model (which underestimates the uncertainty, but not by much when using annual averages). Here are the trends for NASA data (GISTEMP) from Fyfe et al.:
The black line shows 15-year overlapping trends, the red line 30-year, and the blue line 50-year overlapping trends. The shading is plus to minus one standard deviation of the 15-year overlapping trends from the CMIP-5 simulations.
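The overlapping-trend computation described above is easy to sketch in Python. This is a minimal illustration using synthetic annual anomalies, not real GISTEMP data; the assumed 0.017 °C/yr trend and noise level are hypothetical values chosen only to mimic the general character of the series.

```python
import numpy as np

def window_trends(years, anom, width=15):
    """Slope and white-noise standard error of the slope for each
    overlapping window of length `width`. As noted in the text, the
    white-noise model slightly underestimates the true uncertainty
    for annual averages."""
    ends, slopes, ses = [], [], []
    for i in range(len(years) - width + 1):
        x = years[i:i + width].astype(float)
        y = anom[i:i + width]
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        s2 = resid @ resid / (width - 2)          # residual variance (2 params fit)
        se = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))
        ends.append(years[i + width - 1])
        slopes.append(slope)
        ses.append(se)
    return np.array(ends), np.array(slopes), np.array(ses)

# Illustrative synthetic series (NOT real data): steady trend plus noise
rng = np.random.default_rng(0)
yrs = np.arange(1970, 2015)
temps = 0.017 * (yrs - 1970) + rng.normal(0, 0.09, yrs.size)
ends, tr, se = window_trends(yrs, temps)
```

Plotting `tr` with `tr ± 1.96*se` against `ends` gives exactly the kind of 15-year trend curve with 95% confidence band shown in the figures below.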
Here are the 15-year overlapping trends, but without the clutter of other results, and with a 95% confidence interval for the 15-year trends added as dashed lines:
The horizontal red line marks the average trend since 1970, about the time that the trend changed to its modern value according to change-point analysis.
Two things should be noted. First, since about 1975 all of the confidence intervals for 15-year trends include the since-1970 trend value, except for a single one for which the confidence interval is higher — not lower — than the since-1970 trend. That’s extremely powerful evidence against the presence of a “slowdown.” Second, that single extra-fast “speedup” excursion isn’t real evidence of a speedup, because so many intervals are tested; there are so many chances to exceed the 95% confidence limits for a single interval, that such an excursion is no surprise, in fact it’s to be expected.
Just that — the expected exceedance when you have so many possibilities to try, so many chances, that you can’t rely on the usual statistics — is the very reason that change-point analysis is the thing to do. It happens that in this particular case, even without allowing for the subtleties of change-point analysis, there’s still no evidence of recent slowdown, just the barest minor hint (albeit not really significant) of a tiny speedup.
Change-point analysis is not just the thing to do, it’s the thing that was done in Cahill et al. They too studied multiple data sets and found no evidence of a slowdown (let alone a “pause” or “hiatus”) in any of them. A similar approach was taken by Foster & Abraham, who didn’t test multiple data sets but did apply more than just change-point analysis: a suite of statistical tests looking for evidence of that elusive slowdown. It couldn’t be found.
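For concreteness, here is a minimal sketch of one simple flavor of change-point fitting: a continuous piecewise-linear model with a single breakpoint, scanned over candidate break years. This is my own toy construction, not the method of Cahill et al.; a real change-point test would also compare the fit improvement against its null distribution rather than simply minimizing the sum of squared errors.

```python
import numpy as np

def best_changepoint(x, y, min_seg=10):
    """Fit y ~ a + b*x + c*max(x - cp, 0) for each candidate breakpoint cp,
    keeping at least `min_seg` points on each side. Returns the cp with the
    smallest SSE, that SSE, and the SSE of a plain straight-line fit."""
    X1 = np.column_stack([np.ones_like(x), x])
    sse_straight = np.linalg.lstsq(X1, y, rcond=None)[1][0]
    best_cp, best_sse = None, np.inf
    for cp in x[min_seg:-min_seg]:
        hinge = np.maximum(x - cp, 0.0)            # slope change after cp
        X = np.column_stack([np.ones_like(x), x, hinge])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = res[0] if res.size else np.sum((y - X @ beta) ** 2)
        if sse < best_sse:
            best_cp, best_sse = cp, sse
    return best_cp, best_sse, sse_straight

# Noiseless toy series with a genuine kink (hypothetical numbers): flat,
# then warming at 0.018 °C/yr starting in 1985
yrs = np.arange(1950, 2015, dtype=float)
y = np.where(yrs < 1985, 0.0, 0.018 * (yrs - 1985))
cp, sse_broken, sse_straight = best_changepoint(yrs, y)
```

On this noiseless toy series the search recovers the 1985 kink exactly; with realistic noise the point of the full method is to ask whether the improvement over the straight line is larger than noise alone would produce.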
The most visually suggestive of their trend graphs is that for the HadCRU data:
Note the dip in the warming rate near the end; here’s the same thing but with a couple of extra years at the end and 95% confidence limits added:
Since about 2001 the confidence interval has dipped below the since-1970 rate. But, as said before, when a lot of different intervals are tested (as here) it’s really not a surprise if some of them exceed the usual confidence limits (hence the need for change-point analysis). It’s not even a surprise that (using the extra data) in my graph the last seven 15-year intervals all have trends below the since-1970 trend: they’re not independent, because two consecutive overlapping 15-year intervals share 14 years in common. If you want more visceral evidence, consider the trend rates for 15-year overlapping intervals here:
Note that there are multiple excursions (four of them) of the confidence intervals outside the true trend rate, and there’s even a stretch of 15 consecutive 15-year intervals all on the same side of the true rate — which we know is zero, because these are artificial data from a random-number generator. It illustrates just how easy it is to get apparent evidence of a trend change when there is none. And I didn’t run a bunch of random simulations until I found one that did so; this was the very first (and only) such experiment, and it happened “right out of the box.”
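The white-noise experiment just described is easy to reproduce. Here’s a sketch with an assumed series length and noise level (my own choices, not the exact setup behind the figure): generate pure white noise, whose true trend is exactly zero, and count how many overlapping 15-year trends have 95% confidence intervals that exclude zero.

```python
import numpy as np

def excursion_count(n_years=100, width=15, sigma=1.0, seed=1):
    """Pure white noise (true trend = 0): count overlapping windows whose
    95% confidence interval for the slope excludes the true value."""
    rng = np.random.default_rng(seed)
    y = rng.normal(0, sigma, n_years)
    x = np.arange(n_years, dtype=float)
    count = 0
    for i in range(n_years - width + 1):
        xi, yi = x[i:i + width], y[i:i + width]
        slope, intercept = np.polyfit(xi, yi, 1)
        resid = yi - (slope * xi + intercept)
        se = np.sqrt(resid @ resid / (width - 2)
                     / np.sum((xi - xi.mean()) ** 2))
        if abs(slope) > 1.96 * se:   # CI excludes the true (zero) trend
            count += 1
    return count

n_spurious = excursion_count()
```

With 86 overlapping windows tested at the 5% level, several spurious “significant” trend excursions per realization are entirely typical, even though nothing about the underlying process ever changed.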
So why do they speak so confidently of a slowdown? It seems to be based on the fact that “the most recent 15-year trend (ending in 2014) is lower than both the latest 30-year and 50-year trends.” But that’s not really evidence. The most recent 15-year trend would have to be enough lower to be meaningful, and my analysis says it isn’t (as does the published research by Cahill et al. and by Foster & Abraham). To know whether it’s “enough” lower, you must at the very least compute (and report) the associated uncertainties, which wasn’t done.
To sum up: I don’t find their evidence of the reality of a slowdown at all convincing — because I couldn’t really find their evidence.
That doesn’t mean there isn’t value in this paper; in fact I think there’s a great deal. They discuss such important issues as the nature of decadal variation, the influence of exogenous factors like volcanic eruptions and solar variations, and in particular ocean-atmosphere interactions. Understanding those is of great value; in fact I suspect it will be indispensable for furthering our ability to know what to expect in the future. They also highlight the nature of the divergence of observed surface temperature from model trends, an understanding of which we can’t really do without.
They do point out (justifiably, I’d say) the flaw of Karl et al. and Rajaratnam et al. in using “since-1950” (rather than “since-1970”) as a benchmark for deciding whether the trend has changed recently. I’ve made the same criticism myself. Unfortunately, they don’t discuss the results of Cahill et al. or of Foster & Abraham, which I regard as a serious omission.
I do recommend a careful reading of this paper; it’s worth taking the time to digest it and incorporate it into our picture of the global climate and mankind’s influence on it. It’s a manifestation of how investigating the possibility of a slowdown is bound to broaden our perspective, and ultimately our understanding, of what influences global temperature and how.
But when it comes to a “slowdown” in global surface temperature, to the actual evidence required to claim it with confidence, I’m still waiting.
If you like what you see, feel free to donate at Peaseblossom’s Closet.