No Slowdown

A new paper by Fyfe et al. speaks with apparent certainty of a “slowdown” in the rise of global mean surface temperature (GMST). What it doesn’t give is any real evidence of it.


The thrust of the analysis is to compute trends of overlapping 15-year intervals, and compare them to trends of overlapping 30-year and 50-year intervals. The extent of statistical evidence for a slowdown seems to be limited to this:


“In all three observational datasets the most recent 15-year trend (ending in 2014) is lower than both the latest 30-year and 50-year trends. This divergence occurs at a time of rapid increase in greenhouse gases (GHGs). A warming slowdown is thus clear in observations; …”

The three observational data sets are from HadCRU, NOAA, and NASA.

There’s practically no actual analysis of the trends that are computed. For instance, there’s no mention of their estimated uncertainty. But such estimates are crucial to determining whether observed differences in trends over different time intervals have any meaning beyond inevitable random fluctuation.

I computed the trends (by linear regression) of overlapping 15-year time intervals, and computed their uncertainties using a white-noise model (which underestimates the uncertainty, but not by much when using annual averages). Here are the trends for NASA data (GISTEMP) from Fyfe et al.:

[Figure: Fyfe_nasa]

The black line shows 15-year overlapping trends, the red line 30-year, and the blue line 50-year overlapping trends. The shading is plus to minus one standard deviation of the 15-year overlapping trends from the CMIP-5 simulations.
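
For anyone who wants to reproduce this, here’s a minimal R sketch of the computation. The names yr and temp are placeholders for a vector of years and the corresponding annual anomalies; this is an illustration, not the exact code behind the figures.

```r
## Overlapping trends with white-noise standard errors (a sketch).
overlapping_trends <- function(yr, temp, width = 15) {
  n <- length(yr) - width + 1
  out <- data.frame(mid = numeric(n), trend = numeric(n), se = numeric(n))
  for (i in seq_len(n)) {
    idx <- i:(i + width - 1)
    fit <- lm(temp[idx] ~ yr[idx])
    cs  <- coef(summary(fit))        # row 2: slope estimate and std. error
    out[i, ] <- c(mean(yr[idx]), cs[2, 1], cs[2, 2])
  }
  out
}

# tr <- overlapping_trends(yr, temp)
# Approximate 95% confidence limits: tr$trend +/- 1.96 * tr$se
```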

Here are the 15-year overlapping trends, but without the clutter of other results, and with a 95% confidence interval for the 15-year trends added as dashed lines:

[Figure: nasa]

The horizontal red line marks the average trend since 1970, about the time that the trend changed to its modern value according to change-point analysis.

Two things should be noted. First, since about 1975 all of the confidence intervals for 15-year trends include the since-1970 trend value, except for a single one whose confidence interval lies above, not below, the since-1970 trend. That’s extremely powerful evidence against the presence of a “slowdown.” Second, that single extra-fast “speedup” excursion isn’t real evidence of a speedup, because so many intervals are tested; with so many chances to exceed the 95% confidence limits for a single interval, such an excursion is no surprise. In fact, it’s to be expected.

Just that — the expected exceedance when you have so many possibilities to try, so many chances, that you can’t rely on the usual statistics — is the very reason that change-point analysis is the thing to do. It happens that in this particular case, even without allowing for the subtleties of change-point analysis, there’s still no evidence of recent slowdown, just the barest minor hint (albeit not really significant) of a tiny speedup.

Change-point analysis is not just the thing to do, it’s the thing that was done in Cahill et al. They too studied multiple data sets and found no evidence of a slowdown (let alone a “pause” or “hiatus”) in any of them. A similar approach was taken by Foster & Abraham, who didn’t test multiple data sets but did apply more than just change-point analysis: they applied a suite of statistical tests looking for evidence of that elusive slowdown. It couldn’t be found.

The most visually suggestive of their trend graphs is that for the HadCRU data:

[Figure: Fyfe_cru]

Note the dip in the warming rate near the end; here’s the same thing but with a couple of extra years at the end and 95% confidence limits added:

[Figure: cru]

Since about 2001 the confidence interval has dipped below the since-1970 rate. But, as said before, when a lot of different intervals are tested (as here) it’s really not a surprise if some of them exceed the usual confidence limits (hence the need for change-point analysis). It’s not even a surprise that (using the extra data) in my graph the last seven 15-year intervals have trends below the since-1970 trend. They’re not independent, because two consecutive overlapping 15-year intervals share 14 years in common. If you want more visceral evidence, consider the trend rates for 15-year overlapping intervals here:

[Figure: random]

Note that there are multiple excursions (four of them) of the confidence intervals outside the true trend rate, and there’s even a stretch of 15 consecutive 15-year intervals which all lie on the same side of the true rate, a rate we know is zero because these are artificial data from a random-number generator. It illustrates just how easy it is to get apparent evidence of a trend change when there is none. And I didn’t run a bunch of random simulations until I found one that did so; this was the very first (and only) such experiment. It happened “right out of the box.”
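
That experiment is easy to re-run. Here’s a minimal R sketch, reusing overlapping_trends() from the sketch above (the seed and noise level are arbitrary; the count is scale-invariant):

```r
## White-noise experiment: overlapping 15-year trends of pure noise.
set.seed(1)                          # arbitrary; the point holds "out of the box"
yr   <- 1900:2015
temp <- rnorm(length(yr), sd = 0.1)  # true trend is exactly zero
tr <- overlapping_trends(yr, temp)

# How many approximate 95% intervals exclude the true (zero) trend?
sum(abs(tr$trend) > 1.96 * tr$se)
```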

So why do they speak so confidently of a slowdown? It seems to be based on the fact that “the most recent 15-year trend (ending in 2014) is lower than both the latest 30-year and 50-year trends.” But that’s not really evidence. The most recent 15-year trend would have to be enough lower to be meaningful, and my analysis says it isn’t (as does the published research by Cahill et al. and by Foster & Abraham). To know whether it’s “enough” lower, you must at the very least compute (and report) the associated uncertainties, which wasn’t done.

To sum up: I don’t find their evidence of the reality of a slowdown at all convincing — because I couldn’t really find their evidence.

That doesn’t mean there isn’t value in this paper; in fact I think there’s a great deal. They discuss such important issues as the nature of decadal variation, the influence of exogenous factors like volcanic eruptions and solar variations, and in particular ocean-atmosphere interactions. Understanding those is of great value; in fact I suspect it will be indispensable for furthering our ability to know what to expect in the future. They also highlight the nature of the divergence of observed surface temperature from model trends, an understanding of which we can’t really do without.

They do point out (justifiably, I’d say) the flaw of Karl et al. and Rajaratnam et al. in using “since-1950” (rather than “since-1970”) as a benchmark for deciding whether the trend has changed recently. I’ve made the same criticism myself. Unfortunately, they don’t discuss the results of Cahill et al. or of Foster & Abraham, which I regard as a serious omission.

I do recommend a careful reading of this paper; it’s worth taking the time to digest it and incorporate it into our picture of the global climate and mankind’s influence on it. It’s a manifestation of how investigating the possibility of a slowdown is bound to broaden our perspective, and ultimately our understanding, of what influences global temperature and how.

But when it comes to a “slowdown” in global surface temperature, to the actual evidence required to claim it with confidence, I’m still waiting.


If you like what you see, feel free to donate at Peaseblossom’s Closet.

75 responses to “No Slowdown”

  1. Thanks Tamino, that’s a great demonstration of the potential flaws in their method. It does seem very strange that they would invent a new method to test for a slowdown when there exists a perfectly good method for doing just that, i.e. change-point analysis.

    Speaking of which, I have created an R-Shiny app to play with Bayesian change-point models (or IC-based approximations thereof). It’s preloaded with several annual global temp. series, but any data can be uploaded (don’t try 2 change-points on a very long time series!!). I would welcome feedback…

    https://tanytarsus.shinyapps.io/changepoint/

    • Thanks and very nice.

      Only comment is that probability of a changepoint ought to be part of the output state of the calculation. Stick-breaking priors might be used on the range of years with the number of breaks itself being governed by a Poisson hyperparameter. I’m working on something like this, but need to finish digesting Fyfe, et al.

      In my case, the idea is to use a state-space level + trend model, put process noise on the trend, and model that process noise itself as an interior random walk. I’m admitting the possibility of a change at each year.

      • Thanks hypergeometric,
        not sure exactly what you mean by “part of the output state of the calculation?” The posterior prob. of a change point is calc’d as 1 − Pr(no changepoints) = 1 − Pr(constant-slope model). This is roughly equivalent to putting a binomial prior on the number of change-points (it’s exactly equivalent when only 1 cp is allowed, but it’s a bit more complicated with 2 because of the min. segment length restriction, which is mostly there to speed things up). When I do this ‘properly’ in WinBUGS I do use a binomial prior on the number of changepoints, k~Binomial(Kmax, 0.5), where Kmax is the maximum no. allowed (it’s interesting how little difference increasing Kmax beyond 3 makes when using global temp data). k~Poisson() would work too I think (but might be slow, as without a hard limit on k there would be an enormous model space to explore).

    • (Sorry, ran out of nested depth to reply. This is in response to the comment at March 1, 2016 at 10:53 pm.)

      Like it or not, there could be a changepoint at each and every one of N time points, although I’m not sure the problem could be posed to be able to meaningfully discriminate that from N-1 or N-2 or … or N/2 change points. Accordingly, I’d say if there are M change points, there are up to M+1 regimes having their own trends and levels. A changepoint is a transition from the present regime to another one, admitting it could be one already visited. Since you know WinBUGS, in Plummer’s JAGS there’s a “dcat” categorical function which accepts a vector of probabilities each of which corresponds to a mutually exclusive category. This is often used to elect one of a number of means for a distribution, and this is the kind of choice among regimes I had in mind.

      I am working this, although since I do not do it full time, it will take a while to finish up my efforts, write it up, and polish the code and such so others can use and critique it. I’ll illustrate what I mean then.

      • By the way, this is done by Dobigeon, Tourneret, and Scargle in a different application involving counts at DOI:10.1109/SSP.2005.1628623 and DOI: 10.1109/TSP.2006.885768. There is an extension to state-space modeling which I am working on and hope to have in arXiv soon.

  2. This “hiatus” stuff was in my wheelhouse a while back. Alas, I can’t get to NATURE, and don’t subscribe. I’m hoping they can explain the dramatic differences in variability between the HadCRUT4 ensembles and the climate model ensembles which were shown in the earlier Fyfe, Gillett, and Zwiers paper.

    There’s also related commentary by co-author Ed Hawkins (of subject paper) at
    http://www.climate-lab-book.ac.uk/2016/making-sense/

  3. It has long been my observation that physicists don’t deal well with real randomness. The need to explain every apparent “wiggle” is very high. In most respects, this is a good thing. However, some “wiggles” are just wiggles in a random world. Or even a deterministic but chaotic world at fine resolution. Apparent to observation, but not truly predictable. “Explaining” them causally will never work. Even in principle.

    Before anyone brings up statistical mechanics or quantum mechanics, imagine a world where there really were only a very few such particles, i.e., too few to allow any elegant order to emerge. In such a world, these fields would look very different.

    There is surely some variation that can be explained causally in the system which has not yet been explained. But there is other variation that cannot be given only a single particle–Earth–to examine. This study, to me, appears to fall more under attempting to “explain” the latter variation rather than the former.

    But then I am not a physicist.

  4. As usual, very clear for this layman. Thanks.

  5. Unfortunately, $18 is a bit steep for me to take the time to digest it and incorporate it into my picture of the global climate. I would like to see less superficial discussions of surface temperature in the context of global warming. In my picture of the global climate, global warming is an increase of total energy in the climate system. In contrast to a slow down, I see an acceleration of global warming.

    The atmosphere in the lowest few meters is the location of an extremely small part of the total climate system energy, and the amount of energy there (which can be inferred from observed temperatures at 2 meters above the surface) varies significantly without any variation in the total energy. Attempting to infer TOTAL climate system energy from surface temperature data seems to be intrinsically difficult. I mean, the surface temperature is really important to me personally (it’s where I live), but it is really an insignificant part of the total system dynamics, being more a small effect than a significant cause.

    • uilyam,
      While surface temperature is a small part of the total climate energy system and its average wobbles around a bit, over the last half century it does provide a reasonable proxy for 0–700m OHC, with a ratio of roughly +1ºC of HadCRUT4 per +200 ZJ of 0–700m OHC. This shouldn’t be too much of a surprise given surface temperature is two-thirds SST data.

  6. @ jgnfld

    However, some “wiggles” are just wiggles in a random world. Or even a deterministic but chaotic world at fine resolution. Apparent to observation, but not truly predictable. “Explaining” them causally will never work. Even in principle.

    Yes, you come across this with the 9/11 conspiracy theorists: once you explain gravitational force and potential energy etc. to them, they will often gish-gallop to the fact that one of the hijackers’ passports was found in the street!!

    Obviously difficult to “explain” other than as simply a random event in a chaotic and unprecedented series of events – a “wiggle”

    so actually, as you say, better left as a “wiggle”

    In fact it would be odd if that day had no “unexplainable” events or “inconsistencies”

    It seems the human brain is hardwired to see patterns, it does not like randomness

    (apparently physics explains quite well why paper objects survive explosions – something to do with mass versus surface area)

  7. Hi Tamino,

    This is very interesting, but it seems Fyfe et al. and you are concerned about different things. You are concerned about whether or not there has been a ‘statistically significant’ change in the GMST trend, while Fyfe et al. are concerned with explaining a change in the trend regardless of whether or not it can be considered ‘statistically significant’. To change the example, there has been a great deal of GMST warming from 2011-2016. You may say that this change in warming is not ‘statistically significant’ but that doesn’t mean that it didn’t happen. I think Fyfe et al. would say that the fast pace of warming from 2011-2016 is worth studying even if it’s not statistically significant, because the ‘noise’ in this case is a real physical phenomenon (transitioning into an El Niño state).

    Same thing goes for ~1998-2013. It is possible for this time period to be thought of as statistical noise while at the same time realizing that there might be real physical reasons for the reduction in the 15-year trend.

    [Response: I quite agree. I think a lot of the disagreement stems from exactly what one means when one says “slowdown.” I also see tremendous value in identifying the nature and possible causes of such fluctuations. Unfortunately, deniers make it harder to have a productive discussion about this because they exploit the very fluctuations that we’re interested in understanding.

    I was thinking of doing another post, to emphasize this very distinction. Now I’m even more motivated to do so.]

    • Well, there was a lot of warming in that period before the El Nino started, which appears to have been caused by the “blob” in the North Pacific, which sent the PDO index positive.

    • A post on this would be very welcome. There is true noise. Measurement errors. Instrument biases. Thermal noise. Analysis uncertainties. But none of that explains why 2008 was colder than 2005 or 2015. The average surface temperature of the planet was truly colder in 2008. Winter is colder than summer. Cold spells, droughts, and floods are real.

      The word “noise” is used in many different ways and one person’s signal is another person’s noise. The issue for the climate system is timescale. Weekly, monthly, and decadal variability are real but don’t say much about long-term warming. In the end, I think using the word noise causes more confusion than clarity.

    • “It seems Fyfe et al. and you are concerned about different things.”

      The new paper addresses the confusion about what exactly is meant by “hiatus”:

      Recent claims by Lewandowsky et al. that scientists “turned a routine fluctuation into a problem for science” and that “there is no evidence that identifies the recent period as unique or particularly unusual” were made in the context of an examination of whether warming has ceased, stopped or paused. We do not believe that warming has ceased, but we consider the slowdown to be a recent and visible example of a basic science question that has been studied for at least twenty years: what are the signatures of (and the interactions between) internal decadal variability and the responses to external forcings, such as increasing GHGs or aerosols from volcanic eruptions?

      The Lewandowsky et al. reference was the subject of a RealClimate post last November: Hiatus or Bye-atus? by its authors:

      To date, research on the “pause” has addressed at least 4 distinct questions:

      Is there a “pause” or “hiatus” in warming?
      Has warming slowed compared to the long-term warming trend?
      Has warming lagged behind model-derived expectations?
      What physical mechanisms underlie the “hiatus”?

      Those questions are not only conceptually distinct, they also involve different aspects of the data and entail different statistical hypotheses. Nonetheless, those questions have frequently been conflated in the literature, and by using a single blanket term such as “pause” or “hiatus” for distinctly different phenomena and research questions, unnecessary confusion has resulted.

      [Response: I quite disagree. Everybody agrees there’s no “cease” or “stop” or “pause” — but Fyfe et al. claim a “slowdown,” yet they don’t provide evidence of it.

      Decadal variability is indeed an issue worth considering, but if one wishes to start that discussion by claiming a “slowdown,” one had better provide actual evidence of a slowdown.]

      • There are many ways of trying to assess trends in temperatures based upon observational data. Tamino has done this over and over again, with many datasets, and using many techniques. My way, best illustrated in Figures 13 and 14 of https://johncarlosbaez.wordpress.com/2014/06/05/warming-slowdown-part-2/, involves a random walk with Gaussian innovations. Sure, while the variance of the innovation is chosen, hopefully in a sensible way, the net is that the time rate of change of temperature is (very) strictly positive, even if the second derivative bops around a bit.

        Sure, these are observation-only conclusions. But they oughtn’t be summarily dismissed in consequence, given their success in econometrics, economics, and ecology, per http://www.pnas.org/content/110/13/5253.abstract (despite critics, http://arxiv.org/pdf/1305.3544v1.pdf).

        Moreover, despite not reading the latest Fyfe et al. paper (which shields itself from broad criticism by hiding behind a paywall), there is a certain amount of confusion, it seems, regarding what, precisely, the expectations are for warming in any time interval. Do they include all possible realizations of futures, as climate models apparently do? Or don’t they? And if they don’t, what predictive mechanism, better than what Tamino offers, or my state-space approach, do these expectations draw upon?

      • Tamino, I failed to make it clear I wasn’t agreeing with Fyfe et al.’s conclusions, but with your previous comment: “I think a lot of the disagreement stems from exactly what one means when one says ‘slowdown’. I also see tremendous value in identifying the nature and possible causes of such fluctuations.” Lewandowsky et al.’s RC post distinguishes four questions to ask about any alleged slowdown. The first two call for a statistical approach, as you and Fyfe et al. have done. The second two are about how forcings physically interact to cause observed short-term, e.g. decadal, variation.

        AGW-deniers were easily shown to be wrong about a statistically-identifiable slowdown, but they got more traction with the claim that model ensemble projections were for a stronger warming between 1998 and 2014 than observations showed. The 2014 Nature Geoscience commentary by Schmidt et al., Reconciling warming trends offered a sound explanation for the discrepancy: “Conspiring factors of [modeling] errors in volcanic and solar inputs, representations of aerosols, and El Niño evolution”.

        While it turned out the alleged “hiatus” didn’t exist statistically, interest in it led to resolving some “noise” into forcings, and to improvements in coupled GCMs. That indeed has tremendous value, not least by showing the public that climate science is self-correcting and progressive, like all science.

        [Response: Alas, the scientific power of progress and self-correction so illustrated is lost on almost all of the public, in large part because it’s twisted by propagandists. All they hear is “the scientists were wrong.”]

    • Response: I quite agree. I think a lot of the disagreement stems from exactly what one means when one says “slowdown.” I also see tremendous value in identifying the nature and possible causes of such fluctuations. Unfortunately, deniers make it harder to have a productive discussion about this because they exploit the very fluctuations that we’re interested in understanding.

      I was thinking of doing another post, to emphasize this very distinction. Now I’m even more motivated to do so.

      Maybe I already wrote that post: How can the pause be both ‘false’ and caused by something?

  8. It’s not even a surprise that (using the extra data) in my graph the last seven 15-year intervals have trends below the since-1970 trend. They’re not independent, because two consecutive overlapping 15-year intervals share 14 years in common.

    Is there a formal way to account for that? I know you said that annual averages don’t show much autocorrelation, but presumably the autocorrelation of the overlapping trends themselves can be estimated; is there some way of incorporating that information into the error bars?

    • I was curious about the CI estimation procedure here as well.

    • I started wondering about the correlations between the trend estimates, so I did some work on this front. There are fairly simple formulas to update the estimate of the slope of a linear regression if data points are added to or removed from the model. In the case here, where we have time series data and we are removing one point from the beginning of the regression and adding another point at the end of the regression, the formula becomes particularly straight forward (assuming I haven’t made any algebra errors).

      To compute the change, we need the slope for the interval, the average temperature for the interval, the old data point (to be removed from the beginning), and the new data point (added at the end). If the trend is in degrees per year, the change in slope is
      [ 6(n+1) old + 6(n−1) new − 12n average ] / [ (n−1)n(n+1) ]
      where n is the number of data points used to compute the trend (in this case, 15).

      To compute the changes in the trend across all of the time intervals, we need to update the average temperature at each interval as well, of course.

      Under the white noise assumption (the random variability in the temperature is uncorrelated between years and the standard deviation of the random variability is constant), we can use the update formula to find that the correlation between the trends of consecutive overlapping intervals is (n−3)/n, which is quite large.

      On average the following year’s trend will move back toward the long term trend. Under the assumption that the noise is Gaussian, the approximate probability that if the confidence interval barely excludes the long term trend in one year, it will also exclude the long term trend the following year, is 25%.
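
      A quick numerical check of both results, as a sketch (the series y is made-up white noise; nothing here comes from the paper):

      ```r
      ## Verify the slope-update formula and the (n-3)/n correlation.
      set.seed(42)
      n  <- 15
      y  <- rnorm(2000)                    # white noise: true trend is zero
      tt <- seq_len(n)
      slope <- function(v) unname(coef(lm(v ~ tt))[2])
      b <- vapply(seq_len(length(y) - n + 1),
                  function(i) slope(y[i:(i + n - 1)]), numeric(1))

      # Consecutive overlapping trends correlate at about (n - 3)/n = 0.8:
      cor(b[-length(b)], b[-1])

      # Change in slope when the window advances one year:
      i <- 1
      delta <- (6 * (n + 1) * y[i] + 6 * (n - 1) * y[i + n] -
                12 * n * mean(y[i:(i + n - 1)])) / ((n - 1) * n * (n + 1))
      all.equal(b[2] - b[1], delta)        # TRUE
      ```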

  9. There was a ~15 year interval between the great El Niños of 1982 and 1997. I suggest that if the 2015 El Niño had developed in 2012, the 15-year overlapping trends would have shown a steady trend up.

    However, the current El Niño took its time, and was born 3 years late as “Sr. Gordo”.

    I think it would have been a better paper if they had used 18-year overlapping periods and published it in 2017.

  10. There’s something I’d like to understand a bit better, Tamino. The error bars in the 15-year trends in the charts you gave – those errors come from the measurement errors, the linear regression model, or both?

    [Response: From the regression model.]

    I guess I’m not sure how you’re using a “white noise model” to calculate the uncertainty, or what noise the white noise model is supposed to model.

    [Response: I’m just using it to demonstrate that even then (the model with the *least* persistent fluctuations), the process of trends of overlapping 15-year spans too easily leads to appearances of meaningful deviations.]

    Do you expect this would change much if you didn’t use a linear regression model, but some other model? (e.g., with autocorrelation).

    [Response: Autocorrelation would make it even more likely to be fooled by appearances, concluding some trend change when it isn’t justified. But the autocorrelation of annual averages is small.]
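
    A small R sketch of that last point: the same 15-year slope test applied to white noise and to AR(1) (“red”) noise. The AR coefficient of 0.5 is deliberately exaggerated for illustration; as noted above, annual averages are much less autocorrelated.

    ```r
    ## False-alarm rate of a 15-year trend test: white vs. AR(1) noise.
    set.seed(123)
    nsim  <- 10000
    width <- 15
    tt <- seq_len(width)
    reject <- function(x) {
      cs <- coef(summary(lm(x ~ tt)))
      abs(cs[2, 1]) > qt(0.975, width - 2) * cs[2, 2]   # nominal 95% test
    }
    white <- mean(replicate(nsim, reject(rnorm(width))))
    red   <- mean(replicate(nsim,
               reject(as.numeric(arima.sim(list(ar = 0.5), width)))))
    c(white = white, red = red)   # ~0.05 for white noise; noticeably more for red
    ```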

  11. One way to reduce the uncertainty about short term trends is to reduce the uncertainty about the intercept… If you repeat Fyfe et al.’s procedure but make each 15-year trend start at the end of a trend from 1970 to the start year (i.e. a 2-segment regression with one segment from 1970 to t and the other from t to t+14 years), you get something like this…

    Using continuous trends (rather than effectively allowing sudden jumps in temp just prior to each 15-year period) stops the 1998 El Niño having such a large downward influence on trends starting around then, which seems desirable to me if the trend you are interested in is the underlying rate of warming.
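
    A minimal sketch of such a continuous two-segment fit (the hinge term forces the two lines to meet at the breakpoint t0; yr and temp are hypothetical annual data, as in the earlier sketches):

    ```r
    ## Continuous piecewise-linear ("broken stick") regression.
    two_segment <- function(yr, temp, t0) {
      lm(temp ~ yr + pmax(yr - t0, 0))   # extra slope after t0, no jump at t0
    }
    # fit <- two_segment(yr, temp, 2000)
    # coef(fit)[2]          # trend before t0
    # sum(coef(fit)[2:3])   # trend after t0 (base slope + hinge slope)
    ```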

  12. Good post and I tend to agree. FWIW, I made very similar points on missing confidence intervals and rolling intervals in critiquing a Cato Institute study from last year. Interested readers can find the R code and accompanying output here: https://github.com/grantmcdermott/cmip5-models

    Lastly, on the subject of change point analysis, trend breaks, etc. this 2013 Nature Geoscience paper (and the accompanying commentary piece) are well worth a read: http://www.nature.com/ngeo/journal/v6/n12/full/ngeo1999.html
    http://www.nature.com/ngeo/journal/v6/n12/full/ngeo2015.html (commentary)

  13. I have assumed that the whole slowdown/hiatus/pause discussion was caused by the single data point for 1997/1998. Without that one anomaly, would Fyfe et al. have been written?

  14. I have a friend who likes to compute the trend line for the moving trend slopes. Using data from the NASA-GISS ‘GLB.Ts+dSST.txt’ file with the 30-year moving trend from 1880 to the present, he concluded that warming is accelerating at 1.66 ± 0.1 K per century per century. https://www.facebook.com/photo.php?fbid=1087319837987530&set=p.1087319837987530&type=3&theater
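
    For the curious, a sketch of how such a “trend of moving trends” could be computed, reusing the hypothetical overlapping_trends() from the first sketch. One caveat: the moving trends overlap heavily and are strongly autocorrelated, so this gives a point estimate of acceleration, but naive error bars on it would be far too small.

    ```r
    ## Acceleration as the slope of 30-year moving trends versus time.
    tr30 <- overlapping_trends(yr, temp, width = 30)
    acc  <- lm(trend ~ mid, data = tr30)
    coef(acc)[2] * 100 * 100   # deg/yr per yr -> K per century per century
    ```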

  15. Still, what we can see on your GISS graph is a warmer trend around 1940 than in the 2000s. Fyfe is only talking about a slowdown, which we can see on YOUR graph also, despite what you’re saying. Though I think your 95% confidence interval is interesting. You should not think recognizing a slowdown is giving credit to sceptics; it’s just a better understanding of natural variability. Indeed now we’re observing an acceleration of global warming.

    [Response: “we can see” is one of the most pervasive traps in statistics. Either “slowdown” passes statistical tests (which it does NOT), or you have to stretch the definition of “slowdown” to include every single year which bucks the trend, compared to the previous year.

    Was there a “slowdown” from 2003 to 2004 followed by a “speedup” from 2004 to 2005? “Slowdown” from 2007 to 2008, followed by “speedup” from 2008 to 2009? It sounds to me like that’s what your perspective requires.

    As for acceleration in global warming, I won’t claim that until the statistics are in to support that idea either. So far, they’re not.]

    • Johan… The “we can see” crowd NEVER takes into account the power to see. jimt discusses this from a Bayesian approach; power analyses get at the same issue. I threw together a quick such analysis that I think is debugged, and here are the results:

      The underlying trend in the annual Jan-Dec GISS data 1970-2015 is .01736 degrees/yr and the standard error of the residuals is .09095 degrees (from the R lm function). Let’s assume this is correct. Now, what are the odds of picking up this real trend in any short period of years? Adding random (white) noise with mean 0 and sd .09095 to the trend line repeated 50,000 times (giving results stable to 1% or better) shows the following:
      For 10 years, the real trend (remember it is defined into the numbers, so it really is there) is actually identified (i.e., significant regression) only 33% of the time. Continuing…
      12 years, 54% of the time
      14 years, 75% of the time
      16 years, 91% of the time
      17 years, 95% of the time
      18 years, 98% of the time
      In other words, the trend could “really” be there, but will be “seen” as “flattened” or “slowed” simply by chance a large percentage of the time until we start examining sequences of more than about 17 years. Trying to “explain” this “slowing” in shorter time periods can easily be a fool’s errand and be indeed meaningless.

      Throw in cherrypicking, btw, and you can increase the “seeing” of false “flats/slowdowns” dramatically somewhere along the line in a long series of years, but that’s a somewhat different issue (though one worth exploring as well).
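
      A condensed R sketch of the simulation described above (5,000 replicates rather than 50,000, for speed; the trend and noise values are the ones quoted from the GISS fit):

      ```r
      ## Power to detect a known trend in short spans of white noise.
      set.seed(99)
      power_for <- function(len, slope = 0.01736, sigma = 0.09095, nsim = 5000) {
        tt <- seq_len(len)
        mean(replicate(nsim, {
          y <- slope * tt + rnorm(len, sd = sigma)
          summary(lm(y ~ tt))$coefficients[2, 4] < 0.05   # slope p-value
        }))
      }
      sapply(c(10, 12, 14, 16, 17, 18), power_for)
      ```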

      • As Tamino did with his change-point analysis argument, you bring an interesting angle with this power analysis. I understand what you mean, and I think that’s about the same idea the Met Office brought up a few months ago, concluding that for a hiatus to be real, the slower trend would have to be much longer than 15 years. This type of slowdown may happen again.
        However, I think Fyfe’s study takes the story the other way round: imagine you’re working on forecasting temperatures…. You have to detect what types of natural variation are able to affect temperatures. The aim is not to evaluate climate sensitivity in general; it is to be able to predict climate evolution related to the ACTUAL state of the climate. That’s what the Met Office does with 5-year forecasts (not the same stuff I was talking about at the beginning), based on the actual state of the climate, while climate models used in the last IPCC report didn’t follow such small cyclical variations.

      • Yes, given the size of the noise and the trend, you will need at least 17 years if you assume you know the real trend, and 23 years if you don’t know the real trend. Short trend calculations over only 15 years should not be trusted.

  16. When you say a “slowdown” from 2003 to 2004 is followed by a “speedup” from 2004 to 2005… you’re probably right in a way. Yet I don’t find it meaningless to try to explain those small variations… But honestly, to me, the most convincing part of your article is the 95% confidence interval not falling outside the range of the 1970-present trend. That’s a point.
    On the other side, Fyfe is just saying the slowdown is outside one standard deviation, which is maybe the definition of a small slowdown… not a hiatus.

  17. From a Bayesian or information theoretic perspective, to find any positive support in the data for a change in trend since 1970 you have to 1) remove 2015 and 2) use a very narrow prior, centred around 2005, for the possible timing of the trend change.
    Using HADCRUT up to 2014, a 2-slope model with a change in trend in 2005 has the highest support (based on AIC), and is about twice as likely as the constant trend model. When you add 2015 data the weight of evidence flips, so that the linear model is twice as likely as the “slow-down since 2005” model. In both cases support for the linear model increases as the prior uncertainty about the timing of trend change increases: a completely flat prior over the full period 1970 to 2015 (i.e. the most objective prior) fails to find positive evidence for a non-linear trend, even if 2015 is excluded.

    So if someone had a genuine prior expectation (i.e. based on something other than looking at the data!!) that there may have been a change in trend in the last 20 years, HADCRUT data up to 2014 would not have shifted their belief either way (Bayes factor ~1), although any slowdown was small:

    But last year’s data should have them now leaning toward no change:

    And if someone thought (for whatever reason) warming stopped, or even slowed down, in 1998, the data would strongly suggest otherwise …

    (the salmon bar is the prior prob. of a change in 1998, the black bar is the posterior prob.)
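
    For concreteness, a minimal version of that AIC comparison (a sketch only; the app linked above does this more carefully, and yr/temp are hypothetical annual HADCRUT values). The hinge term forces the two trend lines to meet at 2005:

    ```r
    ## Straight line vs. continuous two-slope model breaking in 2005.
    keep <- yr >= 1970
    m1 <- lm(temp[keep] ~ yr[keep])
    m2 <- lm(temp[keep] ~ yr[keep] + pmax(yr[keep] - 2005, 0))
    exp((AIC(m1) - AIC(m2)) / 2)   # evidence ratio; > 1 favours the 2-slope model
    ```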

    • @jimt: How sensitive is it to the choice of the start year 1970?

      The trend (before the (possible, but unlikely) trend change) may be overestimated due to ENSO and volcanic variability. See my comment below.

      • Uli,
        The results re recent trend changes are not much affected by the choice of start year (if you start pre-1970 you need 2 changepoints, but starting any time from 1970 gives much the same result – i.e. almost no evidence of a change in trend). Try yourself here https://tanytarsus.shinyapps.io/changepoint/
        (it may take a few seconds to load up fully…and it doesn’t seem to work on mobile devices)

        The sensitivity to end points is much reduced by using continuous trends – i.e. making the 2 trend lines meet up at wherever the putative change in trend occurs. There’s a big difference between comparing trends before and after 1998 (say) like this:


        versus like this:

        The latter is more sensible IMHO.

      • @jimt: Thanks!
        It seems to make not much difference if there is no jump.
        But I still think that the long-term trend is a little bit overestimated if you start it in these years. The years from 1964 to 1976 are mostly below the long-term trend because of the 1963 Agung eruption and the La Niña period in the ’70s.

      • Chris O'Neill

        The years from 1964 to 1976 are mostly below the long-term trend because of the 1963 Agung eruption and the La Niña period in the ’70s.

        The 1950s were pretty cool too.

  18. The extent of statistical evidence for a slowdown seems to be limited to this:

    Actually, they also say this:

    Using this more physically interpretable 1972–2001 baseline, we find that the surface warming from 2001 to 2014 is significantly smaller than the baseline warming rate.

    But unfortunately, they don’t explain how they arrived at that conclusion. I tried to ask over at Ed Hawkins’ blog, but got an error message when I tried to post. So I took a quick look at the data myself, though only HadCRUT4 so far. A Chow test on the annual data doesn’t find a statistically significant changepoint at 2001; it comes fairly close to significance, but only when I leave out the 2015 data. With that, it’s nowhere near. I also tried monthly data, using a Monte Carlo simulation to estimate significance in the presence of autocorrelation; same results, no changepoint in sight.
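
    For reference, a hand-rolled Chow test at a fixed breakpoint looks something like this in R (a sketch only; yr and temp are hypothetical annual HadCRUT4 anomalies, and this is not necessarily the procedure Fyfe et al. used):

    ```r
    ## Chow test: does splitting the regression at `brk` improve the fit?
    chow <- function(yr, temp, brk) {
      rss <- function(idx) sum(resid(lm(temp[idx] ~ yr[idx]))^2)
      n <- length(yr); k <- 2                       # parameters per segment
      r_pool  <- rss(seq_len(n))
      r_split <- rss(which(yr < brk)) + rss(which(yr >= brk))
      f <- ((r_pool - r_split) / k) / (r_split / (n - 2 * k))
      pf(f, k, n - 2 * k, lower.tail = FALSE)       # p-value
    }
    # chow(yr, temp, 2001)
    ```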

    Absent an explanation of how the significance of the trend difference was derived, I’m finding it very hard to believe.

  19. The argument should start at first principles. What’s happening at the top-of-atmosphere? If there is no change in the nature of the heat imbalance there compared to pre-Industrial CO2 concentration, then the planet is simply warming at a rate consistent with the trajectory of CO2 in the atmosphere.

    And if the planet is warming at such a consistent rate, then any variability in surface temperature records is just that – a reflection of physical and measurement variabilities, with all the statistical implications that Tamino has tirelessly reiterated for many years. In this light any “pause” or “slow down” requires extraordinary evidence beyond a simple claim of such, because without a solid physical basis they’re sooner or later going to be balanced by “accelerations”, “jumps” and other such, all of which will in hindsight simply be manifestations of the range of climatic phenomena that are happening under the ToA.

    And in greater hindsight the pause/slowdown chatterings will all have been seen to be a huge, collective lesson in fiddling led by ideological Neros, whilst Rome and the rest of the planet burns.

  20. It is disingenuous to state that there has been a slow-down, because a slow-down is only meaningful if considered over an interval. To have any meaning, the interval must be statistically meaningful. Anything less than 30 years is not meaningful in terms of climate.

    Fyfe et al. should state that they are interested in internal variability (e.g. noise) rather than trend. There is a long and venerable history of studying noise. Embrace that tradition, but FFS at least have the perspicacity to realize that what you are studying is noise.

  21. When you say “there is a slowdown in surface warming” you have a problem without knowing it, because there is no such thing as “warming”. Everybody speaks about “warming” supposing that everybody else is referring to the same thing, but this is not the case.
    This is the curse of language. Language, especially elegant language, is always an abbreviation, and would be cumbersome and unwieldy otherwise; but the more you abbreviate, the more diffuse you get, and maybe end up in nonsense.
    What there is, what exists, is e.g. warming between two consecutive years, warming between consecutive 5-year periods, and so forth. What does exist is the rate obtained by a linear fit of consecutive (or overlapping, for that matter) 15-year periods, 30-year periods, etc. That is something you can talk about.
    But not about “warming”.
    And then, of course, there is the question of the predictive value of such statements about the past. There isn’t any. All those painstaking measurements and trend calculations are worthless when it comes to predicting. To make a prediction, you need a model, an additional assumption. Even if it is only something like “My system is behaving in such a benign way that it doesn’t change trends faster than (put value in)”. We have much more sophisticated models. Now we can take our measurements and make plausibility tests of our models for the past. And pray. (I’m not religious, btw.)

    • Seriously?

      “Warming” is a code word for the planetary response to top-of-atmosphere radiative forcing.

      “All … trend calculations are worthless when it comes to predicting.” Nope. They are, in fact, models, albeit of a specific kind. For example, an autoregressive model would posit that near-term future trends can be completely predicted using the observations in the recent past. (I’m not saying this is sufficient for temperature, simply that it is a kind of model.)

      Models are not assumptions. Many climate models are careful combinations of experimentally verified submodels. Yes, they can be tested by hindcasting, but their predictive value comes from repeated verification against a developing future (with proper criteria) to ascertain predictive skill. Ultimately they are based upon an assumption (if you want to call it that) that logic and maths work.

  22. As McIntyre and McKitrick helpfully pointed out, noise in the climate system is red (thanks, guys!!!). I think that someone with serious math chops could use M&M’s mathological audiot-ing techniques to conclusively show that the alleged downturn is a statistical artifact due merely to (cherry) red noise. I understand that M&M emphasized the importance of centering, and if one centers trends on 1998 – http://www.woodfortrees.org/plot/gistemp/from:1980/plot/gistemp/from:1980/trend/plot/gistemp/from:1988/to:2008/trend/plot/gistemp/from:1990/to:2006/trend/plot/gistemp/last:120/trend – there’s not a lot of difference, although the shorter trends are higher, comparable to the most recent 10-year trend.

    • There are many possible kinds of “red noise”, and the mere characterization of residuals as “being red” does not help much. It’s not even deep. A process with constant energy density at all frequencies is impossible to physically realize, so all real noise must necessarily be band-limited and, so, red.

      Moreover, simply because “white noise” is an imperfect characterization does not mean it is useless. Indeed, within a band where there is noise, as long as energy is uniformly distributed within it, it’s a great model. Energy at arbitrarily high frequencies contributes to a temporal feature in an amount which decays with its distance from the baseband. Energy at arbitrarily low frequencies is indistinguishable from DC.

  23. Add approximate 95% confidence limits. The “slowdown” looks even more like noise, and not much of that.

    Plot the post-1970 trend and the post-1998 trend together and it becomes apparent that the deviation from the long-term trend is not significant.

  24. Did you pass your analysis by any of the authors, Tamino? Michael Mann seems to be trying to justify it (well, a tiny bit). Says something about a “running window” and calls the analysis robust.

  25. hmmmmm, popcorn …
    ——————
    Tony ‏@TonyPrep Feb 26
    @MichaelEMann @gayathriv @EEPublishing Tamino likes the paper but shows that it’s wrong on the slowdown: https://tamino.wordpress.com/2016/02/25/no-slowdown/

    Michael E. Mann ‏@MichaelEMann Feb 26 State College, PA
    @TonyPrep @gayathriv @EEPublishing Tamino is entitled to his views. I think the paper makes compelling case though. Slowdown ended in 2012..
    ——————

    • If it ended in 2012 it was an 11-yr slowdown at most (I’d say 7). Can’t help agreeing with Lewandowsky et al.

  26. Sheldon Walker

    The Slowdown shines brightly, like a light in the dark.

    If you can’t see it, then you either have your eyes closed, or you are facing in the wrong direction.

    Here is my first attempt to find the Slowdown using an educated guess.

    Very strong graphical evidence for the Pause. (Part 2)

    I have since developed a much more accurate method, and will publish soon. The accurate method showed that my educated guess was very accurate.

    [Response: You are a fool.

    Time was, I’d have shown you the folly of your ways (not that you’d have learned from it). Now, you and your ilk are no longer relevant.]

  27. There may be the appearance of a slowdown also if the slope of the reference period is higher for some reason. For example you may start in a La Niña period (e.g. the 1970s) and end in an El Niño period (e.g. the 1990s). Or you start the reference slope calculation after a large volcanic eruption and end after the recovery from a volcanic eruption. E.g. if you calculated the trend starting in 1982 or so and ending around 2000, you would include the recovery from the 1982 and 1991 volcanic eruptions and so overestimate the trend in this period.

  28. Martin Smith

    I have come to believe that the “pause” claim is nothing but a red herring. It is used to continue giving the impression to the general population that the science is not clear. But there have been 5 “pauses” since 1970, as this now famous graph shows: http://www.skepticalscience.com/graphics.php?g=47

    The causes of the pauses have been discussed. The Proponents of Pause ignore the possible causes, and they ignore the other pauses. It sounds so much like Dr. Seuss, I wanted to call this the Seuss Effect, but it turns out there already is a Suess Effect, and it has something to do with CO2: https://en.wikipedia.org/wiki/Suess_effect

  29. “Explaining the wiggles is just as important as the overall underlying trend.”
    –Ed Hawkins at his blog:
    http://www.climate-lab-book.ac.uk/2016/making-sense/#comment-2014

    Let me put this in simple words:

    “Yes, we’re on our way to Hell in a handbasket, but the steering on our handbasket is wobbly and as a result we are not always aimed _precisely_ in the direction of Hell; the destination toward which we’re steering ourselves is inarguable, but hey, look, there are little wiggles back and forth that are fascinating to observe and demand our attention to the precise ….. oh, wait.”

  30. I grew up as a biology faculty brat and heard a lot of science listening under the table or behind the furniture as a little kid, and I understand that within the area of a scientist’s expertise, nature is fascinating and beautiful even when she’s trying to kill you or your entire species.

    What we’re seeing _is_ nature — and so, is fascinating.

    Nature poked and prodded and thrashing, the ‘angry beast’ aroused.

    If this were simply a case of damage from human malicious stupidity with no feedbacks, nothing revealing in exquisite new detail how nature works, the scientists would not find the details fascinating.

    Poisoning is deplorable. Bioaccumulation is deplorable _and_ interesting.

    How did all those toxic heavy metals get into the coal beds, eh?

    I recall, decades ago, some student reacting with shock-horror-pearl-clutching to a lecturer who said something about how viruses work and described one of the mechanisms as “beautiful” — the kid was having trouble understanding how anything about smallpox could be beautiful.

  31. So I’m sitting in a stiflingly hot room on the second day of above-average early March temperatures that are more like the height of summer than the commencement of a southern hemisphere autumn, and I wondered whether the nature of short-term (say, ~a week) excursions from a locale’s climatological means has been systematically investigated. There have of course always been unseasonal days and weeks, but is there a substantive difference in the numerical and physical presentations of such these days compared to, say, the middle of the 20th century?

    My fruit trees are definitely saying so, as are my paddocks and the resident wildlife, but I’m wondering if there’s been a systematic analysis of the character of “unseasonal” weather events, separate from the analyses of longer periods that reflect changes in climate.

    If anyone has any insight into this I’d appreciate a pointer.

  32. Hmm, Tamino v M Mann.
    I would hope the better man wins

    [Response: Friends disagree, scientists dispute. I would hope the better ideas win.]

  33. So, I read Fyfe et al. I know it was a Nature Climate Change comment, but it seems to me that if one is going to explain hiatuses in this manner, a budget of the forcings for each interval should be constructed and presented. That would look like Figures 6 and 7, and Table II of doi: http://dx.doi.org/10.1175/JCLI-D-15-0063.1 as an example.

    I’ll have more to say, probably in arXiv, over the next few weeks.