Trend: Pat Michaels and Ryan Maue ride the crazy train

A reader asked that I estimate the trend in the JRA-55 data for global temperature, because it is touted by climate deniers Patrick Michaels and Ryan Maue. Let’s have a look. Here’s the data:


These aren’t observed data, they’re reanalysis data, the output of a computer model which uses observed data of many kinds to guide a weather simulation. Reanalysis data has some advantages, particularly that we can use the laws of physics to derive estimates of things we haven’t been able to observe. But for something like global mean temperature, one is better advised to use that instrument designed to measure temperature: the thermometer.

Undaunted, we forge ahead. Step 1 is to estimate the linear trend, which really represents the average rate of global warming during the 40-year period of record for these data. We have to account for some tricksy aspects like autocorrelation of the noise, but we have tools for that. Least squares regression (corrected for autocorrelation) says the rate is 0.17 +/- 0.03 °C/decade (95% confidence interval).
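For readers who want to try this at home, here is a minimal R sketch of the general approach. It uses synthetic stand-in data rather than the actual JRA-55 series, and a simple AR(1) inflation of the standard error rather than necessarily the exact autocorrelation correction behind the numbers above:

##############################################
# Minimal sketch (synthetic stand-in data, not the actual JRA-55 series):
# fit a straight line by OLS, then widen the trend uncertainty to allow
# for AR(1) autocorrelation in the residuals.
set.seed(1)
yr   <- seq(1979, 2019 + 11/12, by = 1/12)              # monthly time axis
anom <- 0.017*(yr - 1979) + arima.sim(list(ar = 0.6), length(yr), sd = 0.1)

fit    <- lm(anom ~ yr)
rho    <- acf(residuals(fit), plot = FALSE)$acf[2]      # lag-1 autocorrelation
se     <- summary(fit)$coefficients["yr", "Std. Error"]
se.adj <- se * sqrt((1 + rho) / (1 - rho))              # AR(1) inflation

cat(sprintf("trend = %.2f +/- %.2f degC/decade (95%% CI)\n",
            10*coef(fit)["yr"], 10*1.96*se.adj))
##############################################

The inflation factor sqrt((1 + rho)/(1 - rho)) is the standard correction for the reduced number of effectively independent observations when the noise behaves like an AR(1) process.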

We can also use a non-parametric trend estimate; I’m fond of Theil-Sen regression, and “L1 regression” (least absolute deviations) is also good. Both give the same answer as least squares regression: 0.17 +/- 0.03 °C/decade.
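Here is one way to sketch both estimators in R, reusing the synthetic yr and anom from the previous example (the quantreg package is just a convenient choice, not necessarily what was used for the numbers above):

##############################################
# Theil-Sen: the median of all pairwise slopes
ij       <- combn(length(yr), 2)
ts.slope <- median((anom[ij[2, ]] - anom[ij[1, ]]) /
                   (yr[ij[2, ]]   - yr[ij[1, ]]))

# L1 (least absolute deviations) regression via the quantreg package
library(quantreg)
l1.slope <- coef(rq(anom ~ yr, tau = 0.5))["yr"]

c(TheilSen = 10*ts.slope, L1 = 10*unname(l1.slope))     # degC/decade
##############################################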

But … maybe the true trend isn’t just a straight line. I looked for changes in the rate of global warming, using polynomial regression, changepoint analysis applied to linear splines, and the analysis of variance. All of them give the same result: no evidence of any trend change during this time span. Maybe the trend changed — but there’s no proof, there’s not even any evidence.
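A simplified sketch of such a changepoint scan, using a single linear-spline hinge on the synthetic stand-in data, looks like this; the caveat in the final comment is the crucial part:

##############################################
# Changepoint scan with a continuous linear spline (one hinge),
# reusing the synthetic 'yr' and 'anom' from above.
cands  <- seq(1985, 2014, by = 0.25)            # candidate change times
F.stat <- sapply(cands, function(b) {
  hinge <- pmax(yr - b, 0)                      # slope allowed to change at b
  anova(lm(anom ~ yr), lm(anom ~ yr + hinge))$F[2]
})
c(best = cands[which.max(F.stat)], F = max(F.stat))
# Caveat: the biggest F out of many tries must be judged against a null
# that accounts for trying every candidate (and for autocorrelation),
# not the ordinary F table -- otherwise "trend changes" appear from noise.
##############################################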

The rate estimated from the JRA-55 data is in excellent agreement with the rate estimated from other global data sets (the ones based on thermometers; note that the rates are given in °C/yr).

All data sets — including the JRA-55 data — say that the rate since 1979 cannot possibly be as low as 0.1 °C/decade (0.01 °C/year):


Patrick Michaels and Ryan Maue have this to say about the JRA-55 data:


Figure 2. Monthly JRA-55 data beginning in January, 1979, which marks the beginning of the satellite-sensed temperature record. The average warming rate is 0.10⁰C/decade and there’s a clear “pause” between the late 1990s and the beginning of the recent El Niño.

Patrick Michaels and Ryan Maue proved one thing: when it comes to trend analysis, they are incompetent.


Thanks to the kind readers who help support this blog. If you’d like to help, please visit the donation link below.


This blog is made possible by readers like you; join others by donating at My Wee Dragon.


20 responses to “Trend: Pat Michaels and Ryan Maue ride the crazy train”

  1. Even just using the Mark One eyeball on the graph suggests that 0.1C/decade is an error. Over 40 years, that would mean the total warming would be 0.4C, and the only way to get that is to say that the correct anomaly in 1979 was -0.2C, and the correct anomaly in 2019 is +0.2C. You can only get that if you ignore the downward dip in the pre-1980 part of the graph, ignore anything from 2016-18, and pretend the downward tendency at the end will continue to the 0.2C goal.

    And the “pause”? Only if you focus on the peaks in the late 1990s and the lowest point in the noise in 2019. Typical denier crap.

  2. Lars Träger

    I guess they use this series to “prove” that “warming stopped in 2016, and temperatures are now going down”. They are probably as simple as that.

  3. JRA-55 does have a momentarily-unique quality – being the only surface temperature record complete for all of 2019, albeit a reanalysis rather than a measured record. And despite the absence of identifiable acceleration, JRA-55 puts 2019 in 2nd place among the warmest years on record, and also puts December 2019 as both the highest anomaly of 2019 and the warmest December on record. Given the absence of El Niño, the December 2019 anomaly is surely “scorchyissimo!!!”
    What I find a little strange is that, although the last five years are the warmest on record, and warmest by a considerable amount (the five years together averaging +0.17ºC above what was previously the warmest year), yet there is no statistical measure that suggests acceleration, or at least some form of step up. So, with the hottest December-on-record behind us, it will be interesting to see what this all looks like at the end of 2020.
    2016 …. …. +0.58ºC
    2019 …. …. +0.51ºC
    2017 …. …. +0.45ºC
    2015 …. …. +0.39ºC
    2018 …. …. +0.37ºC
    2005 …. …. +0.29ºC
    2014 …. …. +0.27ºC
    2010 …. …. +0.26ºC
    1998 …. …. +0.25ºC
    2002 …. …. +0.25ºC

    • To beat an expiring horse one more time: “yet there is no statistical measure” means, in my opinion and experience, that one hasn’t looked far enough afield.

    • To see acceleration in a noisy time series, you need very long series.

      Or you need to search the surge.
      http://variable-variability.blogspot.com/2017/06/pause-hiatus-recent-warming-explosion.html

      • @Victor Venema,

        To see acceleration in a noisy time series, you need very long series.

        Given the context, that is, global temperatures, this may be true. However, FWIW, in general this statement is ambiguous. It may or may not be true depending upon what’s meant by “noise” (colloquial usage or technical?), the size of the (truly random) noise (coefficient of variation), whether or not the series is stationary (in the sense that its fundamental parameters are invariant), which quantile of the series is of interest, and what is known about the process which contributes the “noise”.

        In the last instance, one of the best recent scientific examples is LIGO, where the signals are faint, rare, and unpredictable in time, but the instrument in situ is really, really well understood.

        I would also suggest that moving (tapered) window analyses, whether in time or frequency, offer another set of samples which could contradict the statement in other contexts.
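        As a rough illustration of the record-length point in this sub-thread (entirely made-up numbers, not a claim about any particular data set), a small R simulation shows how the chance of detecting a weak quadratic (acceleration) term grows with series length:

        ##############################################
        # How often does a small acceleration term reach nominal p < 0.05,
        # as a function of record length?  Purely synthetic illustration.
        set.seed(123)
        accel.power <- function(n.years, trend = 0.017, accel = 2e-4,
                                noise.sd = 0.15, reps = 500) {
          t <- 1:n.years
          mean(replicate(reps, {
            y <- trend*t + accel*t^2 + rnorm(n.years, sd = noise.sd)
            summary(lm(y ~ t + I(t^2)))$coefficients["I(t^2)", "Pr(>|t|)"] < 0.05
          }))
        }
        sapply(c(20, 40, 80), accel.power)   # detection rate climbs with length
        ##############################################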

  4. Did the M&Ms post the work they did to come up with their 0.10⁰C/decade number, perhaps someplace other than the WUWT post?

    If not, I give ’em a zero on the exercise.

    Thanks @Tamino.

    But it bugs me: If they didn’t show the work and were just declaring 0.10⁰C/decade, why not pick GISTEMP as the source of the data? Are they so devious and malicious that they picked what to many might be an obscure series to analyze and upon which to base their conclusions?

    What would be a blast is to dissect their supposed calculation and see where they erred. I somehow think that’s not what it’s about. I think it’s tweet fodder.

  5. “and there’s a clear “pause” between the late 1990s and the beginning of the recent El Niño.”
    Of course they fail to mention the El Niño in the late 1990s, when the pause is supposed to have started.

  6. Noteworthy is that this piece by Maue and Michaels is more than two years old. My impression is that Maue has since left the “dark side” of science, probably convinced by the data. Generally, I think he is decent with science today.
    Following him on Twitter, one can often see him defending climate science against the various crazy people who still follow him because of his old reputation.
    I often cite him when he correctly claims that the CFSR/CFSv2 reanalysis is crap because of a cooling version break in 2011. (Cherry-picking climate deniers have of course found this dataset and love it.)
    As I remember, he indirectly called his boss Joe Bastardi something like an “ideologically motivated nutter” and left WeatherBell soon after.

  7. Tamino, what method do you use when correcting OLS for autocorrelation?

  8. Dr. Foster, thank you for the response rebutting the claims of Michaels and Maue. I knew what they were saying was trash, but based on your published work I thought you could give a more competent analysis than me.

    I don’t want to seem like I’m ordering you around. But if you’re interested, there are some temperature trend forecasts from the GWPF denialists Judith Curry and Anastasios Tsonis that are even worse than the nonsense you previously debunked from Curry. I was going to post a rebuttal to them in a few weeks, after all the 2019 surface data came in, but I’m fine with you scooping me now.

    Curry+Tsonis basically predicted that, due to a climate shift, the warming of the 1980s and 1990s was over, and/or post-2002 cooling would occur. Their predictions are from around 2013, giving some time for more years of evidence to accumulate debunking them. So for whenever you’re interested and have time, here are links to the predictions:

    “A year earlier, Jan 2011, I made it pretty clear that I supported Tsonis’ argument regarding climate shifts and a flat temperature trend for the next few decades”

    The 97% ‘consensus’: Part II

    “This period since 2002 is scientifically interesting, since it coincides with the ‘climate shift’ circa 2001/2002 posited by Tsonis and others. This shift and the subsequent slight cooling trend provides a rationale for inferring a slight cooling trend over the next decade or so, rather than a flat trend from the 15 yr ‘pause’.”

    Week in review

    “Professor Anastasios Tsonis, of the University of Wisconsin, said: “We are already in a cooling trend, which I think will continue for the next 15 years at least. There is no doubt the warming of the 1980s and 1990s has stopped.””
    https://www.telegraph.co.uk/news/earth/environment/climatechange/10294082/Global-warming-No-actually-were-cooling-claim-scientists.html

    • Oh, I forgot to mention: the period from the beginning of 2002 to the end of 2019 covers 18 years. So in case Curry’s defenders whine that this is too short a period of time for evaluating Curry’s and Tsonis’ claims, I leave them with Curry’s own words:

      “I understand that 15 years is too short, but the climate model apostles told us not to expect a pause longer than 10 years, then 15 years, then 17 years. Looks like this one might go another two decades.”

      The 97% ‘consensus’: Part II

      “3) Periods meeting the criteria of either 1) or 2) are particularly significant if they exceed 17 years, which is the threshold for very low probability of natural variability dominating over the greenhouse warming trend.”

      Hiatus controversy: show me the data

      Thus it would be special pleading for Curry’s “apostles” to adhere to Curry’s claims while complaining that the 18-year post-2002 period is too short.

      • 17 years is ABSOLUTELY NOT any such “threshold for very low probability of natural variability dominating over the greenhouse warming trend” when multiple comparisons are being made. That statement is completely wrong. In point of fact, if I sample enough data, finding 17-year stretches that appear to show no warming becomes virtually certain. Like so-called “significant” runs of heads in a long series of coin flips.

        Curry actually said this?

      • Re: “Curry actually said this?”

        Yes, she did. Dr. Foster addressed it here:

        “The “17-year” thing is based on faulty analysis by others, which ignores the multiple testing problem. But I don’t expect Judith Curry to get that either.”

        Judith Curry’s Brain goes on Hiatus

        The “faulty analysis” being referred to, and on which Curry is likely relying, is this paper: 10.1029/2011JD016263. You can tell that’s the paper she’s referring to, because she says the following elsewhere:

        “This is the issue addressed by Santer et al., searching for the AGW signal amidst the natural variability noise. Santer et al. argue that “Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.””

        Pause (?)

        Yes, Curry confused bulk tropospheric temperature trends with surface temperature trends, even though the former would likely have greater inter-annual variability than the latter, due to factors such as tropospheric amplification, as per the negative lapse rate feedback (a.k.a. the hot spot).

        Dr. Foster can correct me if I’m wrong, but I think the “multiple testing problem” (or “selection bias”) he’s referring to is akin to p-hacking, in which one keeps looking around in data to mine a desired result, without taking into account how those multiple chances at getting said result affect the statistical significance of finding that result. For more context on that, folks can read sources such as: 10.1088/1748-9326/aaf342/meta , 10.1088/1748-9326/aa6825 , 10.1016/j.earscirev.2018.12.005.
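        As a hedged complement to the coin-flip example in the following comment (synthetic numbers only, not real data), the same multiple-testing issue can be shown directly in trend form:

        ##############################################
        # Even with a perfectly constant underlying trend, overlapping
        # 17-year windows give a wide spread of apparent rates, so hunting
        # through windows (multiple testing) can always turn up a "slowdown".
        set.seed(42)
        n.years <- 60
        y <- 0.017*(1:n.years) + rnorm(n.years, sd = 0.15)   # trend + noise
        slope17 <- sapply(1:(n.years - 16), function(s) {
          t <- s:(s + 16)
          coef(lm(y[t] ~ t))["t"]
        })
        range(10*slope17)   # apparent 17-year rates, in degC/decade
        ##############################################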

      • Re. multiple comparisons…

        Run the following R code:

        ##############################################
        # Generate 100 1’s and 0’s,
        # Run-length encode to check for “significant” runs (length >= 5)
        set.seed(12345)
        rle(sample(c(0,1), 100, replace = TRUE))[[1]]
        ##############################################

        I'd be curious to see how Curry or any other "pausist" "explains" the (count them!) FIVE "significant" runs of 1's and 0's here in a string of 100 random 1's and 0's. One run is 8 in a row and is "highly significant" at p < .004. However, I certainly see precisely ZERO reason to cite ANY of these runs as examples of "Periods meeting the … threshold for very low probability of natural variability dominating over the greenhouse warming trend.” Not only are such periods not infrequent, they are statistically quite expectable by chance alone given the size of the residual error compared to the trend CI.

        See the Skeptical Science Escalator for an example using actual climate data. https://skepticalscience.com/graphics/Escalator500.gif

  9. Øyvind Seland

    Any ideas on how to make people understand that if you look at single climate model runs, rather than the ensemble average you usually see, the simulations may be as chaotic as the measurements? Climate models have many weaknesses, but they do produce El Niños and 10-year “flat” temperature periods.
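    One hedged way to illustrate that point (purely synthetic numbers, not actual model output) is to simulate many “runs” with the same forced trend but different noise:

    ##############################################
    # Individual runs (same forced trend, different noise) can show flat
    # decades, while the average of many runs tracks the forced trend.
    set.seed(7)
    n.years <- 40; n.runs <- 20
    t    <- 1:n.years
    runs <- replicate(n.runs, 0.02*t + arima.sim(list(ar = 0.5), n.years, sd = 0.1))
    ens.mean <- rowMeans(runs)

    decade.trend <- function(y, s) {       # trend over years s..s+9
      idx <- s:(s + 9)
      coef(lm(y[idx] ~ idx))["idx"]
    }
    range(apply(runs, 2, decade.trend, s = 11))   # single runs: wide spread
    decade.trend(ens.mean, s = 11)                # ensemble mean: near 0.02
    ##############################################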

  10. It appears to me that Pat Michaels and Ryan Maue confused ‘average temperature anomaly’ with ‘average warming rate’. Their “0.10” sure looks like it could be the average of the displayed temperature anomalies.