Climate deniers Ross McKitrick and John Christy have published an article in the Journal of Hydrology which demonstrates, to those who know what they’re doing, that McKitrick and Christy don’t. If you really don’t know what you’re doing, you might think this paper is impressive. If you do know what you’re doing, this paper is a supreme embarrassment to its authors. It’s a supreme embarrassment to the reviewers who approved its publication — they too can’t know what they’re doing. This paper is **that** bad.

Their theme is that rainfall time series show *long-term persistence* (“long memory”), which can make their statistical properties incredibly hard to pin down — uncertainties are so high it makes trend analysis supremely difficult. How do they show long-term persistence? They start by saying this:

Figure 8 shows the first 200 lag autocovariances (except the first lag) of the two proxy series. For comparison, the dashed lines show the corresponding autocovariances of a 1-lag autoregressive [12] (AR1) model with an AR coefficient of 0.9. Even though that would be considered a very “red” series, namely one with strong autocorrelation, it is nonetheless clear that its autocovariances decay exponentially and by about 75 lags they have vanished, yet those of the proxy drought series exhibit no such tendency. This indicates, in an informal way, that the drought series exhibits “long memory” which we will herein characterize using an LTP model.
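The quoted claim about AR(1) decay is at least correct on its own terms: an AR(1) process with coefficient φ has theoretical autocorrelation ρ(k) = φ^k, which for φ = 0.9 has effectively vanished by lag 75. A quick sanity check (a sketch in Python; this is just the textbook formula, not anything taken from the paper):

```python
# Theoretical autocorrelation of an AR(1) process: rho(k) = phi**k,
# which decays exponentially in the lag k.
phi = 0.9
for k in (1, 10, 25, 50, 75):
    print(f"lag {k:3d}: rho = {phi**k:.6f}")
```

With φ = 0.9, ρ(75) is about 0.0004, consistent with the statement that the AR(1) autocovariances have vanished by about 75 lags.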

Here’s the same “analysis” of a different time series (of the same length):

By McKitrick’s and Christy’s logic, this too indicates that the series exhibits “long memory.” The problem is, it doesn’t. The data series is what’s called “white noise.” It’s the simplest kind of noise, which doesn’t have any autocorrelation at all; all the real autocorrelations are zero. I know, because this series came from the random-number generator on my computer.

How then does it show the same behavior of its autocovariance? Simple. The values I plotted (and those shown by McKitrick and Christy) are only *estimates* of the actual autocorrelation. If you know what you’re doing in statistics, you know this. You even know how to calculate just *how* uncertain those estimates might be. The true autocorrelations really are zero, but **of course** the estimates are not exactly zero. They too are random, with uncertainties given by well-known formulae.
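For white noise of length n, each estimated autocorrelation has a standard error of roughly 1/√n, so about 95% of the estimates land inside ±1.96/√n, yet essentially none of them is exactly zero. A minimal sketch of the point in Python (numpy only; the series, seed, and variable names are mine):

```python
import numpy as np

# 2000 values of pure white noise: every true autocorrelation is zero.
rng = np.random.default_rng(0)
n, nlags = 2000, 200
x = rng.standard_normal(n)
x = x - x.mean()

# Sample autocorrelations at lags 1..200 (the standard estimator).
denom = np.dot(x, x)
r = np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, nlags + 1)])

# Each estimate has standard error ~ 1/sqrt(n); about 95% should fall
# inside the +/- 1.96/sqrt(n) band, but none of them is exactly zero.
band = 1.96 / np.sqrt(n)
inside = np.mean(np.abs(r) < band)
print(f"95% band: +/-{band:.4f}; fraction of estimates inside: {inside:.3f}")
```

Plot those 200 nonzero estimates on a zoomed-in axis and, by the paper’s “informal” reasoning, white noise has long memory too.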

You want proof? If you have the chops, generate your own random (white noise) time series and do the calculation yourself, then announce the result to the entire world. **McKitrick and Christy: I dare you**.

I took *their* time series for the Pacific Coast (the “PC” series) and I fit an actual AR(1) model (the one McKitrick and Christy deny). I then took the residuals from that model, and applied an actual statistical *test* for evidence of any other autocorrelation. It’s called the “Box-Pierce test” (or you can use the Box-Ljung test and get the same result). What does it say? That the *p*-value is 0.8848. If it were low — below 0.05 — we’d have evidence that there is autocorrelation besides the AR(1) we already removed. But a *p*-value of 0.8848 means there’s *no* evidence. None. Nada. Zip. Squat. The *p*-value is so high that it’s evidence of the *lack* of further autocorrelation.
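The same check is easy to reproduce without their data. Here is a sketch in Python (numpy/scipy; since I don’t have the PC series at hand, I simulate an AR(1) series as a stand-in, and hand-roll the Box-Pierce statistic rather than use any particular package):

```python
import numpy as np
from scipy.stats import chi2

def box_pierce(x, h, fitted_params=0):
    """Box-Pierce statistic: Q = n * sum of squared autocorrelations, lags 1..h."""
    x = x - x.mean()
    n = len(x)
    denom = np.dot(x, x)
    r = np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, h + 1)])
    Q = n * np.sum(r**2)
    return Q, chi2.sf(Q, df=h - fitted_params)

# Simulate an AR(1) series (a stand-in for the PC series, which I don't have).
rng = np.random.default_rng(1)
n, phi = 1200, 0.5
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

# Fit AR(1) by least squares, then take the residuals.
phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
resid = x[1:] - phi_hat * x[:-1]

Q_raw, p_raw = box_pierce(x, 20)                       # raw series: obvious autocorrelation
Q_res, p_res = box_pierce(resid, 20, fitted_params=1)  # residuals: nothing left
print(f"raw series: p = {p_raw:.2e}; AR(1) residuals: p = {p_res:.4f}")
```

The raw series fails the test spectacularly (tiny *p*-value), while the AR(1) residuals pass: once the AR(1) structure is removed, there is no evidence of anything more.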

This is basic, folks. This is first-year-course-in-time-series-analysis basic. This is “if you really know how autocorrelation estimates work you would **never** make such a ridiculous claim” basic. And that’s just the tip of the iceberg. This paper is **that** bad.

What can we conclude? I see only two possibilities.

**1**: McKitrick and Christy don’t know what they’re doing. That’s **bad**.

**2**: McKitrick and Christy do know what they’re doing. That’s **worse**.

This blog is made possible by readers like you; join others by donating at My Wee Dragon.

Worst of all is that it passed peer review.

That is embarrassing. I checked the paper because I found it so hard to believe. Did they never compute an autocorrelation function before and play around a bit to see how it works with a few idealized datasets? If you develop your own home-brew method to estimate LRD, you would think that you would test it. Utterly bizarre.

The reviewers apparently never brought it up. The editor, Andras Bardossy, is a good statistician who works a lot on such statistical estimates; he would have understood that that home-brew method does not work.

The paper also applies a test from the literature I had not heard of before, a very old one from 1994, as if nothing has improved in numerical methods in the last 25 years.

That being said, some LRD is quite normal for observational datasets. Whether that is due to the climate system or due to observational errors is another matter. For example, in the case of temperature station data, the long-range dependence parameter was halved by homogenization (and homogenization is not perfect, so likely even more of it is observational error). In the case of tree proxies you would expect that trees coming in and out of the composite series create correlations over the lifespans of the trees.

Rust, H.W., O. Mestre, and V.K.C. Venema. Less jumps, less memory: homogenized temperature records and long memory. JGR-Atmospheres, 113, D19110, 2008. https://doi.org/10.1029/2008JD009919

As someone who has published in the Journal of Hydrology, and has considered it to be one of the best journals in the field, I am really disappointed.

I peer review occasionally for AMS and ASCE journals. I can’t help but wonder if I would have been able to identify the error here. All the R values under +/- 0.05 should have been a red flag that that figure is not what it looks like on first glance.

I can’t access the actual paper yet … is that really the primary argument to support their conclusions?

It is a good journal. I guess mistakes happen everywhere.

Not sure whether one is allowed to link to it, but there is this hub where you can download nearly all scientific articles. Handy when at home.

[Response: OK by me to link to it.]

Since Victor seems to have forgotten: it is sci-hub.tw

(see the Wikipedia entry for more info: https://en.wikipedia.org/wiki/Sci-Hub)

“Climate deniers Ross McKitrick and John Christy have published an article in the Journal of Hydrology”

When the first half of your first sentence alone is enough to give away the plot…

The advances will be made when we find strong correlations in what appear to be erratic data such as ENSO

I was horrified to find a much-upvoted secondhand version of this, quoted by Christy, on Quora. Sadly, swarms of misinformation are on the rise everywhere. They get away with it because it looks “technical” and many readers can’t tell the difference; readers just like the message, and they bring their fans along. Quora used not to have so much of this kind of thing. I reported it, but denial’s “approval” rate is on the rise in places where they can get away with it. I wish I had had this evaluation to reference; thanks.

“All the R values under +/- 0.05 should have been a red flag that that figure is not what it looks like on first glance.”

Yup. Here’s another way to show their “logic” using random data and the standard acf() function (in R).

—————

# construct a series of 2000 random values,
# run acf() with lag.max set to 200
Obj = acf(runif(2000), lag.max = 200, plot = FALSE)

# standard acf() plot but with points instead of vertical bars
plot(Obj, type = "p")

# same plot with the lag-0 value of 1 omitted as they do (very likely
# to help obfuscate things) and ylim scaled as they do
plot(Obj, type = "p", ylim = c(-0.1, 0.1))

—————

Look familiar? (Well, except that the CIs are included here, unlike in M&C.)

2 classic denier strategies going on here:

1. “Accidentally” neglecting to include error bars.

2. “Accidentally” failing to use full and/or correct context.

Or…maybe runif() exhibits “long memory”?! Just the sort of thing those dishonest scientists would do to fool the public.

Also, I note at first glance at their code that rain seems defined as greater than 25.4 mm. It might be nice to explore other values to provide a fuller picture.

Lastly, I cannot find the required files “data/names.csv” and “data/names2.csv” on first examination of their code and published supplementary data files. Since there are 21 named sheets for 20 locations in the Excel file, I guess I need to dig around a bit more to isolate the 2 sets of 10 locations. That said, I’m pretty sure once I access the full data we will see much the same acf() output or close to it.

Added: Not sure yet what the data is in the “proxy.txt” file they post, but running acf() as above on col 2 of it produces the same basic results.

[Response: Those are the long-term estimates of PMDI (Palmer Modified Drought Index) for the Pacific coast (PC) and southeast (SE). With them, your procedure (same as mine) can reproduce their figure 8 exactly.]

Ahh… so PC is Pacific and SE is SouthEast. Examining col 4 shows a single significant autocorrelation at lag 1 before going insignificant.

Sounds too easy to NOT submit a comment to the journal, Tamino. And if that is too much work…submit one to PubPeer. The authors get notified if you fill out their e-mail addresses, and it will be available to anyone for all eternity.

Initial impression is that you have found something here, Tamino. I am surprised, as McKitrick is known for his solid statistical skills and knowledge (at least that is his reputation, which I have had no reason to doubt). I haven’t read the paper so I don’t know how central this is to the paper’s main line of research and conclusions, but from reading the abstract it sounds like it is quite central.

Simply eyeballing their graphs and yours, the white-noise ACs average to about 0 (demonstrating clearly that there is nothing going on but randomness), but the Pacific ones clearly average positive (maybe about 0.02 or so) and the SouthEast ones show some patterns. Such low ACs hardly amount to anything, though, so it’s hard to know what claim one could make of it all.

McKitrick is known for “solid statistical skills”??? Is this the same McKitrick who doesn’t know the difference between measuring latitude in degrees vs. radians?

http://crookedtimber.org/2004/08/25/mckitrick-mucks-it-up/

He wouldn’t be the first nor the last to make an honest error reading degrees where it should be radians, or vice versa (I recall making the same error around 30 years ago in power-system modelling I was doing).

Joe H:

Did you tell the whole power system industry that they had all the science of power systems wrong, and publish the results internationally, before you discovered your error (or had it pointed out to you)?

Or did you do what most reasonable people do when they calculate a result that is much different from what most experts in the field think is correct, and say to yourself, “Hmmm. Maybe I should check my work carefully. I might have made a mistake.”

I have respect for people that do only one of those two things. I will leave it to you to guess which.

There’s also this one:

https://quantpalaeo.wordpress.com/2014/09/03/recipe-for-a-hiatus/

As Richard Telford notes – a recipe for a hiatus. Of course, I realize that there is the null hypothesis that McKitrick made a statistical mistake and the alternative hypothesis that his solid statistical skills were used to misinform…

If one wants to find out if a series displays long term persistence, auto-correlation analysis is not the way to go. People who do reservoir design determine Hurst coefficient by developing a so-called residual mass curve. It is quite surprising that Hurst’s work is almost never mentioned in climate science circles. Always time to learn.
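For the curious, the residual-mass / rescaled-range idea behind Hurst’s coefficient is simple to sketch. Below is a rough Python illustration (my own toy implementation, not the method from the paper or from reservoir-design practice): on white noise the estimated Hurst exponent should come out near 0.5 (R/S estimates are known to be biased somewhat high in finite samples), while genuine long-term persistence would push it well above 0.5.

```python
import numpy as np

def rescaled_range(x, s):
    """Average R/S over non-overlapping blocks of length s."""
    rs = []
    for i in range(len(x) // s):
        block = x[i * s:(i + 1) * s]
        dev = block - block.mean()
        z = np.cumsum(dev)        # cumulative departures ("residual mass")
        rng_ = z.max() - z.min()  # range of the cumulative departures
        sd = block.std()
        if sd > 0:
            rs.append(rng_ / sd)
    return np.mean(rs)

def hurst(x, sizes=(16, 32, 64, 128, 256)):
    """Estimate H as the slope of log(R/S) against log(block size)."""
    logs = np.log([rescaled_range(x, s) for s in sizes])
    return np.polyfit(np.log(sizes), logs, 1)[0]

rng = np.random.default_rng(0)
H = hurst(rng.standard_normal(4096))
print(H)  # white noise: should land near 0.5, not near 1
```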

So you’ve ruled out long term persistence with this?

You’re NUTS

I think what has been ruled out is inferring that this study using the published methodology provides even a tittle of evidence of long term persistence. One would be quite NUTS themselves to make such an inference from the methodology they employ.

But then many committed deniers are, in fact, NUTS. Especially those who write such things as: “I’m a thorough carbon dioxide climate change denialist and I wouldn’t believe in carbon dioxide induced climate change at gunpoint – because it is physically impossible.” TRULY nutty. And evidence of extreme crankery as well.

What is asserted without evidence can be dismissed without evidence.

I would suggest that you look at what the data say if you are smart enough.

Tamino, both you and they call it autocovariance in your graphs. Is it not, strictly speaking, autocorrelation in this instance?

[Response: Yes.]