Climate deniers Ross McKitrick and John Christy have published an article in the Journal of Hydrology which demonstrates, to those who know what they’re doing, that McKitrick and Christy don’t. If you really don’t know what you’re doing, you might think this paper is impressive. If you do know what you’re doing, this paper is a supreme embarrassment to its authors. It’s a supreme embarrassment to the reviewers who approved its publication; they too can’t know what they’re doing. This paper is that bad.
Their theme is that rainfall time series show long-term persistence (“long memory”), which can make their statistical properties very hard to pin down: the uncertainties become so large that trend analysis is supremely difficult. How do they show long-term persistence? They start by saying this:
Figure 8 shows the first 200 lag autocovariances (except the first lag) of the two proxy series. For comparison, the dashed lines show the corresponding autocovariances of a 1-lag autoregressive  (AR1) model with an AR coefficient of 0.9. Even though that would be considered a very “red” series, namely one with strong autocorrelation, it is nonetheless clear that its autocovariances decay exponentially and by about 75 lags they have vanished, yet those of the proxy drought series exhibit no such tendency. This indicates, in an informal way, that the drought series exhibits “long memory” which we will herein characterize using an LTP model.
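For reference, the lag-k autocorrelation of an AR(1) process with coefficient phi is simply phi to the power k, so the decay the quoted passage describes is ordinary exponential decay. A quick sketch (the coefficient 0.9 is taken from the quote; the specific lags printed are just illustrative):

```python
# Theoretical lag-k autocorrelation of an AR(1) process with
# coefficient phi is phi**k: it decays exponentially with lag.
phi = 0.9
for k in (1, 10, 25, 50, 75):
    print(f"lag {k:3d}: autocorrelation = {phi**k:.5f}")
```

By lag 75 the value is below 0.0004, consistent with the quote’s observation that the AR(1) autocovariances have essentially vanished by then.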
Here’s the same “analysis” of a different time series (of the same length):
By McKitrick and Christy’s logic, this too indicates that the series exhibits “long memory.” The problem is, it doesn’t. The data series is what’s called “white noise,” the simplest kind of noise, with no autocorrelation at all; every true autocorrelation is zero. I know, because this series came from the random-number generator on my computer.
How, then, do its estimated autocovariances show the same behavior? Simple. The values I plotted (and those shown by McKitrick and Christy) are only estimates of the actual autocorrelation. If you know what you’re doing in statistics, you know this. You even know how to calculate just how uncertain those estimates are. The true autocorrelations really are zero, but of course the estimates are not exactly zero. They too are random, with uncertainties given by well-known formulae.
You want proof? If you have the chops, generate your own random (white noise) time series and do the calculation yourself, then announce the result to the entire world. McKitrick and Christy: I dare you.
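The experiment takes only a few lines. A minimal sketch, assuming a series length of 500 (the actual proxy series length isn’t reproduced here) and a fixed seed for reproducibility:

```python
import numpy as np

# Generate pure white noise: every true autocorrelation is zero.
rng = np.random.default_rng(42)
n = 500
x = rng.standard_normal(n)
x = x - x.mean()  # mean-subtract before estimating autocorrelations

# Sample autocorrelation estimates at lags 1..200.
lags = np.arange(1, 201)
r = np.array([np.dot(x[:-k], x[k:]) for k in lags]) / np.dot(x, x)

# For white noise, each estimate is approximately normal with
# standard error 1/sqrt(n) (Bartlett's formula), even though the
# true values are all exactly zero.
se = 1.0 / np.sqrt(n)
print(f"standard error of each estimate: {se:.4f}")
print(f"largest |estimate| over 200 lags: {np.abs(r).max():.4f}")
```

For white noise the estimates should mostly stay within about two standard errors of zero, but over 200 lags a few excursions beyond that band are expected by chance alone; that random wandering is exactly what can fool the eye into seeing “long memory.”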
I took their time series for the Pacific Coast (the “PC” series) and I fit an actual AR(1) model (the one McKitrick and Christy deny). I then took the residuals from that model, and applied an actual statistical test for evidence of any other autocorrelation. It’s called the “Box-Pierce test” (or you can use the Box-Ljung test and get the same result). What does it say? That the p-value is 0.8848. If it were low — below 0.05 — we’d have evidence that there is autocorrelation besides the AR(1) we already removed. But a p-value of 0.8848 means there’s no evidence. None. Nada. Zip. Squat. The p-value is so high that it’s evidence of the lack of further autocorrelation.
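Their PC series isn’t reproduced here, so as a stand-in the sketch below simulates a pure AR(1) series (coefficient 0.6 and length 500 are assumed purely for illustration), fits the AR(1) coefficient by lag-1 least squares, and computes the Box-Pierce statistic on the residuals. The p-value won’t match the paper’s 0.8848, but under the null hypothesis of no remaining autocorrelation it should typically be unremarkable, not small:

```python
import numpy as np
from scipy import stats

# Simulate an AR(1) process: x[t] = phi * x[t-1] + white noise.
rng = np.random.default_rng(0)
n, phi_true = 500, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()

# Fit AR(1) by regressing x[t] on x[t-1] (least squares).
phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
resid = x[1:] - phi_hat * x[:-1]
resid = resid - resid.mean()

# Box-Pierce statistic: Q = n * sum of squared residual
# autocorrelations over the first m lags. Under the null of no
# remaining autocorrelation, Q is approximately chi-square with
# m - 1 degrees of freedom (one AR parameter was estimated).
m = 20
nr = len(resid)
r = np.array([np.dot(resid[:-k], resid[k:]) for k in range(1, m + 1)])
r = r / np.dot(resid, resid)
Q = nr * np.sum(r ** 2)
p = stats.chi2.sf(Q, df=m - 1)
print(f"phi_hat = {phi_hat:.3f}, Q = {Q:.2f}, p-value = {p:.3f}")
```

A high p-value here means the residuals are consistent with no autocorrelation beyond the AR(1) already removed; a Ljung-Box variant (which reweights the squared autocorrelations by nr/(nr - k)) gives essentially the same answer for series this long.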
This is basic, folks. This is first-year-course-in-time-series-analysis basic. This is “if you really know how autocorrelation estimates work you would never make such a ridiculous claim” basic. And that’s just the tip of the iceberg. This paper is that bad.
What can we conclude? I see only two possibilities.
1: McKitrick and Christy don’t know what they’re doing. That’s bad.
2: McKitrick and Christy do know what they’re doing. That’s worse.
This blog is made possible by readers like you; join others by donating at My Wee Dragon.