Sheldon Walker seems to be desperate — desperate to believe that global warming exhibited a “slowdown” recently. His latest attempt to prop up his faulty belief is a new post at WUWT titled “*Proof that the recent global warming slowdown is statistically significant (at the 99% confidence level)*”. He mainly demonstrates that he has a lot to learn about statistics, but isn’t learning, and doesn’t know how inadequate his own knowledge is.

His latest attempt amounts to this: estimate the warming rate of global temperature (data from NASA) over 10-year periods (actually 10 years and 1 month, but that’s not really important in this case), then test whether each 10-year+ rate differs from the average rate over the period from January 1970 through January 2017 (0.01782 deg.C/yr). He performs his tests at the 99% confidence level (*p*-value threshold 0.01) rather than the more customary 95% confidence level (*p*-value threshold 0.05) in order to have greater confidence in the results. All analyses are performed with Microsoft Excel.
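As I understand it, his procedure can be sketched like this (a minimal sketch, using synthetic monthly data in place of the NASA series; only the reference rate of 0.01782 deg.C/yr is taken from his post, and the standard error used is the white-noise one, which is the heart of the problem):

```python
import numpy as np
from scipy import stats

REF_RATE = 0.01782 / 12  # reference rate from the post, in deg C per month

def window_test(y, alpha=0.01):
    """Test whether a window's OLS trend differs from REF_RATE,
    using the white-noise standard error (as Excel computes it)."""
    t = np.arange(len(y), dtype=float)
    res = stats.linregress(t, y)
    tstat = (res.slope - REF_RATE) / res.stderr
    p = 2 * stats.t.sf(abs(tstat), df=len(y) - 2)
    return p < alpha

# Synthetic stand-in for the data: steady warming at the reference
# rate plus white noise (the real data's noise is NOT white).
rng = np.random.default_rng(1)
months = 47 * 12
series = REF_RATE * np.arange(months) + 0.1 * rng.standard_normal(months)

# Slide a 121-month (10 years + 1 month) window along the series and
# flag windows whose trend "differs" from the reference at the 99% level.
flags = [window_test(series[i:i + 121]) for i in range(months - 121)]
print(sum(flags), "of", len(flags), "windows flagged")
```

With genuinely white noise, flagged windows are rare, as the nominal 1% level promises; the rest of the post explains why the real data don't behave this way.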

Then he graphs the results, with periods showing *significantly* faster-than-average warming as red dots and periods showing *significantly* slower-than-average warming in blue, with periods for which the 10+ year rate is not statistically different from average in gray. And here’s his plot:

His discussion focuses on the set of 5 “final years” in a row near the end for which the warming rate over the 10+ year span is “significantly” slower than the average. This, he believes, establishes the “proof” he promised. He doesn’t say explicitly whether or not he regards the years with “significantly” faster-than-average warming as meaningful; he’s primarily interested in claiming to have proved a slowdown. I wonder whether he actually believes there were both? I wonder whether he wonders why there are *so many* slowdowns and speed-ups? I wonder whether he has thought about how it is that the 10+ year period ending with 2015 is “significantly” slower than average while that ending with 2017 is “significantly” faster? Those two time spans overlap by 8 years out of 10 … yet one is “significantly” above average, the other “significantly” below average?

If he’s reading this, I wonder whether he wonders why I keep putting the word “significantly” in quotes?

The reason is: what he calls “significant” isn’t.

Computer programs that fit trend lines, including Microsoft Excel and R, test for statistical significance and also compute uncertainties in the rate of increase or decrease. But they base these calculations on the assumption that the *noise*, the *random* part of the data, is what statisticians call *white noise*. That’s the simplest kind of noise, in which each noise value is uncorrelated with all the other noise values. When the noise really is that way, their calculations are correct.

But not all noise is the same. Noise values actually can be correlated with nearby noise values and still be just noise, just random. Correlation between noise values is called *autocorrelation*. It changes the behavior of analysis like fitting a trend line. If the autocorrelation is positive — as it is for global temperature — then the uncertainty in a trend estimate is higher than what you’d calculate for white noise. If the autocorrelation is strong — as it is for monthly average global temperature — the uncertainty in a trend estimate is *much* higher than what you’d calculate for white noise. And, the statistical significance of a test is *much* less than what you’d calculate for white noise.
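The effect is easy to demonstrate by simulation (a sketch, not any particular published analysis): fit a trend to noise that has *no trend at all*, and count how often the white-noise test declares the slope “significant.” For white noise the false-alarm rate matches the nominal level; for positively autocorrelated AR(1) noise it is far higher:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def slope_pvalue(y):
    """p-value for a zero trend, computed under the white-noise assumption."""
    return stats.linregress(np.arange(len(y), dtype=float), y).pvalue

def ar1_noise(n, phi, rng):
    """AR(1) noise: x[i] = phi * x[i-1] + e[i]. Pure noise, no trend."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.standard_normal()
    return x

trials, n, phi = 1000, 121, 0.7  # 121 months; phi chosen for illustration
fp_white = np.mean([slope_pvalue(rng.standard_normal(n)) < 0.05
                    for _ in range(trials)])
fp_ar1 = np.mean([slope_pvalue(ar1_noise(n, phi, rng)) < 0.05
                  for _ in range(trials)])
print("false alarms, white noise:", fp_white)  # near the nominal 0.05
print("false alarms, AR(1) noise:", fp_ar1)    # several times higher
```

The data are trendless by construction, so every “significant” slope here is a false alarm; autocorrelation manufactures them wholesale.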

That’s why the values he claims are “significant,” aren’t.

This has been often discussed with regard to global temperature. It’s well known — by those who know what they’re doing. It’s not known by Sheldon Walker.

He has commented here before, specifically about the non-existent “slowdown” he’s so desperate to believe was real. I’ve posted on that topic, including a response to his specific claims. For example, I said in this post:

If you apply some actual statistics to the trend estimates, you’ll find that none of the departures from constant-warming are significant. The statistics can be pretty tricky because the noise isn’t the simplest kind (referred to as “white noise”), it’s autocorrelated noise, and there are other statistical issues too (like “broken trends” and the “multiple testing problem”). But when done right, one finds that (to repeat myself) none of the departures from constant-warming are significant.

Sheldon Walker ignored the effect of autocorrelation on the uncertainty of trend estimates and their statistical significance. By doing so, he came to the false conclusion that he had “proved” the non-existent “slowdown” actually existed. A few comments on his blog post have pointed out the need for an autocorrelation correction, but so far he hasn’t addressed that. Sorry, Sheldon, ignoring it won’t make it go away.
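For the record, one common form of the correction (an illustrative sketch assuming AR(1) residuals, not necessarily the exact method used in any particular analysis) inflates the white-noise standard error of the trend by the factor sqrt((1 + r1)/(1 − r1)), where r1 is the lag-1 autocorrelation of the residuals:

```python
import numpy as np
from scipy import stats

def trend_with_ar1_correction(y):
    """OLS trend with its white-noise standard error and a corrected
    standard error accounting for AR(1) autocorrelation of the residuals."""
    t = np.arange(len(y), dtype=float)
    res = stats.linregress(t, y)
    resid = y - (res.intercept + res.slope * t)
    r1 = float(np.corrcoef(resid[:-1], resid[1:])[0, 1])
    r1 = max(r1, 0.0)  # only inflate for positive autocorrelation
    inflation = np.sqrt((1 + r1) / (1 - r1))
    return res.slope, res.stderr, res.stderr * inflation

# Illustration on synthetic data: a real trend buried in AR(1) noise.
rng = np.random.default_rng(7)
n, phi = 240, 0.7  # phi chosen for illustration
noise = np.empty(n)
noise[0] = rng.standard_normal()
for i in range(1, n):
    noise[i] = phi * noise[i - 1] + rng.standard_normal()
y = 0.002 * np.arange(n) + 0.1 * noise
slope, se_white, se_corrected = trend_with_ar1_correction(y)
print(slope, se_white, se_corrected)  # corrected error bar is much wider
```

With a corrected (wider) error bar, apparent departures from the long-term rate stop looking “significant,” which is exactly the point.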

IF he learns something from this, then he’ll realize his mistake. IF he’s intellectually honest, he will admit it — not buried in the comment section or as a “correction” at the end of his post, but right at the top of his blog post, announcing in no uncertain terms that his analysis is wrong and his conclusion is wrong.

What are the odds?

This blog is made possible by readers like you; join others by donating at Peaseblossom’s Closet.

If I recall correctly (I do), there was a post in the not too distant past at WUWT that complained about the autocorrelation in temperature series when you were doing the continuous linear trend change analysis with Chow tests. The complaint then was that autocorrelation (at least strong AR components) makes spurious trends appear in the data, so how could the Chow test be properly applied?

It’s surprising that not only does this post contradict the spirit of that post (OK, that’s not surprising), but that they got it wrong in both circumstances anyway (OK, that’s not surprising either). Because in the first case you were demonstrating a *lack* of trend change across the 1970-present date range, and it showed that *in spite* of autocorrelation; and in this post our author is trying to demonstrate there *is* trend change, and he shows that by ignoring autocorrelation. It’s topsy-turvy over there.

Exogenous variables further complicate these matters. Eventually someone at WUWT will attempt such a correction I suppose. Maybe they’ll even do it correctly. They’ll only be the better part of a decade behind everyone else (they are already).

Reminds me of the statistics quote:

“[he uses] statistics in the same way that a drunk uses lamp-posts — for support rather than illumination.”

If you’ve seen a lot of graphs like this one, especially of white noise, it’s instantly obvious that there’s some degree of autocorrelation. In most cases, if the value goes above the mean, it stays there for 4 or 5 more points, and the same for negative variations. You don’t even have to calculate anything to see it. Tamino could probably have spotted it with his eyes closed.

One way to vividly see how much of an effect there is: take a white noise data set of the same length, and smooth it, say using LOWESS. Smoothing introduces correlation, and you can calculate the standard deviation to see how much it’s been reduced. Try three points as the smoothing width, just as an example. Then rescale the smoothed data so that its S.D. matches the original data series. Plot the original and rescaled smoothed data on top of each other, and the difference in the character of the two will be very clear. The smoothed data will look pretty much like the plot Tamino showed, and the original will be much more jagged, with few regions where the value stays up or down by more than one or two points.
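That experiment is easy to run (a sketch, with the simplifying assumption of a 3-point moving average standing in for LOWESS; the effect is the same in kind):

```python
import numpy as np

rng = np.random.default_rng(42)
white = rng.standard_normal(500)

# Smooth with a 3-point moving average; smoothing reduces the
# standard deviation and induces autocorrelation.
smooth = np.convolve(white, np.ones(3) / 3, mode="valid")

# Rescale so the smoothed series has the same S.D. as the original;
# its character (long runs above or below the mean) still differs.
smooth *= white.std() / smooth.std()

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a series."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

r_white = lag1_autocorr(white)    # near zero
r_smooth = lag1_autocorr(smooth)  # strongly positive, about 2/3
print(r_white, r_smooth)
```

Plot the two series on top of each other and the difference in character is immediate, just as described: same standard deviation, very different persistence.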

He’s ignoring the obvious story in that graph, which is that it’s been warming consistently during the entire period of study, at times at rates of ~4 degrees C per century. (With the second-highest rate coming in 2017.)

I also remember when Anthony Watts rejected the results of BEST (after saying he would accept them no matter the result) in part because they didn’t use an AR(1) model: https://wattsupwiththat.com/2011/10/21/a-mathematicians-response-to-best/

A response on another message board from my nemesis, amstocks82.

http://www.investorvillage.com/smbd.asp?mb=11227&mn=39538&pt=msg&mid=17883164

I’m not allowed to post on IV anymore. The owner kicked me out with no explanation.

Pretty funny, in a D-Kish way.

Ah. So instead of a pause (i.e. no warming) they’re now talking about a slowdown (i.e. first derivative). I wonder when they’ll exhaust that too, and switch to the second derivative.

Isn’t there another, likely bigger, and more fundamental, problem w his post? Namely, that his datapoints reflect “warming RATE”? That immediately jumped out to me as a potential bad-faith sleight-of-hand by him. If one wants to test for warming, one simply tests for a positive trend in temperatures. It’s still warming even if it’s a constant rate of warming. Right?

[Response: He’s not testing for “warming,” he’s testing for a warming rate which is different from the 1970-2017 average rate. So I don’t regard his use of rate as in any way bad faith. I probably should have used the word “slowdown” rather than “pause” in the post title, to avoid confusion about that.]

“I wonder when they’ll exhaust [argument A], and switch to [something else]”

Why, when it’s convenient to either a) affirm their bias, or b) deflect from their failures, or c) peddle their FUD to the unsuspecting, of course.

That’s just so adorable. The completely imaginary “pause” was statistically significant! And with 99% confidence! Free Market Justice Warriors freak me out.

“He performs his tests at the 99% confidence level (p-value 0.01) rather than the more customary 95% confidence level (p-value 0.05) in order to have greater confidence in the results.”

Grumble. But, I think, Tamino is just trying to be kind.