Sheldon Walker commented on my most recent post about his most recent post. It began thus:
Sheldon Walker | February 7, 2018 at 12:10 am | Reply
Sheldon Walker: Oh, oh, I see, running away then. You yellow bastards! Come back here and take what’s coming to you. I’ll bite your legs off!
I’ve got to give you credit; you do have a sense of humor.
A quick update on some comments that you made.
1) That I would not be able to learn how to do a linear regression with a correction for autocorrelation, off the internet.
– I will give you half marks for this comment. I found most of the stuff on the internet about autocorrelation too technical. I wanted a practical example. So I had to work it out for myself. I triple checked my results to make sure that they were right. For each date interval I did 3 linear regressions:
1) first regression with no correction for autocorrelation. Used this to compare to the other regressions, to make sure that values were reasonable.
2) added lagged temperature anomaly. Did linear regression to work out how much autocorrelation was present. Over 741 regressions the average was about 0.58 (that figure is from memory, may be wrong). The amount of autocorrelation varied from about 0.4 to 0.7.
3) corrected the temperature anomaly for autocorrelation. Did another linear regression to make sure that it was gone. All 741 regressions had a residual autocorrelation of about 1.0 × 10^-15
2) That I would not learn how to do a linear regression with a correction for autocorrelation overnight.
– You get full marks for this comment. It took me 6 days. Considering that I have a full time day job, I think that 6 days is ok.
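For readers curious what the lagged-anomaly procedure described in point 1 looks like in practice, here is a minimal sketch on synthetic data. This is my own illustration, not the commenter's actual code; the AR(1) coefficient of 0.6 and the trend value are assumptions, chosen to fall in the range he reports.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho_true, trend = 240, 0.6, 0.0015   # 20 years of monthly data (assumed values)

# Synthetic "anomaly" series: linear trend plus AR(1) noise
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = rho_true * noise[t - 1] + rng.normal(scale=0.1)
t_axis = np.arange(n, dtype=float)
y = trend * t_axis + noise

def ols(X, y):
    """Least-squares fit with an intercept column prepended; returns coefficients."""
    A = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0]

# 1) plain regression, no correction -- the baseline for comparison
intercept_raw, slope_raw = ols(t_axis, y)

# 2) add the lagged anomaly as a regressor; its coefficient estimates
#    how much lag-1 autocorrelation is present
coef = ols(np.column_stack([t_axis[1:], y[:-1]]), y[1:])
rho_hat = coef[2]

# 3) quasi-difference with the estimated coefficient and re-fit;
#    the residual lag-1 autocorrelation should now be near zero
y_star = y[1:] - rho_hat * y[:-1]
x_star = t_axis[1:] - rho_hat * t_axis[:-1]
intercept_c, slope_corr = ols(x_star, y_star)
resid = y_star - (intercept_c + slope_corr * x_star)
r1 = float(np.corrcoef(resid[:-1], resid[1:])[0, 1])
print(f"estimated autocorrelation: {rho_hat:.2f}, residual lag-1: {r1:.3f}")
```

Note that on this example the residual autocorrelation comes out merely small, not zero to machine precision.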
I’m not convinced you’ve properly corrected for autocorrelation; some of the things you say make me suspect not. But I don’t know enough about the details to be sure.
You can save yourself a lot of trouble by studying annual averages instead of monthly averages. Their autocorrelation is far less than that of monthly data; there’s still some which lingers, and it makes it too likely to declare “significant!” when it’s not really, but the autocorrelation is so much less that at least your results will be “in the ballpark.”
You might think that using annual rather than monthly averages severely reduces the precision of regression. Counterintuitively, this is not so; the loss of precision is negligible. A (somewhat technical) illustration can be found in Foster & Brown, see section 4. If basic analysis indicates far greater precision with monthly rather than annual averages, it’s a sign that autocorrelation invalidates the apparent precision of analyzing monthly data without autocorrelation correction.
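The drop in autocorrelation from annual averaging is easy to demonstrate on synthetic data (again my own illustration, not GISTEMP; an AR(1) coefficient of 0.6 is assumed, to match the range the commenter reports):

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, rho = 60, 0.6

# Synthetic AR(1) "monthly" series, 60 years long
monthly = np.zeros(12 * n_years)
for t in range(1, len(monthly)):
    monthly[t] = rho * monthly[t - 1] + rng.normal(scale=0.1)

def lag1(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

# Average each calendar year down to a single value
annual = monthly.reshape(n_years, 12).mean(axis=1)

print(f"monthly lag-1 autocorrelation: {lag1(monthly):.2f}")
print(f"annual  lag-1 autocorrelation: {lag1(annual):.2f}")
```

The annual series keeps some residual autocorrelation, as noted above, but far less than the monthly series it came from.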
I don’t recall saying you wouldn’t be able to figure out how to apply an autocorrelation correction “overnight.” Perhaps I wasn’t clear enough, but what I meant was the much more general point that you couldn’t learn enough about trend analysis in general, in this situation, to do it right overnight. I’ll stand by that statement. If you can point to a direct quote where I referred specifically to autocorrelation correction when I said you needed more time to learn it right, then I’ll stand corrected.
I have fully analysed the GISTEMP data using the method that I developed. I found 9 slowdown trends which are significant at the 99% confidence level. They mostly start in 2001 and 2002. The longest lasts 14 or 15 years (sorry, can’t remember which – I don’t have the results with me). There were also 2 slowdown trends which are significant at the 95% confidence level, and 2 or 3 which were significant at the 90% confidence level.
Interestingly, there were 6 slowdown trends which were significant at the 90% confidence level, which started in 1997 and 1998 – the famous slowdown that warmists say is due to the 1998 El Niño. They are much less significant than the 2002 slowdown.
No, you didn’t find “9 slowdown trends which are significant at the 99% confidence level.” You only thought you did, because you hadn’t allowed for autocorrelation or for the multiple testing problem.
No, you didn’t find “6 slowdown trends which were significant at the 90% confidence level,” because you still hadn’t taken the multiple testing problem into account (or the “broken trend” issue, but we’ll leave that for another day).
If you haven’t already done so, you really need to read this. It shows that you can get apparently significant trends or trend changes (lots of them) in plain old random noise, when multiple testing is not allowed for. If you really believe that plain old random noise can show real trends, then you need to “check yourself.”
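You can check the point for yourself: scan plain random noise, with no trend in it at all, over many overlapping intervals, and a naive per-interval test will flag plenty of “significant” trends. A minimal sketch (synthetic data; the series length and minimum interval length are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(size=200)          # plain random noise, no trend at all
min_len, hits, tests = 30, 0, 0

# Test every interval of at least min_len points for a "significant" slope
for start in range(0, len(y) - min_len):
    for stop in range(start + min_len, len(y) + 1):
        seg = y[start:stop]
        t_axis = np.arange(len(seg), dtype=float)
        slope, intercept = np.polyfit(t_axis, seg, 1)
        # Standard error of the OLS slope, then a plain t-test at ~95%
        resid = seg - (slope * t_axis + intercept)
        se = np.sqrt(resid @ resid / (len(seg) - 2)
                     / ((t_axis - t_axis.mean()) ** 2).sum())
        tests += 1
        hits += int(abs(slope / se) > 1.96)

print(f"{hits} of {tests} intervals look 'significant' at 95%")
```

Because the intervals overlap, the hits cluster together and can look for all the world like a genuine trend change, even though every point is independent noise.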
The multiple testing problem isn’t something I made up, but even scientists have a hard time with it. It’s a very real phenomenon; in fact it’s one of the main reasons that Fyfe et al. believed they had confirmed a “slowdown” when in fact they hadn’t (Stephan Rahmstorf, Niamh Cahill, and I published research about that specifically). Jim Breyer (Duke Univ.) is on something of a “mission” to make it better understood, especially in medical research. I’m trying to make it better understood in climate science. Time will tell how well I succeed.
Watch out for my results on WattsUpWithThat. It will be a while, because I want to include the same analysis for UAH and RSS. I have not done those yet.
I offered warmists a compromise about slowdowns. You turned it down. You won’t be offered it a second time.
Somehow, that news fails to disquiet me.
It seemed to me that the “compromise” you offered was to acknowledge that a “slowdown” in global surface temperature didn’t necessarily mean a slowdown in global warming. We agree on that. What I have tried to make you understand is that you haven’t found reliable evidence of a slowdown in the warming rate of surface temperature. You think you have, but you haven’t.
Seriously: I can tell you work hard at this. But seriously: you’ve still got a lot to learn. There’s no reason you can’t learn it, but it takes time, and a healthy dose of humility will help.
This blog is made possible by readers like you; join others by donating at My Wee Dragon.