Truly, he doesn’t know. He hasn’t got a clue. As usual, he doesn’t even suspect that he doesn’t have a clue. When he finds out about it — which he probably will, because one of you will comment about it at WUWT and the comment will be deleted (they can’t allow pesky truth to appear at WUWT!) but he’ll still find out — he probably won’t believe it.
Bob Tisdale has another post at WUWT which is actually titled “Warming Rate in the US Slowed during the Recent Warming Period.” Apparently he believes this — because he knows not what he’s doing.
Here’s how he got there. Take the annual average temperature for the USA (contiguous 48 states) from 1895 to 2012 according to the National Climatic Data Center:
Now pick some really, really cherry time spans. For “early” pick 1917 to 1934 (18 years) and compute a linear-regression trend estimate. Result: +0.997 deg.F/decade. For “late” pick time spans ending with 2012 and compute linear-regression trend estimates: 1979-2012 (34 years) gives +0.537 deg.F/decade; 1993-2012 (20 years) gives +0.674 deg.F/decade. Since the “early” rate is higher than the “late” rates, declare victory and claim that the U.S. was warming faster back then than it is now.
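For readers who want to see the arithmetic behind a number like “+0.997 deg.F/decade,” here’s a minimal sketch. The data are synthetic (a made-up series with a known slope, not the NCDC record); the point is just how a linear-regression trend gets converted to a per-decade rate:

```python
import numpy as np

def trend_per_decade(years, temps):
    """Ordinary least-squares trend, converted from per-year to per-decade."""
    slope_per_year = np.polyfit(years, temps, 1)[0]
    return 10.0 * slope_per_year

# Illustrative only: a synthetic series warming at exactly 0.05 deg.F/year
years = np.arange(1917, 1935)              # 1917-1934 inclusive: 18 years
temps = 52.0 + 0.05 * (years - years[0])
print(trend_per_decade(years, temps))      # ~0.5 deg.F/decade
```

Feed it any start and end year you like and you get a trend estimate — which is exactly what makes the pick-your-own-span game so easy to play.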
How could he do that? It’s easy.
When you pick a start date because it gives the result you want, it’s called “cherry-picking.” We’ve already shown that when you allow yourself to cherry-pick the start year you can get some really big trend rates — just by accident — because you give yourself so many choices there’s bound to be one that gives you what you want — just by accident. But Tisdale has gone one better. He allowed himself to pick the start and end years to get what he wanted. When you allow yourself that many choices, you can get some really huge results. Just by accident.
Tisdale’s “early” time span covers 18 years, his “late” ones 20 and 34 years. Suppose I allow myself to pick any time span, of any length between 15 and 35 years, hoping to find a big upward trend — in random noise. I generated 100 artificial temperature data sets for 1895 through 2012 with no trend at all, and the same standard deviation as U.S. temperature (which is a lot bigger than global temperature). Then I found the highest “early” upward trend in any 15-to-35-year-long time span, which of course isn’t real because the data are random noise. I also noted the largest “late” upward trend in a 15-to-35-year-long time span that ended at the final year. What kind of trend rates can you get from random noise that way?
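The Monte Carlo experiment described above can be sketched in a few lines. This is not the code actually used for the analysis — the standard deviation of 0.9 deg.F for annual U.S. temperature is my assumption for illustration — but it reproduces the idea: trendless noise, then cherry-pick the steepest 15-to-35-year window:

```python
import numpy as np

rng = np.random.default_rng(42)
n_years = 2012 - 1895 + 1   # 118 annual values
sd = 0.9                    # assumed st.dev. (deg.F) of annual averages
n_sims = 100

def ols_slope(t, y):
    """Least-squares slope of y against t."""
    t = t - t.mean()
    return (t @ (y - y.mean())) / (t @ t)

def max_window_trend(x, end_only=False):
    """Largest trend (per decade) over any 15-to-35-year window.
    With end_only=True, only windows ending at the final year count."""
    best = -np.inf
    for length in range(15, 36):
        starts = [len(x) - length] if end_only else range(len(x) - length + 1)
        t = np.arange(length, dtype=float)
        for s in starts:
            best = max(best, 10.0 * ols_slope(t, x[s:s + length]))
    return best

early, late = [], []
for _ in range(n_sims):
    x = rng.normal(0.0, sd, n_years)          # no trend at all
    early.append(max_window_trend(x))          # cherry-pick start AND end
    late.append(max_window_trend(x, end_only=True))

print("median 'early' max trend:", np.median(early))
print("median 'late'  max trend:", np.median(late))
```

Every “early” search includes the “late” windows as a subset, so by construction the cherry-picked “early” trend can never come out smaller — more choices can only inflate the result.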
Note that the median “early” trend is greater than that observed by Tisdale. That’s how easy it is to get big trend estimates when you’re allowed to cherry-pick BOTH the start and end dates of your time span.
What’s a realistic comparison of earlier vs. recent trend rates? Here’s the rate estimated from the lowess smooth used in the first graph:
Note that the recent trend is bigger — and has lasted a whole lot longer. Here’s the rate estimated from each 30-year time span in the data, with 2-sigma error bars and the estimate from the lowess smooth superimposed:
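Computing the trend for every 30-year span, with its uncertainty, can be sketched as follows. The error bars here use the simple OLS standard error under a white-noise assumption (a proper analysis would also account for autocorrelation), and the input series is synthetic for illustration, not the NCDC data:

```python
import numpy as np

def rolling_trends(years, temps, window=30):
    """Trend (deg.F/decade) and 2-sigma uncertainty for every 30-year span."""
    centers, rates, err2 = [], [], []
    for s in range(len(years) - window + 1):
        t = years[s:s + window].astype(float)
        y = temps[s:s + window]
        coeffs, cov = np.polyfit(t, y, 1, cov=True)
        centers.append(t.mean())
        rates.append(10.0 * coeffs[0])
        err2.append(10.0 * 2.0 * np.sqrt(cov[0, 0]))  # 2-sigma half-width
    return np.array(centers), np.array(rates), np.array(err2)

# Illustrative only: a synthetic series with a constant 0.2 deg.F/decade trend
years = np.arange(1895, 2013)
temps = 50.0 + 0.02 * (years - 1895)
centers, rates, err2 = rolling_trends(years, temps)
print(len(rates), "30-year spans; first rate:", rates[0])
```

Because no single 30-year window is singled out, every span gets shown with its error bar — the opposite of hunting for the one window that tells the story you want.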
Note that the recent trend is bigger — and has lasted a whole lot longer.
Tisdale doesn’t even suspect that his claim is nothing but the inevitable result of allowing himself so much room to cherry-pick that he couldn’t help but get what he wanted. It ain’t real.
Try telling that to Bob Tisdale. Seriously — try.
This is what we’re up against. Fake skeptics who don’t know what they’re doing. They make sciencey-sounding arguments, give numbers, and give their claims outlandish false titles. The pity is that the vast majority of plain old everyday folks can’t see the fakery in this kind of fake claim.
Even Bob Tisdale can’t tell. Because he doesn’t know what he’s doing.