# Thirteen

NOTE: see the UPDATE at the end of the post.

Jeff Masters at Wunderblog (part of Weather Underground) reported that for the lower-48 states of the USA, every one of the last 13 months was in the top third of its historical distribution. He calculated the odds of that happening by random chance, in an unchanging climate, to be only 1/3 to the 13th power: a mere 1 chance in about 1.6 million. Pretty small odds.
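Masters' arithmetic itself is easy to verify. A two-line check (assuming nothing beyond independence and the 1/3 chance per month):

```python
# If each of 13 months independently has a 1/3 chance of landing in the
# top third of its distribution, the chance that all 13 do is (1/3)^13.
p = (1 / 3) ** 13
print(round(1 / p))  # 1594323, i.e. about 1 in 1.6 million
```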

However, the calculation isn’t correct because month-to-month temperatures for the USA48 are not independent; they show autocorrelation. If last month was hotter than average, this month is more likely to be as well. This increases the probability of a run of 13 top-third months in a row. Michael Tobis, among others, pointed this out.

Enter Lucia, who tries to get a better estimate by modelling the monthly temperature as an AR(1) (red noise) process and says the chance that the most recent 13 months will all be in the top third of their distribution is about 10% — way more than 1 in 1.6 million! But Lucia estimates the lag-1 autocorrelation not by using USA48 temperature, but by using global temperature. As a result she uses the rather ridiculous figure of 0.936 for the autocorrelation parameter.

Enter Anthony Watts, who states that he had “started on an essay to describe this meteostatistical failure last night, but Lucia beat me to it.” He pronounces that “Lucia reduces this probability estimate calculation to rubble” and quotes her estimate that the chance is about 10%.

Jeff Masters isn’t the only one to have miscalculated the odds; so did Lucia. At least she recognized this — but as of this writing, Watts still hasn’t mentioned Lucia’s update:

> Update Wow! I didn’t realize the US temperatures had such low serial auto-correlation! I obtained data for the lower 48 states here:
>
> Based on this, the lag 1 autocorrelation is R=.150, which is much lower than R=0.936. So ‘white noise’ isn’t such a bad model. I am getting a probability less than 1 in 100,000. I have to run the script longer to get the correct value!

She is now saying that the odds are around 1 in a million. Pretty small. Jeff Masters also noted that his calculation wasn’t correct, but that the odds of 13 in a row in the top third are still pretty darn small.

Last, and in fact least, enter Willis Eschenbach, contributing what must be the dumbest “analysis” of this situation yet. He suggests that the number of months out of 13 which fall in the top third will follow a Poisson distribution. He then counts how many 13-month periods show each number of top-third months and uses that to fit a Poisson distribution, apparently by least-squares regression. According to Eschenbach, using just the 116 June-to-June periods he estimates the Poisson parameter at 5.206; using all 1374 13-month periods he estimates 5.213. The first estimate gives a probability of 0.001817 that 13 out of 13 months are in the top third; the second gives 0.001836. Both estimates are much larger than Jeff Masters’ estimate and Lucia’s revised estimate. In either case, we would expect about 2.5 such occurrences in the period of record, but only one has been observed (the last 13 months).

First let’s point out that it’s a mistake for Eschenbach to estimate the Poisson parameter by fitting that distribution to observations using least squares. Least squares implicitly assumes normally distributed deviations, and deviations of observed counts from a Poisson distribution aren’t even close to normal. There’s a much easier and better way to estimate the Poisson parameter (the maximum-likelihood estimate is simply the mean of the observed counts), and Eschenbach’s method overestimates it by a sizeable amount.
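To see how big that overestimate is, here’s a quick sketch using only the numbers quoted above (not Eschenbach’s actual data); under the no-change null, the mean count is 13/3 ≈ 4.33 top-third months per 13-month window:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k events for a Poisson(lam) variable."""
    return exp(-lam) * lam ** k / factorial(k)

# The maximum-likelihood estimate of a Poisson parameter is the sample
# mean of the counts; under the "no change" null that mean is 13 * (1/3).
lam_mle = 13 / 3
lam_ls = 5.213                   # Eschenbach's least-squares fit (from the post)

print(poisson_pmf(13, lam_mle))  # ~0.0004
print(poisson_pmf(13, lam_ls))   # ~0.00183, matching his 0.001836
```

Even the smaller figure flatters the model: a Poisson variable is unbounded, whereas the count of top-third months can’t exceed 13. A Binomial(13, 1/3) model gives exactly (1/3)^13 for the 13-of-13 case.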

Second, let’s point out that if we accept Eschenbach’s hypothesis, then during the period of record we would expect to have seen about 1 instance of 14 out of 13 months with temperature in the top third of the distribution. What would you guess is the actual probability of that?

This much is clear: in an unchanging climate the probability of the observed event — the last 13 months all in the top third of their probability distribution — is larger than Jeff Masters’ original estimate, but still pretty small.

How small?

We can’t be sure because we don’t know for sure what process governs the noise in temperature data for USA48. But we can come up with a better estimate than 1 out of 1.6 million. We can come up with a better estimate than Lucia’s original 10%, in fact she has already done so. We can certainly come up with a better model than one which predicts we should already have seen a 13-month period with 14 months in the top third.

First let’s take note of a relevant fact. The only reason we’re talking about a 13-month run, rather than a 12-month or 14-month or some other length run, is that the run has been 13 months; the number 13 was chosen because that’s what was observed. If we really want to know how unusual this is (in an unchanging climate) I think it’s more realistic to put the question this way: given that 13 months ago was in the top third distribution-wise, what’s the probability that all of the 12 which followed would be also?

I’ll use data from the National Climate Data Center. USA48 temperature differs from global temperature in many respects. For one thing, the signal-to-noise ratio is smaller (i.e., the noise-to-signal ratio is bigger). For another thing, the autocorrelation is smaller — much smaller. In addition, the noise level is month-dependent. There’s much more inherent variation in temperature for winter months than for summer months.

We can compensate to some degree by transforming temperature, not to anomaly — the departure from average for the given month — but to normalized anomaly: departure from average for the given month scaled by the standard deviation for the given month.

The standardized anomalies follow the normal distribution, at least approximately (the Shapiro-Wilk test does not reject that hypothesis), so we can use it to estimate probabilities. But we must still account for autocorrelation. The lag-1 autocorrelation of the standardized anomalies is 0.2185, so we’ll use that figure in an AR(1) model for the noise. This isn’t correct, but it’s a reasonable approximation.
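The transformation and the autocorrelation estimate look like this in outline. The temperature array below is a random stand-in for the NCDC monthly series (with real data you’d load the observed values), so its r1 comes out near zero rather than the 0.2185 found in the actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for ~118 years of USA48 monthly mean temperatures,
# with month-dependent noise levels (a crude stand-in for noisier winters);
# rows are years, columns are calendar months Jan..Dec.
temps = rng.normal(size=(118, 12)) * np.linspace(3.0, 1.0, 12)

# Standardized anomaly: subtract each calendar month's mean and divide by
# that month's standard deviation (both computed down the columns).
z = ((temps - temps.mean(axis=0)) / temps.std(axis=0)).ravel()

# Lag-1 autocorrelation of the standardized, chronological series
r1 = np.corrcoef(z[:-1], z[1:])[0, 1]
```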

For a standardized anomaly to be in the top third of the normal distribution it must be at least as large as 0.4307. The chance of that happening is just 1/3! But — what if the previous month was also in the top third? For an AR(1) process, a given value is the autocorrelation parameter $\phi$ times the previous value, plus white noise

$z_t = \phi z_{t-1} + \varepsilon_t$.

Furthermore, for the variance of the red noise $z_t$ to be equal to $\sigma^2$ (which equals 1 for our standardized anomalies), the variance of the white noise must be

$\sigma^2_\varepsilon = \sigma^2 (1-\phi^2)$.
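That relation follows from stationarity: taking the variance of both sides of the AR(1) equation (the white noise being independent of $z_{t-1}$) gives

$\sigma^2 = \mathrm{Var}(z_t) = \phi^2 \, \mathrm{Var}(z_{t-1}) + \sigma^2_\varepsilon = \phi^2 \sigma^2 + \sigma^2_\varepsilon$,

and solving for $\sigma^2_\varepsilon$ yields the expression above.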

The observed standardized anomaly for June 2011 was 0.9724. Let’s treat each of the following 12 months as a normal variable with standard deviation $\sqrt{1-\phi^2}$, and with mean given by its expected value given that June 2011 was 0.9724. That expected value will be 0.9724 times $\phi$ raised to the number of months after June 2011.

Then we can compute the probability of being in the top third for each of the following 12 months, and approximate the probability that they will all be so by the product. The result: about 1 out of 458,000. That’s pretty small.
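For the curious, the whole calculation fits in a few lines; everything here comes from the figures in this post, with the top-third cutoff recomputed from the normal quantile function:

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()                 # standard normal
phi = 0.2185                     # lag-1 autocorrelation (from this post)
z_june = 0.9724                  # standardized anomaly, June 2011
cutoff = N.inv_cdf(2 / 3)        # top-third threshold, ~0.4307
sd = sqrt(1 - phi ** 2)          # white-noise standard deviation

# Treat each later month as normal with mean z_june * phi^k and the
# white-noise standard deviation; multiply the 12 tail probabilities.
p = 1.0
for k in range(1, 13):
    p *= 1 - N.cdf((cutoff - z_june * phi ** k) / sd)

print(round(1 / p))  # about 458,000
```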

The calculation is far from perfect. Mainly, the individual monthly results aren’t independent. A better approach would be to model the following 12 months using a multivariate normal distribution. But then we have to integrate that distribution over a nontrivial region of 12-dimensional space! Maybe there’s some clever way to do so — but I’m not sufficiently motivated to spend effort on that.

Another good approach is, of course, Monte Carlo simulation (Lucia’s approach). Her latest (that I’ve seen) indicates a probability somewhere in the 1-in-a-million range.
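A minimal sketch of such a simulation, using this post’s parameters ($\phi$ = 0.2185, conditioning on the observed June 2011 value) rather than Lucia’s setup, properly accounts for the month-to-month dependence that the product-of-probabilities shortcut ignores:

```python
import numpy as np

rng = np.random.default_rng(42)
phi, z_june = 0.2185, 0.9724     # figures from this post
cutoff = 0.4307                  # top-third threshold for a standard normal
sd_eps = np.sqrt(1 - phi ** 2)   # white-noise standard deviation

# Simulate many AR(1) continuations of the observed June 2011 anomaly and
# count how often all 12 subsequent months land in the top third.
n_sims = 5_000_000
z = np.full(n_sims, z_june)
all_top = np.ones(n_sims, dtype=bool)
for _ in range(12):
    z = phi * z + rng.normal(scale=sd_eps, size=n_sims)
    all_top &= (z > cutoff)

p = all_top.mean()
print(f"{p:.2e}")  # still a very small probability
```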

This much is clear: the odds of what we’ve seen having happened in an unchanging climate are pretty small. Jeff Masters’ original estimate wasn’t right, but it does appear to be within an order of magnitude. Willis Eschenbach’s is not, quite apart from the fact that it doesn’t make sense.

P.S. If you really want a laugh, check out this comment on Eschenbach’s post.

UPDATE

Lucia’s latest estimate is that the probability (given no climate change) is 1 in 134381. That’s 0.0000074415.

I think the Monte Carlo estimation is a better method than my own calculation. The actual probability remains uncertain, but is surely a heckuva lot less than 10%, or Willis Eschenbach’s Poisson-distribution nonsense.

### 100 responses to “Thirteen”

1. nickleaton

And then you check back historically, to see how past years perform. In what percentage of years would you expect similar.

Remember, the more records you have, the more that will be broken.

2. There is spatial correlation too.

[Response: We’re only looking at one region.]

3. Marco

That comment from Brown is indeed hilarious. It certainly puts Nielsen-Gammon’s little series into its proper context. Brown complains about being called a denier, insisting he’s just a skeptic, and then goes and makes a mockery of himself again…

4. You don’t note what is perhaps the most elementary error. Willis Eschenbach included the recent run of 13 months out of 13 in the data that he used to estimate the Poisson parameter and therefore the likelihood of getting 13 months out of 13. Almost inevitable therefore that he should find it is not so unlikely to get 13 months out of 13. You can’t test a statistical model on the same data you use to estimate the model.

@cwhope

5. Horatio Algeranon

Masters might not be entirely correct, but at least he’s not a bater.

Unbelievable as it may seem, some actually prefer simple ballpark estimates (“roughly 1 in a million”) — coupled with the words “very unlikely to happen by chance” — to endless mathturbation which (quite unlike its namesake) never seems to settle down, but instead simply oscillates in and out (between 1/10 and 1/1,000,000?)

• Horatio,
Simple, ballpark estimates can be wildly misleading if the underlying assumptions are incorrect.

• Horatio Algeranon

As can “complex” estimates (eg, Lucia’s first estimate)

In this case, the (even approximate) “correctness” of both the simple and the complex estimates actually relies on the very same assumption: that the correlation from month to month is low.

If that’s true, the approximation that the months are “independent” (ie, the assumption underlying Masters’ method) will give a pretty good indication of the overall probability.

A factor of 3, 10, or even 100 difference in the answer makes essentially no difference to the overall conclusion that “In an unchanging climate, it’s very unlikely that what happened would have occurred due to chance”.

The issue in this case amounts to more than simply getting the best (most accurate) answer.

It’s one of illustrating the concept in simple terms that most people (even people who know little math) can understand.

In Horatio’s opinion, the never-ending discussions about temperature trends obtained by linear regression, significance levels, etc suffer from the same problem: they open up cans of worms that can only be “contained” with more sophisticated math…

… when relatively simple comparison of the change in the average over time* is probably the most effective (most convincing to the non-mathematical public, anyway) way of illustrating the fact that the temperature is rising (*as Tamino has shown on several occasions to great effect)

• Horatio Algeranon

Five Years is a perfect illustration of “change in averages over time” (referred to above)

From the standpoint of communicating the concepts simply and clearly, you can’t get any better than that.

6. Who is Willis Eschenbach when he’s at home? A quick Google turns up a man trained in massage. (At one of the country’s leading massage parlors. The Aames School of Massage in Oakland, California.) Is that the man who is shaking climate science to its foundations?

[Response: Don’t know, don’t care. Let his arguments stand or fall on their merits.]

• For those of a curious bent – Qualifications in Massage & a BA in Psychology & employed by South Pacific Oil according to
http://www.desmogblog.com/willis-eschenbach#s1

His first outing as a mock-sceptic is discussed here
http://www.realclimate.org/index.php/archives/2005/01/peer-review-a-necessary-but-not-sufficient-condition/comment-page-1/#comment-910
work that dated from 2002 according to
http://www.economist.com/blogs/democracyinamerica/2009/12/trust_scientists?page=4

• It would be a little ironic, though, as it would imply two distinct kinds of massage…

• joeldshore

Since I am one who has interacted with Willis a lot on various blogs (Anthony’s and elsewhere) and we have at least a quite cordial interaction, I thought I would add a bit here: I think Willis sees himself as a self-trained scientist / Renaissance man. He’s done a little bit of everything in life…often calling himself a “reformed cowboy” or something like that.

He is certainly very limited in his formal scientific and mathematical training but is clearly also an intelligent fellow and capable of being dragged to the truth on certain aspects. He in fact invests quite a bit of his “capital” over at WUWT in trying to convince others of their mistakes in claiming that humans are not responsible for the rise of CO2, that the greenhouse effect is not real, or that Nikolov and Zeller’s paper ( http://wattsupwiththat.com/2011/12/29/unified-theory-of-climate/ ) makes any sense whatsoever. This is partly why the two of us have earned a somewhat grudging respect for one another since I have wasted a lot of time there on the same side of these battles with him.

However, he quite stubbornly believes what he does believe, which is basically that climate sensitivity is very low (or more precisely, perhaps, that the climate system has a “thermostat” that makes the sensitivity essentially zero) using many poorly-reasoned arguments that I and whoever else has tried have had almost no success in disabusing him of. [Interestingly, he has occasionally displayed similar stubbornness in the other direction, e.g., it took a fair bit of work to disabuse him of the notion that Nikolov and Zeller had made a mathematical error in their calculation of something that they had actually calculated correctly given their bizarre, implausible (and mainly unstated) assumptions.]

And, whenever some new paper comes out that is inconvenient to the basic skeptic cause, he is quick to pounce on it with some analysis which is often clever in certain ways…but leads him very quickly and carelessly to the desired conclusion that the paper can be promptly dismissed. This is, of course, a quality that endears him to Anthony Watts.

My impression is that Anthony holds Willis in very high regard and that indeed Anthony put lots of faith in his opinions, probably the main reason why Anthony subsequently decided to drop further Nikolov and Zeller postings and leave such complete and utter garbage to Tallbloke’s blog. (Actually, people at Tallbloke’s have spoken derisively about Willis’s ability to influence Anthony in this way.)

• Joel, Could you perhaps persuade him that he is in error by pointing out that the Poisson distribution applies to independent, identically distributed events (IID), and therefore cannot be applied in a situation where there is significant autocorrelation? While that is only one of many errors in Willis’s presentation, it is the most obvious and indisputable.

• Dano

I have interacted with Willis many times in the past. He is an arrogant, blowhard engineer who is the living stereotype of the arrogant engineer whose training is best and whose answers are always correct.

I don’t interact with denialists on websites too much any more, so he and the others may have changed, but IME what I described is a personality trait.

Best,

D

7. BCC

More fundamentally: Poisson distributions come from Poisson processes. Poisson processes arise, by definition, from independent events. But, as is noted here, the data are not independent. So, you can’t use a Poisson process with any certainty (and, as a few at WUWT have pointed out, if the data was independent, we can calculate lambda from the problem definition).
So Willis has also fallen into the same autocorrelation trap that Masters did, has he not?

8. Tom W.

I think that Eschenbach and Brown using their formidable logic have just proven global warming. Per Brown’s comment on Eschenbach’s analysis…

“That is, the most likely number of months in a year to be in the top 1/3 is between 1/3 and 1/2 of them!”

9. Patrice

A small question about autocorrelation, if I may: might there be a globally autocorrelated noise due to the “slow response of the ocean due to enormous heat capacity”, so that land stations should show less autocorrelation compared to stations at the coasts or ocean data? Is there any good study of this already available?

[Response: First impression: I would count such an effect as being part of the signal — due to the delayed response of the oceans to external forcing — rather than part of the “noise.” But it’s a good question which deserves more careful thought.]

10. Rod Everson

Tempest in a Teapot Time: Clearly the data being used in the analysis isn’t independent because, as even most skeptics would agree, temperatures have risen over the past century (though not as much as asserted.) Skeptics even acknowledge that mankind has had an influence on those temperatures. Therefore, you’d expect to routinely encounter events that you would otherwise hardly ever encounter were randomness actually present.

Now, the monthly change in temps might indeed be random, or close to it, but whether a datapoint is in the top third of the entire data record can hardly be random when a trend is present in the data. The DJIA, for example, at its top, put out month after month after month of datapoints in the top third of the life of the average, which is all that is going on here, though involving a much lesser trend in the data.

If one had published, based on a similar analysis in, say, 1998, that the DJIA could only be expected to generate similar results once in the next 100,000 or so years, and they repeated the result month after month regardless, would anyone have spent more than a minute considering the paper before tossing it in the ashcan? Especially since 14, 15, and 16 month events were being added as well, and as, I suspect, might also happen in this case. July has been a pretty hot month, nationwide, after all. Are we about to hit 14 in a row, something that won’t happen again more than once in every 400,000 years or so?

Nothing is being added by continuing the dissection of this issue. Tempest in a teapot, truly.

[Response: I quite agree that we can reject the “no climate change” hypothesis (on which the whole issue is based) for many reasons, so this doesn’t tell us much (if anything) about global warming.

I don’t agree that “Skeptics even acknowledge that mankind has had an influence on those temperatures.” They do when they’re forced to — but turn around and do the opposite at every opportunity. Anthony Watts, for instance, claimed he has never denied that the globe is warming, but he regularly does posts designed to dispute exactly that, and a reader actually found a quote from him (in his “paper” with Joe D’Aleo for SPPI) which does so explicitly

And if we all acknowledge that in a no-warming world what’s happened in the USA over the last 13 months is exceedingly unlikely, why is Watts trying so hard (two posts now) to suggest that it’s not so unlikely?

If you can get the fake skeptics truly to abandon suggesting that the globe hasn’t warmed, then …]

11. Rod Everson

I was speaking in terms of the settled science that increasing the volume of CO2 in an atmosphere will have a greenhouse effect. I believe you will find that most skeptics will acknowledge that fact.

What they also allow for, however, is the possibility (probability, actually) that the effects of the acknowledged increase in CO2 in the atmosphere might be being offset by other effects, effects not generally included in the models. I doubt that you have a quote from Watts indicating that CO2 is not a greenhouse gas, or that greenhouse gases will not warm the atmosphere, though I could be wrong.

Furthermore, skeptics generally, and I believe legitimately, argue that the past temperature records have been sufficiently massaged so that the extent of any warming, presumed to be caused by CO2, could easily be overemphasized, or else, that the existence of offsetting effects is harder to detect as a result. Now couple that with the fact that most of the models dramatically overestimated the warming over the past 15 years and the skeptics’ position is not a hard one to hold.

[Response: Begin with the fact that most of the models did not dramatically overestimate the warming of the past 15 years, and you can start to tell the fake skeptics from the real ones.

I’ll leave it to others to point out your adept combination of zombie talking points and goal-post moving.]

• Rod is passing out skeptic chum today.

• dhogaza

What they also allow for, however, is the possibility (probability, actually) that the effects of the acknowledged increase in CO2 in the atmosphere might be being offset by other effects, effects not generally included in the models.

So give us a list, Rod, of:

1. feedbacks known to mainstream climate scientists
2. feedbacks understood by deniers that are not known to climate science.

and tell us why it is “probable” that the sum of feedbacks would be negative.

• P. Lewis

feedbacks understood by deniers that are not known to climate science

Here’s a couple!

• RE: What they also allow for, however, is the possibility (probability, actually) that the effects of the acknowledged increase in CO2 in the atmosphere might be being offset by other effects, effects not generally included in the models.

BPL: Have you ever heard of “analysis of variance” or ANOVA? There could be a million things that affect climate, but they don’t all affect it EQUALLY. For example, carbon dioxide accounts for 76% of the variance of temperature from 1880 to 2010. That means all other causes, *known and unknown,* can only account for 24% at most, at least in that period.

• chrisd3

76%? That’s very interesting. Can you point to a writeup of that somewhere? Would like to understand how that’s derived.

12. fredb

Willis Eschenbach is digging a grave for himself and doubling down on his analysis … a string of lengthy replies from him in the comments list over at WUWT. Seems like he still maintains what he did is sound, and where he does acknowledge problems with what he’s done, he terms them “trivial”.

But quite entertaining, nonetheless. And instructive for use in class.

13. Repro Munchkin

“The observed standardized anomaly from June 2010 was 0.9724”.
Should this be “June 2011”?

[Response: Yes it should. Fixed.]

14. Robert Murphy

Another great comment on that thread WUWT:
“That the world is warmer this decade than last is not surprising as we are still probably recovering from the Little Ice Age and will be until we are not. No controversy therefore and no surprise that “extreme events” are happening.”
http://wattsupwiththat.com/2012/07/10/hell-and-high-histogramming-an-interesting-heat-wave-puzzle/#comment-1030123
We will be until we are not. Now that’s some deep analysis for ya. BTW, how long before the guy gets called out for admitting that this last decade was warmer than the one before? That’s not in keeping with The Narrative! :)

• Bernard J.

“That the world is warmer this decade than last is not surprising as we are still probably recovering from the Little Ice Age and will be until we are not. No controversy therefore and no surprise that “extreme events” are happening.”

Except that they hooted loudly for years about how it’s been cooling since 1998…

Perhaps we’re “cooling since 1998 until we are not”, and after that it will be the Little Ice Age “recovering” again. All without any acknowledgement of the ‘greenhouse’ effect, of human carbon dioxide emissions, and of the relative contribution of all known positive and negative forcings.

It must be wonderful to be simultaneously so clever and a non-expert. I wonder why they don’t all put up shingles and make millions doing brain surgery…

• Gavin's Pussycat

With further “recovery” expect the little ice age to become bigger than the big one

• Funny, I just remarked yesterday on RC how incompatible the “warmer temps in Roman times” meme is with the “we’re just recovering from an Ice Age, of course it’s warming” meme.

AFAIK, the former is more nearly correct than the latter: pending correction or update, my understanding is that the height of the post-glacial warming was about 8 millennia back.

So I suppose “we will be until we are not” is incorrect primarily in the verb tenses involved.

15. Ken Fabian

Has the level of acceptance of the climate problem by Americans been changed by these extreme weather events? Will the response be greater acceptance of mitigation policies or will it fuel demand for adaptation such as tougher more resilient electricity supply that will keep air conditioners working – at least cost and regardless of emissions?

I used to think it would take some in-your-face climate consequences to spur people to do the right thing, but my cynicism leads me to suspect they will spur people to keep doing the wrong thing, only try to do it harder.

• thrig

I recall some recent surveys showing more acceptance of climate change, mostly from things like “golly, this winter was unseasonably warm” or “gee, these trees keep blooming earlier and earlier” type realizations, which in turn might tweak the Overton Window towards acceptance of mitigations. Generational gaps might also be interesting to look at.

On the other hand, alcoholics may never cleave from the bottle, and living a low carbon life in America can require various limitations and complications the built environment disfavors (e.g. not owning a car).

16. Tamino says: why is Watts trying so hard (two posts now) to suggest that it’s not so unlikely?

Because the denial campaign recognises that extreme weather events are their weakness. They have to try very hard to make it seem that there’s nothing unusual going on, because extreme weather is very effective at shifting views on the reality of climate change. Hence the stream of “talking points” and shonky analysis.

They’ve been forced on to the back foot (as they say in cricket). Let’s keep ’em there…

• joeldshore

Yeah…Watts seems to have lots of very shoddy posts basically following the same line of illogic, which is to show that a particular extreme event was due to an extreme weather pattern and therefore must have nothing to do with global warming. [Because apparently, under global warming, we are expecting to have the most extreme heat wave of the century for some region occur when the weather pattern is completely average (or maybe even favors cooler than average temperatures).]

• Bernard J.

…the denial campaign recognises that extreme weather events are their weakness.

Another way of phrasing it would be to note that the denial campaign recognises that evidence is their weakness…

• trrll

In the past, much of the evidence has been technical and statistical beyond the level of knowledge of the general public, so it is relatively easy to confuse them. But most people have decades of experience observing their local weather, and once they begin to notice that “the weather isn’t what it used to be,” it becomes much harder to obscure the fact that climate is changing.

• Gavin's Pussycat

…but it is refreshing to see them finally highlight the importance of natural variability

• chrisd3

They have to try very hard to make it seem that there’s nothing unusual going on

This is practically Goddard’s whole schtick. Vast numbers of posts about how it was hot in East Mudflap, IA, on June 17, 1927.

• Natural variability?

Heck, in their own minds, they invented *everything* about climate science–UHI, paleo record, CO2 saturation, AMO, you name it.

17. Tamino, Lucia did not say the odds are around 1 in a million. Nor did she say she was getting results in the 1 in a million range. Rather, she said:

“The probability will be lower than 1 in 1,594,232— but not[e] it will be closer to 1 in 10^6 than 1 in 10!”
http://rankexploits.com/musings/2012/one-in-1594323-chance-heat-wave-not/#comment-99218

She later updated this to indicate she was getting a result between 1 in 10 thousand and 1 in 100 thousand. Her final result, so far as I can determine was that:

“if R=0.150, the probability of 13 months all in the top 1/3rd and no climate change is roughly
1 in 134381.”
http://rankexploits.com/musings/2012/one-in-1594323-chance-heat-wave-not/#comment-99235

That turns out to be much closer to 1 in 10 thousand than 1 in a million. It is more than 10 times Masters’ initial estimate, and more than 3 times yours. Given the significant discrepancy between her claims and results and your representation of those claims, an update of your post is probably in order.

[Response: I don’t see how 0.00000744153 (1 in 134381) is closer to 0.0001 (1 in 10000) than to 0.000001 (1 in a million).

I prefer her Monte Carlo method to my own calculation. But she’s certainly gone through enough revisions since the original “1 in 10.” If “1 in 134381” is her latest estimate, great.

The point is that under the assumption of no climate change, the recent data are extremely unlikely — the chance is nowhere near Lucia’s original 10% or Eschenbach’s ludicrous Poisson-distribution nonsense.]

18. n-g

It’s all a matter of which question is being asked, and few have done a good job of sufficiently explicit framing. Here are five possibilities for United States temperatures, and I’ll ignore the detail of whether or not the starting month is constrained to be June:

Q: What are the odds of thirteen consecutive months being in their top 1/3 if the temperatures for each month were rearranged randomly over the instrument record?
A: 1:1,594,323. (NCDC)

Q: What are the odds of thirteen consecutive months being in their top 1/3 in a climate with the observed autoregressive characteristics but no underlying trend?
A: 1:458,000. (Tamino)
A: 1:2,000 to 1:166,667, depending on the assumed characteristics. (Lucia)

Q: What are the odds that a randomly selected year during the instrumental record includes the start of a period with thirteen consecutive months in their top 1/3?
A: Precisely 1:117. (Me.)

Q: What are the odds of the past thirteen consecutive months being in their top 1/3?
A: Precisely 1:1 (Me)

I can’t figure out how to frame an intelligent question for which Willis has computed the answer.

• Nigel Harris

As far as I can see, Willis Eschenbach has framed and answered the question:

Q: What is the probability that a dataset of 116 years of monthly records, with a near-identical distribution to the instrumental record (which actually contains exactly one period of thirteen consecutive months in their top 1/3), contains a period of thirteen consecutive months in their top 1/3?

A: Not surprisingly, he finds such a dataset is likely to contain a single period of 13 consecutive months in their top 1/3.

As several commenters have more or less kindly pointed out to him, this is a tautology. But he still seems to think his analysis is meaningful!

19. Dave123

I’d like to ask a different question than n-g. If we were to assume a random process with some autocorrelation we get something like Lucia’s result. Now if I imagine a forcing imposed on top of such a random process, I would expect that as time goes on, the probability of n-consecutive intervals where the temperature of each interval in the cluster of n is in the top X% of will approach unity.

So, in the historical record, while we are charmed by a lucky 13 consecutive months of top 1/3 temperatures… we should also expect to see a) as we move towards the present, the frequency of top 1/3 months within 13-month periods rising from the level set by autocorrelation towards unity, and b) the length of runs of consecutive top 1/3 months increasing (i.e. increased autocorrelation). (Not having taken statistics at this level, I’m fumbling with words here, I know.)

Question: Could either/both of these kinds of data be used to extract an estimate of the amount of forcing going on? Could you add it on to Lucia’s Monte Carlo methods combined with some sort of simplex optimization routine where you make some guesses at forcings, run the simulations, determine a fit to reality by some sort of objective function and then adjust the guess on forcing…or is there a way to calculate this exactly? (Given only 1 forcing I know simplex is a bit of overkill, but for me it’s hammer/nail question).

[Response: I doubt you could estimate the *forcing*, but you might devise a scheme to estimate the warming rate. But there are better ways to do that. In any case, the “null hypothesis” (unchanging climate) is rather effectively rejected.]
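Dave123's intuition, that an imposed trend drives the run-of-13 probability towards unity, is easy to illustrate with a variant of the same kind of red-noise simulation (an illustrative sketch of my own; the slope values are arbitrary, expressed in noise standard deviations per month):

```python
import numpy as np

rng = np.random.default_rng(1)

def p_recent13_top_third(slope, phi=0.15, n_series=2000, n_months=1404):
    """Probability that the most recent 13 months are all in the top third,
    for AR(1) noise plus a linear trend of `slope` std devs per month."""
    e = rng.standard_normal((n_series, n_months))
    x = np.empty_like(e)
    x[:, 0] = e[:, 0]
    for t in range(1, n_months):
        x[:, t] = phi * x[:, t - 1] + e[:, t]
    x += slope * np.arange(n_months)      # impose the warming trend
    thresh = np.quantile(x, 2 / 3, axis=1, keepdims=True)
    return (x[:, -13:] > thresh).all(axis=1).mean()

for slope in (0.0, 0.002, 0.005):
    print(slope, p_recent13_top_third(slope))
# The steeper the trend, the closer the probability climbs towards 1.
```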

• Dave,
I’ve looked at some similar problems in the past, and the question you come up against is: what probability do you want to assign to the outcome you observe, and then, how do you do so in a way that takes into account the bias introduced by the fact that we live in one possible realization of all possible outcomes?

Let’s say you run a Monte Carlo and introduce a linearly rising temperature. You want there to be at least a 5% chance of observing our outcome and vary the slope until you get there. Now you have a slope, but how do you interpret your result? How do you justify 5%? What is the error on your result? And so on.

20. KR

Eschenbach has essentially fitted a curve to the observed data, and then made an estimate of the observations from that curve. The fact that his estimate is 2.6 (fairly close to 1:1) is thus unsurprising – it’s a tautology.

Absent, of course, any test for normal distributions (scaled or otherwise), or any other probabilistic checks. It’s observations vs. observations, rather than statistics.

[Response: You’re quite right. He simply counted how many occurrences there are and then used that to estimate the probability, quite ignoring the issue, which is: how likely was this in an unchanging climate? He also botched the estimate — the Poisson-distribution model is absurd.

Anomalies are not normally distributed because the variance is different at different times of year. But standardized anomalies are normally distributed (at least as far as the Shapiro-Wilk test is concerned).]

• Yes, this is the point I was making yesterday at 3:09 pm. Your statement of it is stronger and clearer than mine. Has it been recognised and accepted at WUWT yet?

@cwhope

• KR

Has it been recognised and accepted at WUWT yet?

Some of the posters on that thread have realized it – Eschenbach appears to be clinging to his statements, and I don’t really expect him to admit any errors.

21. Tamino, sorry, my mistake re the comparison. And thank you for the update.

22. cynicus

Please forgive me if this is a silly question, but “1 in 100,000”, what is meant by that? One in 100,000 months? One in 100,000 years? One in 100,000 13-month strings? Thanks!

• In 100,000 realizations of our “climate experiment” with no warming, only one would have 13 consecutive months all in the top third of similar months (e.g. top third of Junes, top third of Julys, …)

• Stu N

It’s the probability that any particular 13-month period will have all 13 months in the top 1/3 of the distribution.

• Why would auto-correlation matter? You’re not comparing August to March. You’re just comparing August to other Augusts, May to other Mays. Yes, warm months auto-correlate with warm months, but that’s also true of cold months and temperate months, etc.

• Jeff, not really – you are looking at a series: June, July, … June. So, if the first June is hot, the first July is likely to be as well. Autocorrelation is very important not just month to month, but over several months. It really depends on the memory in the system.
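For readers wondering where numbers like R = 0.150 come from: the lag-1 autocorrelation of an anomaly series is just the correlation of the series with itself shifted one month. A sketch (using synthetic data here; the real calculation would use the NCDC monthly anomalies):

```python
import numpy as np

def lag1_autocorrelation(x):
    """Sample lag-1 autocorrelation of a 1-D anomaly series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# Sanity check on synthetic red noise with a known parameter of 0.5:
rng = np.random.default_rng(0)
e = rng.standard_normal(100_000)
x = np.empty_like(e)
x[0] = e[0]
for t in range(1, len(e)):
    x[t] = 0.5 * x[t - 1] + e[t]
print(lag1_autocorrelation(x))   # lands close to 0.5
```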

• cynicus

Thanks all, it is a bit clearer now, I hope.

Next question: what is the chance if we include global warming in this picture? How much more probable has this become?

23. Horatio Algeranon

Don’t fall for Watts
It’s a doozy, it’s one in a million temps, it’s a doozy.
Why would I lie, why would I lie?
You can say anything you like, but you can’t touch the Monte-Carlo dice.

24. BBP

Does the fact that this is a post hoc probability change how we should look at it? Assuming a random distribution with uncorrelated noise, the odds of any particular assignment of thirds (top, middle, bottom) to a 13-month period would be 1:1.6 million. We are deciding after the fact that this particular assignment is interesting – I vaguely recall that this should be taken into account.

25. KR

Masters originally stated:

“Each of the 13 months from June 2011 through June 2012 ranked among the warmest third of their historical distribution for the first time in the 1895 – present record. According to NCDC, the odds of this occurring randomly during any particular month are 1 in 1,594,323. Thus, we should only see one more 13-month period so warm between now and 124,652 AD–assuming the climate is staying the same as it did during the past 118 years.”

The claim Eschenbach is now pushing is:

“Masters was NOT setting out to prove the climate was warming, that’s totally contradicted by his own words. He was claiming that in the current, warming climate, the odds were greatly against 13 being in the warmest third. They are not, it’s about a 50/50 bet.”

Absurd, but that’s where Eschenbach is hanging his hat.

• pjie2

Quite – apparently “staying the same as it did during the past 118 years” now means “in a warming climate”, since everybody knows the climate has warmed during that period…

26. From a strictly statistical standpoint:

How confident can one be that attempting to account for autocorrelation in this case will actually give a “better” (more accurate) estimate than ignoring it (as the NCDC estimate quoted by Masters does)?

What does the Durbin-Watson (or perhaps another) test say about the “significance” of autocorrelation in this case?

[Response: We can say with supreme confidence that positive autocorrelation increases the odds of a “run of 13,” so that the original estimated probability was too low. We can also say with confidence that if the “null hypothesis” (unchanging climate) is true then the autocorrelation is positive. In fact we can confidently assert it even if we remove a nonlinear long-term trend.]
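On the Durbin-Watson statistic asked about here: it is simple to compute from residuals, and for lag-1 autocorrelation r it is roughly 2(1 - r), so even the modest USA48 value of 0.15 gives a value detectably below 2 in a long series. A sketch (synthetic data, my own illustration):

```python
import numpy as np

def durbin_watson(resid):
    """Durbin-Watson statistic: near 2 for no autocorrelation, below 2 for
    positive autocorrelation; roughly DW = 2*(1 - r) for lag-1 autocorrelation r."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Red noise with lag-1 autocorrelation 0.15 should give DW near 2*(1 - 0.15) = 1.7
rng = np.random.default_rng(3)
e = rng.standard_normal(50_000)
x = np.empty_like(e)
x[0] = e[0]
for t in range(1, len(e)):
    x[t] = 0.15 * x[t - 1] + e[t]
print(durbin_watson(x))
```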

27. Open question: where did I go wrong?
http://rhinohide.wordpress.com/2012/07/12/eschenbach-poisson-pill/

[Response: Seems like a reasonable approach. Nobody has an exact answer yet, and different estimates vary by a lot, but the one common thread to all the realistic estimates is that the probability is damn small.]

28. Martin

Why is the month-to-month autocorrelation so large for the global temperature and so small for US48?

29. TLM

Inevitably, because the internet is a largely US based phenomenon (particularly the Blogosphere), there is huge concentration on one unusual event in a relatively small part of one continent. Climate records are broken all the time and everywhere simply because we only have about 60 years of reliable data. At the moment in the UK we are suffering record levels of rainfall and low temperatures this summer. As far as I know nobody is seriously arguing that this is due to “global cooling” despite the predictable jokes in the pub.

The big argument is not about “New England” warming or “Old England” cooling it is about “Global” warming. Currently it looks like June and July are going to be pretty unremarkable in context looking at the whole globe.

I have been reading the scientific literature since the last IPCC report as well as following many blogs both pro and anti and I just hold my hands up in despair.

I particularly dislike the tone of the comments on certain blogs where slagging of the other commentators seems to be more important than finding the truth, and I am afraid WUWT and this one are two of the worst. Both sides suffer from very bad cases of cognitive bias, always dismissing out of hand any argument that goes against their “belief system”.

So what do we actually know?
1. The globe has warmed by about 0.5°C since the first decade of the 20th century, slightly more going back to 1850 if you extrapolate the very sparse data we had then over the whole globe.
2. The ice cap is melting and summer snow in the NH is disappearing.
3. CO2 is a greenhouse gas that helps keep the atmosphere warm and the amount of CO2 in the atmosphere is rising due to man burning fossil fuels.

Some scientists have very reasonably linked points 1 and 2 to point 3 and are making great efforts to prove the linkage. Others have postulated very high sensitivity in this linkage which would lead to a global catastrophe. Actually, what few people understand is that the high sensitivity is not to CO2, it is to warming. Warming causes more warming, and cooling causes more cooling, whatever the forcing (CO2, the sun, volcanoes).

However, as far as I am aware, very few real scientists in the field are making the same great effort to disprove the linkage or the high sensitivity, and that is very, very bad for the science.

At the moment the very vital sceptical process is being left to amateurs and deluded deniers on blogs.

When quantum mechanics was first proposed everybody went out of their way to try and prove it was wrong. And many scientists were slightly disappointed when the Higgs boson was found, as disproving the Standard Model would have been much more fun than proving it. So much so that they are demanding a 5-sigma proof! Climate scientists can only dream of such certainty.

From where I sit there seems to be a gradually building body of evidence that the global troposphere temperature might not be quite as sensitive to warming as we first thought – the main item of evidence being that since 2000 the global temperature has not been following the track predicted by the early models despite a much faster rise in CO2 levels than predicted.

And yes, I have read your paper that it is all down to La Nina and volcanoes, and it is a good addition to the argument, but it is by no means proof – certainly not to 5 sigma!

But where are the peer reviewed papers arguing the contrary position?
Why are all the scientists so afraid to test these theories really hard?

If global warming reverses, stops or even slows down then I am very worried this will discredit the scientific process irrevocably. Do you really want Watts, Spencer and Lindzen to be the only ones to have been on the correct side of the argument?

Roy Spencer and Lindzen are minnows in the field and tainted by political and religious bias. Watts is an irrelevance whose argument is ruined by the crackpots and amateurs he allows to post articles on his site. We need real scientists doing real, challenging and absolutely vital, sceptical science.
Scientists need to stop being afraid of being wrong!

[Response: Is it possible … that I’ve been visited by a real skeptic? It remains to be seen — but I’ll keep an open mind.

Despite the absence of the usual ludicrous denier zombie talking points, your argument contains mis-statements and flawed logic. Since the response it deserves is rather lengthy, I’ll make a blog post to do so.]

• An excellent post, TLM, although not at all on this topic. I look forward to seeing the blog post it has prompted.

• TLM, OK, first, where the hell are you getting your information, because I see in your portrayal absolutely no resemblance to the science I see being done.

TLM: “At the moment the very vital sceptical process is being left to amateurs and deluded deniers on blogs”

Absolute horse puckey! Every climate reconstruction, every new satellite measurement, every borehole dug, every phenological investigation, every determination of climate sensitivity has the potential to change the model. Barton Paul Levenson has a list of confirmed predictions of climate models. Read it:
http://bartonpaullevenson.com/ModelsReliable.html

Any one of these investigations could have gone the other way. The reason why the overwhelming majority of scientists support the current consensus model is because the overwhelming majority of evidence supports it.

5-sigma proof–Oh, come on! No scientific experiment gives 5-sigma proof. The reason why particle physics demands it has to do with the nature of the analyses they conduct. See this article from Physics Today:

I would suggest that you talk to some actual scientists rather than relying for your opinions of science on morons writing in the popular press who insist on false balance.

• TLM

“Barton Paul Levenson has a list of confirmed predictions of climate models.”
Does he have a list of unconfirmed predictions of climate models?
Or is he saying that all the predictions from climate models have been confirmed?
We need to see both lists so that we can decide whether the “wins” outweigh the “losses”. That is what I find so frustrating, I cannot go to one site and see both sets of evidence.

Everybody seems to have a view and is only putting forward the evidence that supports that view.

• TLM: Both sides suffer from very bad cases of cognitive bias, always dismissing out of hand any argument that goes against their “belief system”.

BPL: This is just wrong. We do not dismiss arguments out of hand. But once we have analyzed an argument and found it to be dead wrong, we won’t revisit it. That’s not cognitive bias, that’s bloody common sense.

30. Bernard J.

Response: Is it possible … that I’ve been visited by a real skeptic? It remains to be seen — but I’ll keep an open mind.

Given some serious errors of evidence and of logic, my response to the question is a firm “no”. However, you do seem to have landed a remarkable tone troll – polyphonic, I’d say…

And I note that his arguments are subtly structured to poison in advance the well of anyone who disagrees with the post. Heartland must be wishing that their staff were this good.

• TLM

Tone troll? What on earth is that? I am not subtly structuring anything. I certainly don’t want to poison the discussion for anyone who disagrees. I am sure most people around here would disagree!

You are inferring negative intentions that are not there. I am convinced that global warming is happening and our emissions of CO2 are a major contributor to that. I think we should be reducing our dependence on fossil fuels and developing alternatives with a lower environmental impact. I hate the way we are despoiling our environment, particularly the mass slaughter in our seas. I have shares in two companies developing innovative low voltage lighting. I am a member of the RSPB and Greenpeace, for god’s sake!

However I think the jury is out on the degree of climate sensitivity to forced warming / cooling and am worried that the only people questioning it seriously (or that get any publicity) are not true sceptics but are simply trying to push an agenda rather than get to the truth.

However, if I do a simple 15 year linear regression on the HadCrut3 monthly series I see that the slope has turned negative. Sea level rise, far from accelerating is decelerating. Despite local hot spots, global tropospheric temperatures look pretty stable. If the climate is so sensitive to warming then surely the sudden warming in the ’80s and ’90s should be leading to even faster warming by now?

[Response: It’s time for skepticism to cut both ways. What is your basis for this claim? Have you investigated how long noise can make apparent non-warming persist even in a warming climate? Do you really know how fast warming is predicted to accelerate? Did you compute the uncertainty level of that “turned negative” slope? What’s the slope for the same time span in the other global records? Who picked “15 years” — and why? Or did you just hear something like that from a fake skeptic and find the argument persuasive (which is quite understandable)? You might find this informative. And by the way, sea level rise is accelerating, not decelerating.

You seem genuinely to want to get at the truth. That begins by getting the facts straight.]

Scientists seem to be trying to “explain this away” rather than looking at it and seeing if it is evidence of lower climate sensitivity. Why should this be left to amateurs and crackpots?
Is there no room for doubt?
Does anybody who questions this have to be a “tone troll” or “denier”?
Am I asking this on the wrong web site? (Probably!)

[Response: This is the right place to ask.]

• TLM, a tone troll is one who ignores the substance of an argument and focuses instead exclusively on how nasty/elitist/… the poster was. If you are interested in truth, you take it where you find it. It doesn’t matter if the source is “nice”.

Hmm. So many questions. Why 15 years? Why HADCRUT3 when it has been superseded by HADCRUT4? Why specifically the 15 years from 1997.5 to 2012.5, which start with a very strong El Nino and end with La Ninas?

You keep asking for the “other side”. Why do you assume it is there? Why do you suppose 97-98% of climate experts say it is not? Has it occurred to you that one side has facts and evidence and the other has lies and stupid?

31. Assuming identical odds of temperature being higher, lower, or average for each month, and no influence from month to month, every particular 13-month sequence is equally unlikely. If we get a period of:

Warmer-Warmer-Warmer-Warmer-Colder-Colder-Colder-Colder-Average-Average-Average-Average-Warmer

…we are still experiencing one-in-a-million odds. In fact, -every- 13-month period will have these odds.

32. TLM

I am flattered, [Blush], and looking forward to your post.

Surely every scientist is a sceptic? “Real” sceptics should not be such rare beasts and should not be afraid to label themselves as such.

I am no scientist, just an interested lay observer trying to pick my way through this particular minefield. I am always looking for polite and reasoned discussion.

To me the web site “Skeptical Science” has a better tone than most – seeking to explain the science to the non-scientist – and it makes a very good job of debunking the usual mistaken theories. I do sometimes wish it was a bit more sceptical though. It always seems to make the assumption that all arguments against AGW are, de facto, wrong. You just need to find a suitably convincing argument why.

I am worried that scientists are afraid of taking a properly critical and sceptical look at others’ work on climate in case they end up being dumped in the same camp as Lindzen, Christy, Spencer et al. Is that a fair accusation?

• TLM: It always seems to make the assumption that all arguments against AGW are, de facto, wrong.

BPL: They ARE de facto wrong. Do you understand what “de facto” means?

If bloggers with some competence in climate science tend to dismiss an argument a poster raises, chances are more than 99% because that argument was raised before, years ago, thoroughly refuted at the time, and has been repeated 10,000 more times on the blogosphere. One gets sick of killing the same vampire over and over and over and over and over again. They should just stay dead once you pound in the ash stake, stuff the mouth with garlic, sever the head, place it face down in the coffin, and throw the coffin into running water.

• TLM

BPL: They ARE de facto wrong. Do you understand what “de facto” means?
Er, no actually. Didn’t do Latin at school and it obviously does not mean what I thought it did! Let me try again: “all arguments against AGW are, per se, wrong.”
I am just giving you my impression as a lay observer about how their attitude to apparently contrary evidence comes across. I find the content on the site very helpful though and I am glad it is there. Probably the best of the bunch as far as I am concerned. Keeps all those vampires well and truly nailed down!

• TLM of Skepticalscience.com: “I do sometimes wish it was a bit more sceptical though.”

Really, do you wish scientists were also similarly skeptical of the existence of atoms? Of relativity? Of the electron, perhaps? The evidence supporting all of these theories/constructs is contemporaneous with the evidence supporting anthropogenic warming. The fact is that all of the arguments advanced by the denialists so far have been wrong and demonstrably so–and yet they keep advancing them again and again.

TLM: “I am worried that scientists are afraid of taking a properly critical and sceptical look at others’ work on climate in case they end up being dumped in the same camp as Lindzen, Christy, Spencer et al. Is that a fair accusation?”

No, it is not. Others’ work? Whose? I would be willing to bet that any climate scientist worth his salt is keeping up with the scientific literature on the subject. What else should they be looking at besides the peer-reviewed literature?

Now I’m gonna throw out an absolutely radical concept here. Did it ever occur to you that the reason why the overwhelming majority of scientists might believe in anthropogenic climate change could be because the evidence is overwhelming? Did it ever occur to you that after accumulating strong evidence for a while–say 116 years–a concept might cease to be controversial among those who actually understand the evidence?

33. This, from KR, sums it up well. I repost it here in case he is too modest to do so:

“The other issue I have with this thread is that your Poisson fit is purely descriptive – the observations fit a curve which predicts the observations, in a dog-chasing-tail fashion. I got roughly the same quality of fit with a cubic spline, and with a skewed Gaussian. In each and every case that description of the data has a close to 1:1 match to the observations it’s derived from.

But the whole discussion is about how likely those observations would be given the full record and the observed variance. For that you need a prediction (not a derivation) from the statistical qualities of the data, and you have not done that half of the investigation. The only thing you have stated is The observations closely resemble… the observations. That’s not a probability test.”

@cwhope

34. MO

TLM said: I am worried that scientists are afraid of taking a properly critical and sceptical look at others work on climate in case they end up being dumped in the same camp as Lindzen, Christy, Spencer et al. Is that a fair accusation?

Quite the opposite really. See Richard Muller. He was a darling of fake skeptics when he said some rather shabby things about climate scientists. He didn’t make many friends among the latter, but I don’t see him being lumped in with the Spencers of the world.

Of course, he ended up confirming the work of those he trashed, so he’s no longer welcome in the fake skeptic community.

-MO

• Exactly, MO! TLM, Muller is the perfect example to examine. His attitude represents the one I think most reputable climate scientists would also represent: you go where the data tell you to go.

I think snarkrates has made some really good points. And as for sensitivity, I’m not aware of any reputable climate scientists who would assert we know with absolute certainty what the sensitivity is, and that’s why it is presented as a range of possibilities.

35. TLM, I believe there are good reasons that you find “the assumption that all arguments against AGW are, de facto, wrong” at SKS and elsewhere.

For a start, the idea that the earth’s outer layers are currently out of thermodynamic equilibrium, and warming as they adjust towards equilibrium, is well established by multiple lines of evidence. So arguments against the “GW” part of AGW would have to reach a really high bar to be persuasive…it would really fly in the face of a lot of well-established physics, not to mention the observational data. Nevertheless you see these arguments very frequently. They are typically based on observational data from too short a timeframe to separate the signal from the noise, or non-physics based mathematical analysis which ignores important constraints of the actual system, or both. After a while, one learns to recognize these zombie arguments quickly, and there seems little point in treating each wave of them as if they deserve a fresh look.

And then there are the arguments targeting the “A” of AGW, i.e. the role of human activity in the current state of disequilibrium. The same factors apply. The effect of human activity on the makeup of the atmosphere is well known, some of the impacts of those changes on the behavior of the system are very well understood, and some (like cloud effects) less so, but it is well-established by multiple lines of evidence that the net effect of human activity is warming. Persuasive arguments against this would have to meet a high bar. Unpersuasive arguments on this point are common and with experience become very predictable. Hence they are generally quickly dismissed.

So, in short, I think you would find that plausible arguments that have not already been addressed multiple times will not be assumed wrong, de facto. And, I think that to dismiss implausible arguments which one has already analyzed in multiple forms in the past is a mark of rationality, not of closed-mindedness.

• TLM

What a brilliantly argued reply. I cannot fault it (not that I would want to). I am sure most of my nagging doubts can be rationally countered. As discussed earlier part of my worry is that of a dismissive and off-putting tone that seems to treat the poster as patently stupid which tends to put people’s backs up. I am afraid in this thread it is Snarkrates that exhibits this characteristic most obviously. He makes some very good points but with a dismissive tone. It makes no difference to what he says, it just makes one less inclined to read his posts (I have done so, by the way, albeit through gritted teeth). He comes across as the kind of guy that I would avoid at a party.

By the way, I use HadCrut3 because I always have, the information is more readily available than HadCrut4 and there is no data for HadCrut4 after 2010 that I can find. I know it adds extra data for the Arctic, that it knocks off the 1998 spike and makes 2010 the warmest year – but as I like to look at trends rather than individual years does it really matter? If the Met office starts updating it as regularly as they do HadCrut3 then I will, of course, start using it.

I still have a nagging feeling that the current 10 – 15 year hiatus in the warming might be more than just another brief step down on the up escalator – this is drawn mainly from a simple analysis of the application of rolling linear regressions at 50, 30, 25 and 15 year periods on the temperature record. In particular the 50 year rolling linear regression seems to show a rough 60-63 year periodicity in rising and falling rates of warming (no long term cooling, just a fall in the rate of warming). The 15 year line has taken a very sharp downward spike rather similar to the one in the 1940s that heralded a 30 year period of static or falling temperatures. This is exactly where you would expect it to be if the 63 year “peak to peak” periodicity actually exists. There has not been such a sharp fall in the 15 year linear regression line since the 1950s.

I have no explanation or hypothesis as to what might cause this; it is just an observation. I am hoping that someone has done, or is doing, some work on this. I do not have the skill or knowledge, I just like playing with Excel!

If I get time I will post in the other thread with some graphs.
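The rolling-regression exercise TLM describes is easy to reproduce. The sketch below (my own illustration) recovers a pure trend exactly; applying the same function to trend-plus-noise shows why short-window slopes such as a 15-year fit swing wildly and prove little on their own:

```python
import numpy as np

def rolling_slopes(y, window):
    """OLS trend (y-units per step) in every sliding window of length `window`."""
    y = np.asarray(y, dtype=float)
    t = np.arange(window, dtype=float)
    t -= t.mean()                       # centering t makes the intercept drop out
    denom = np.dot(t, t)
    return np.array([np.dot(t, y[i:i + window]) / denom
                     for i in range(len(y) - window + 1)])

# A pure trend of 0.02 per step is recovered exactly in every window:
print(rolling_slopes(0.02 * np.arange(100), 30)[:3])
# Adding noise makes short-window slopes scatter widely, even below zero,
# which is why a single negative 15-year slope proves very little by itself.
```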

36. agres

The actual odds depend on how much warmer each month was, as expressed in standard deviations. If each month was only slightly warmer than normal, then the odds are lower than if some months were one or more full standard deviations warmer than normal.

By summing up the number of standard deviations by which each month was warmer, over the entire period, one can estimate how unusual the system’s behavior was.
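agres's suggestion amounts to summing z-scores. Under an unchanging climate the distribution of that sum is easy to write down, and autocorrelation widens it (a sketch assuming unit-variance standardized anomalies and an AR(1) model):

```python
from math import sqrt

def sum_z_sigma(n=13, phi=0.0):
    """Standard deviation of the sum of n standardized anomalies following
    an AR(1) process with lag-1 autocorrelation phi (unit marginal variance)."""
    var = n + 2 * sum((n - k) * phi ** k for k in range(1, n))
    return sqrt(var)

print(sum_z_sigma(13, 0.0))    # independent months: sqrt(13), about 3.61
print(sum_z_sigma(13, 0.15))   # autocorrelation widens the null distribution
```

Comparing the observed z-score sum to this null spread would give a rough significance level for the 13-month run, though, as with the run-counting approach, the answer depends on the noise model assumed.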

37. Martin

It seems that I was unfortunate in posting my question right before TLM’s lengthy post urging this site to be even more skeptical. The ensuing discussion about the right degree of skepticism pushed other matters aside.
So – still hoping for an answer – I’ve decided to repeat my question:

“Why is the month-to-month autocorrelation so large for the global temperature (0.936) and so small (0.150) for US48?”

• Phil.

I’m not sure, but I’d assume that the autocorrelation for locations in the ocean would be large compared with land?

• I’m not sure I know the full answer, but a partial answer is: noise. There will be unavoidable weather noise (larger, I think, than any instrumental error) in any temperature series. The more spatial averaging you do, the smaller this noise will be. And the noise is likely to have little temporal autocorrelation, so it will tend to destroy the autocorrelation in the “underlying” series, insofar as it makes any sense to speak of an underlying series.

• Laurie

I assume it’s because there’s almost no annual cycle in global averaged T, but a strong seasonality in a region such as the US that is contained in one hemisphere.

• Gavin's Pussycat

Eh, no, not if you’re working with anomalies.

• Gavin's Pussycat

I believe the explanation is surprisingly simple: large-scale patterns take longer to change. A large part of global variability is connected with ENSO, with a time scale of years; US variability would be dominated by the synoptic scale of weather systems, both spatially and temporally. A week or so.

• nickleaton

I believe the explanation is surprisingly simple: large-scale patterns take longer to change.

==============

There is a problem with this analysis. If temperatures took a long time to respond, you wouldn’t see much day/night temperature variation.

However, there is a rapid response to the sun going down. So the response is fast

• And localized around the terminator…

• @nickleaton: As the spatial scale increases, the amount that the sun “goes down” decreases, until at the global scale, the sun doesn’t go down at all.

• nickleaton

That still doesn’t explain it.

dT/dt = -r(T - T_eq) is a simplistic differential equation, with T_eq the temperature the system is relaxing towards. r is the rate at which the earth responds to a change in temperature. Since there is a large day/night variation, r is going to be large. That day/night variation is exhibited all across the earth. So any change is going to be reflected almost instantly (as far as any long-term change is concerned).

There are claims for long term lags, and given the rapid response over 24 hours, it doesn’t make sense.

• KR

nickleaton“There are claims for long term lags, and given the rapid response over 24 hours, it doesn’t make sense.”

There is a very simple matter here – large scale variations (night/day) affect a very small thermal mass – see http://www.skepticalscience.com/graphics.php?g=12. When you look at the oceans (>93% of the climate thermal mass), or the ground, they don’t vary much. The atmosphere (2.3%) can vary a lot with very little penetration into the climate as a whole.

The short term large swings are very much the tail on the dog. Given the ongoing warming of the climate as a whole (see http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/) it takes time to see the significant changes from what are rather small imbalances.

So night/day, winter/summer changes are quite large. And they affect quite small portions of the thermal mass of the climate. In order to examine the response of the climate to warming you need to look at both fast and slow responses (http://web.archive.org/web/20100219025332/https://tamino.wordpress.com/2009/08/17/not-computer-models/) if you really wish to understand the response.
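KR's small-thermal-mass point can be illustrated with a toy two-box model (my own sketch with arbitrary parameters, not a calibrated climate model): a low-heat-capacity "atmosphere" box responds to forcing almost immediately, while the coupled system, dominated by a high-heat-capacity "ocean" box, equilibrates far more slowly.

```python
def step(Ta, To, F, dt, Ca=1.0, Co=500.0, lam=1.0, k=2.0):
    """One explicit-Euler step; F is the forcing, lam the radiative damping,
    k the atmosphere-ocean coupling. All units are arbitrary."""
    dTa = (F - lam * Ta - k * (Ta - To)) / Ca
    dTo = k * (Ta - To) / Co
    return Ta + dt * dTa, To + dt * dTo

Ta = To = 0.0
early = None
for i in range(500_000):                  # 5000 time units at dt = 0.01
    Ta, To = step(Ta, To, F=1.0, dt=0.01)
    if i == 99:                           # after just 1 time unit...
        early = (Ta, To)
print(early)    # the atmosphere box is already well off zero; the ocean has barely moved
print(Ta, To)   # eventually both approach the equilibrium F/lam = 1
```

This is exactly the fast-response/slow-response distinction in the linked post: a fast diurnal response of the thin atmospheric layer is entirely compatible with a multi-decadal lag in the full system.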

• nickleaton

There is a very simple matter here – large scale variations (night/day) affect a very small thermal mass – see http://www.skepticalscience.com/graphics.php?g=12. When you look at the oceans (>93% of the climate thermal mass), or the ground, they don’t vary much. The atmosphere (2.3%) can vary a lot with very little penetration into the climate as a whole.

========

Quite. Now the whole point of GW is that it’s the weather, not that the ground is going to get baking hot; i.e., the claim is that it’s the atmosphere that is warming.

If you look at the day/night variation, it’s highest in deserts and lowest over the ocean. But even over the ocean it’s still 10 degrees over the course of a day, in spite of the thermal mass. It’s a rapid change.

So if you have a large thermal mass, as you say, why does the atmosphere (where the climate lies) change by 10 degrees in 12 hours?

• DSL

By the way, the comment stream on the last link of KR’s is one of the most entertaining — lucia, Andy Pandy, and climategate — all in one thread.

• KR

nickleaton – Regarding short term variations I would strongly suggest you read http://www.skepticalscience.com/trend_and_variation.html – and watch the video. The ocean is the hyperactive Chihuahua (very small percentage of thermal mass) of the climate, and you need long observations (20-30 years) to clearly identify trends.

• Gavin's Pussycat

Nickleaton, that (rapid changes around sunset, e.g.) is part of the climatic background. You should look at variations or anomalies relative to that, because that’s what [autoco]variances are computed from.

• nickleaton

Let me try and express it another way.

Climate = temperature of the atmosphere. That’s why all the records we are talking about are records of atmospheric temperature.

So what process is going on? We have a heating mechanism – primarily the sun, and the atmosphere that is heated by the sun, but also by the stored heat in the ground or ocean. I don’t believe that radioactivity in the earth is significant at the surface.

So if the stored heat is so large, it will act as a large damper on the temperature variation; i.e., the r constant in the differential equation will be small. The end result is that you get little diurnal variation, or equivalently longer lags: they mean the same thing. Now, we do have variations in thermal capacity: high over the ocean, low over the desert. So longer lags and smaller r for the ocean; shorter lags and larger r for the desert, corresponding to small and large diurnal variations. To put some numbers on it: 10 degrees for the oceans, 50 for the desert, in 12 hours.

So there is no way you’re going to get long lags unless something else is going on.

“To elaborate, the same applies for the seasonal cycle. Background, to be subtracted out before addressing natural variability.”

Agreed. But it’s not the argument I’m making. I’m making the claim that seasonal variation will be reflected in hours, not months.
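[The amplitude/lag link argued above can be made quantitative. For sinusoidal forcing in the same relaxation model, dT/dt = -r(T - T_eq(t)) with T_eq(t) = A·sin(ωt), the steady-state response has amplitude A·r/√(r² + ω²) and lags the forcing by arctan(ω/r)/ω. A sketch with hypothetical rate constants; the “desert” and “ocean” values are assumptions, not fitted numbers:]

```python
import math

def diurnal_response(r, period_hours=24.0):
    """Steady-state response of dT/dt = -r*(T - T_eq(t)) to sinusoidal
    forcing T_eq(t) = A*sin(w*t).
    Returns (fraction of forcing amplitude, lag in hours)."""
    w = 2 * math.pi / period_hours
    ratio = r / math.hypot(r, w)   # amplitude attenuation factor
    lag = math.atan2(w, r) / w     # hours the response trails the forcing
    return ratio, lag

# Hypothetical rate constants (per hour): fast "desert", slow "ocean".
ratio_d, lag_d = diurnal_response(1.0)
ratio_o, lag_o = diurnal_response(0.05)
print(round(ratio_d, 2), round(lag_d, 1))  # near-full amplitude, short lag
print(round(ratio_o, 2), round(lag_o, 1))  # damped amplitude, longer lag
```

[So a small r does buy you both a smaller diurnal swing and a longer lag at the same time, which is the trade-off both sides of this exchange are pointing at.]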

• KR

Gah. Regarding my previous post, the atmosphere is the hyperactive Chihuahua, changing temperature quite rapidly as the shortest response time element – not the ocean. Typing too fast… my apologies.

nickleaton – Fast atmospheric changes are only to be expected, but the long term mean (climate, as opposed to weather) takes much longer to shift, as it’s buffered by the inertia of the 40x larger thermal mass of the oceans.
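[KR’s fast-plus-slow point can be sketched with a toy two-box model: a small-heat-capacity “atmosphere” coupled to a large-heat-capacity “ocean”. Every number below is an illustrative assumption, not a calibrated climate model; the only point is that one system can equilibrate its fast box within a year while the coupled response to a sustained forcing takes many decades.]

```python
def two_box(forcing=1.0, c_atm=0.1, c_ocn=40.0, k=0.7, dt=0.01, years=50):
    """Euler-step a toy energy balance (all units arbitrary):
         c_atm * dTa/dt = forcing - Ta - k*(Ta - To)  # fast box + damping
         c_ocn * dTo/dt = k*(Ta - To)                 # slow box ("ocean")
    Returns (t, Ta, To) samples, one per model year."""
    Ta = To = 0.0
    samples = []
    steps_per_year = int(round(1.0 / dt))
    for i in range(1, years * steps_per_year + 1):
        Ta += dt * (forcing - Ta - k * (Ta - To)) / c_atm
        To += dt * k * (Ta - To) / c_ocn
        if i % steps_per_year == 0:
            samples.append((i * dt, Ta, To))
    return samples

out = two_box()
_, Ta1, To1 = out[0]      # after 1 year: fast box is near its quasi-equilibrium
_, Ta50, To50 = out[-1]   # after 50 years: slow box (and the mean) still rising
print(round(Ta1, 2), round(To1, 3), round(To50, 2))
```

[The fast box settles within the first model year, yet the system as a whole is still warming at year 50; rapid diurnal response and long climate lags are not in conflict.]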

38. Jon

“So if you have a large thermal mass as you say,…”

If? You aren’t really questioning whether the oceans have a large thermal mass, are you? The heat capacity of water is, after all, a fairly well established scientific fact.

“… why will the atmosphere (where the climate lies) change at 10 degrees in 12 hours?”

Because that’s what happens to air that is in contact with a large body of colder water when you alternately add sunshine, which warms up the top layer of water and the air above it, and then remove the sunshine, allowing the top layer of water and the air above it to cool down again as they lose heat to the deeper layers of water. I’m struggling to imagine why you would expect the system to behave otherwise.

39. Gavin's Pussycat

To elaborate, the same applies for the seasonal cycle. Background, to be subtracted out before addressing natural variability.
And surely you’ve seen a weather map with a low-pressure system majestically sailing over the British Isles, with temperatures underneath rapidly going up and down in their diurnal rhythm — but leaving the big pattern alone?

40. American Idiot

An analogy may help: Think of splashing the water in a tub. At any one place the depth of the water will fluctuate a lot. But the average for the tub as a whole will stay the same, since the total amount of water hasn’t changed.

Another explanation: Weather systems tend to cause patterns where it’s warm one place and cool somewhere else. For example, my part of the world had a near-record warm winter this year, while at the same time parts of Europe had killer cold spells. The result of these transient systems is a fairly low temporal autocorrelation. But for the Earth as a whole it all balances out, because temperature is controlled by the global balance between incoming and outgoing radiation — there’s no “someplace else” for heat to be transported to or from. Thus the autocorrelation is much higher.
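[This is exactly why the choice of autocorrelation parameter mattered so much in the calculation discussed in the post. A quick Monte Carlo sketch, assuming a stationary AR(1) process with unit-variance innovations (trial count and seed are arbitrary), of the chance that 13 consecutive values all land in the top third of the distribution:]

```python
import math
import random

def prob_13_top_third(rho, n_trials=20000, seed=0):
    """Monte Carlo estimate of the chance that 13 consecutive values of a
    stationary AR(1) process, x[t] = rho*x[t-1] + N(0,1), all fall in the
    top third of the stationary distribution."""
    rng = random.Random(seed)
    sigma = 1.0 / math.sqrt(1.0 - rho * rho)  # stationary std dev
    z = 0.43073 * sigma                       # Phi^-1(2/3): top-third cutoff
    hits = 0
    for _ in range(n_trials):
        x = rng.gauss(0.0, sigma)             # draw from stationary dist
        ok = x > z
        for _ in range(12):
            if not ok:
                break
            x = rho * x + rng.gauss(0.0, 1.0)
            ok = x > z
        hits += ok
    return hits / n_trials

print(prob_13_top_third(0.15))    # tiny: near the white-noise odds
print(prob_13_top_third(0.936))   # orders of magnitude larger
```

[With R ≈ 0.15 the estimate stays down near the white-noise odds — small enough that most runs of 20,000 trials see no hits at all, consistent with Lucia’s updated “less than 1 in 100,000” — while with R = 0.936 it comes out near her original ~10%.]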