A Clue for Willis

We already called out Willis Eschenbach for his impudently wrong claim that last summer in Australia was “nothing at all unusual.” Evidently he wants you to believe that the hottest summer on record, the hottest month on record, the hottest day on record, and the longest national scale heatwave on record, all add up to “nothing at all unusual.” Not too bright.


Perhaps he felt stung by having his foolishness exposed so plainly. He should. But instead of admitting how wrong he was, or just keeping his mouth shut and lying low until the “heat” is off, he decided to try to smear the Bureau of Meteorology’s ACORN temperature data. How? By pointing out that out of about four million days of station data, on 917 of those days “the minimum temperature for the day was HIGHER than the maximum temperature for the day … oooogh. Not pretty, no.” Can you feel the glee of his “gotcha!” moment? Can you smell how sure, how absolutely certain, he is that this means the data are screwed up and the folks who maintain it aren’t doing a righteous job? As Willis says,


The issue is that the authors and curators of the dataset have abdicated their responsibilities. They have had a year to fix this most simple of all the possible problems, and near as I can tell, they’ve done nothing about it. They’re not paying attention, so we don’t know whether their data is valid or not. Bad Australians, no Vegemite for them …

I must confess … this kind of shabby, “phone it in” climate science is getting kinda old …

What’s really shabby — and is way beyond “old” — is Willis Eschenbach’s eagerness to criticize what he doesn’t understand.

Here’s a clue for you, Willis: if, out of four million days’ data, there were none in which the minimum temperature for the day was higher than the maximum temperature for the day, then I would know that the “authors and curators” weren’t doing a righteous job. Gosh, Willis, you might even have figured this out for yourself if you were sincerely interested in understanding the data, rather than motivated solely by the desire to discredit it in hopes of distracting attention from your impudent blunder.

Alas, Willis, I strongly suspect that simply giving you a clue won’t be sufficient. I think we’ll have to explain it to you.

The Bureau of Meteorology (BOM) explains its methodology here:


Air temperature is measured in a shaded enclosure (most often a Stevenson Screen) at a height of approximately 1.2 m above the ground. Maximum and minimum temperatures for the previous 24 hours are nominally recorded at 9 am local clock time. Minimum temperature is recorded against the day of observation, and the maximum temperature against the previous day.

So: every morning at 9AM they record the minimum temperature for that day and the maximum temperature for the previous day. They’ll record the maximum temperature for that day at 9AM the next morning. Note that the 9AM reading itself belongs to both windows: the day’s minimum (from the 24 hours ending at 9AM) can’t be above it, and the day’s maximum (from the 24 hours starting at 9AM) can’t be below it. Truly, undeniably, the day’s maximum temperature cannot be lower than the day’s minimum temperature. Can’t happen. Not possible.
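To make the bookkeeping concrete, here's a minimal Python sketch of that convention (my own illustration, not the BOM's code), run on a synthetic minute-resolution temperature series. The thing to notice is that the 9AM sample belongs to both windows, so the true max assigned to a day can never come out below the true min assigned to it.

```python
# A minimal sketch (my illustration, not BOM code) of the 9AM-to-9AM
# convention, using a synthetic minute-resolution temperature series.
import numpy as np

MIN_PER_DAY = 24 * 60
NINE_AM = 9 * 60  # minutes after midnight

# Three days of synthetic temperatures: a diurnal cycle plus slow cooling.
t = np.arange(3 * MIN_PER_DAY)
temp = 20 + 8 * np.sin(2 * np.pi * (t - 5 * 60) / MIN_PER_DAY) - 0.002 * t

def bom_min_max(temp, day):
    """Min and max assigned to `day` under the 9AM convention. Both
    windows include the 9AM-on-`day` sample itself, which is why the
    true max can never be below the true min."""
    at9 = day * MIN_PER_DAY + NINE_AM
    t_min = temp[at9 - MIN_PER_DAY : at9 + 1].min()  # 9AM yesterday..9AM today
    t_max = temp[at9 : at9 + MIN_PER_DAY + 1].max()  # 9AM today..9AM tomorrow
    return t_min, t_max

lo, hi = bom_min_max(temp, 1)
print(f"day 1: min={lo:.2f}C max={hi:.2f}C (max >= min: {hi >= lo})")
```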

But that doesn’t mean that the day’s maximum temperature measurement can’t be lower than the day’s minimum temperature measurement. Every measurement deviates from the true value, and when you read a thermometer and estimate its reading to the nearest 0.1 deg.C, your estimate won’t be perfect. Instead the measured value x for a day’s high temperature will be equal to the true value X plus some random fluctuation

x = X + \varepsilon_x .

Likewise the measured value y for a day’s low temperature will be equal to the true value Y plus some random fluctuation

y = Y + \varepsilon_y .

When we compute the measured difference x-y, it will be the difference between the true high and low temperatures, plus the difference between the random fluctuations

x-y = X-Y + \varepsilon_x - \varepsilon_y ,

or to put it another way,

d = \Delta + \varepsilon ,

where \Delta is the true difference, d is the measured difference, and \varepsilon is the random fluctuation in the difference.

The true difference can’t be negative. Just can’t be. But the random fluctuation sure can. In fact, it’s inevitable that a lot of the random fluctuations will be negative. If none of them are, then we know there’s a problem with the data.

But the true difference sure can be zero. Recall that the daily low is the minimum temperature from 9AM yesterday to 9AM today, while the daily high is the maximum temperature from 9AM today to 9AM tomorrow. You bet the true daily high can be as low as (although it can’t be lower than) the daily low — the true difference can be zero. In fact, it happens surprisingly often.

If the true difference is zero (which happens often), and the random fluctuation in the difference measurement is negative (which happens about half the time), then the measured difference will be negative. It not only can happen, it must happen — or somebody has messed up the data. You can also get a negative measured difference when the true difference is positive but quite small, and the measurement fluctuation is both negative and larger than the true difference. It not only can happen, it must happen — or somebody has messed up the data.
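As a sanity check on this reasoning, here's a minimal simulation sketch; every distribution in it is assumed for illustration (a gamma for the diurnal range, a made-up 0.06% rate of zero-range days, 0.1 deg.C Gaussian reading errors), not fitted to ACORN.

```python
# Simulation sketch: non-negative true differences (a small fraction
# exactly zero), independent measurement noise on high and low readings,
# both rounded to 0.1 deg C. All distributions assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_days = 1_000_000

true_diff = rng.gamma(shape=2.0, scale=4.0, size=n_days)  # typical diurnal ranges
true_diff[rng.random(n_days) < 0.0006] = 0.0              # rare zero-range days

eps_hi = rng.normal(0.0, 0.1, n_days)   # fluctuation in the high reading
eps_lo = rng.normal(0.0, 0.1, n_days)   # fluctuation in the low reading
measured_diff = np.round(true_diff + eps_hi - eps_lo, 1)

neg = measured_diff < 0
print(f"negative measured differences: {neg.sum()} of {n_days} "
      f"({100 * neg.mean():.3f}%)")
```

With these made-up numbers the negative rate comes out at a few hundredths of a percent, the same order as ACORN's 0.03%; that's by construction, of course, not a validation.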

When that happens, you do not get to remove the data, or “fix” the data. Removing all the negative-difference data would introduce a bias into the temperature time series. It would also introduce a plain old mistake into the distribution of difference estimates — as though on zero-true-difference days the measurement fluctuation can’t be negative, which is absolute nonsense.

If we put Willis Eschenbach in charge of the data, he might decide to go Willy-nilly removing all those data values which don’t fit his ignorant expectation. Then, the data really would be screwed up by incompetence.

Of course, we wouldn’t expect the measured differences to be very negative, or to crop up very often, unless near-zero true daily differences were routine. I surveyed the ACORN records which include both a daily high and a daily low temperature, looking for days on which the measured high was lower than the measured low. I found 954 such days out of 3,404,808. That’s not a lot (only 0.03%).
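The survey itself is a one-liner once the data are in one table. A minimal sketch, assuming (hypothetically) that the ACORN series have been merged into a CSV with columns tmax and tmin; the real data are distributed as per-station files, so some assembly is required first.

```python
# A sketch of the survey described above. The filename and the column
# names "tmax"/"tmin" are hypothetical; the real ACORN data come as
# per-station files and need to be combined first.
import pandas as pd

df = pd.read_csv("acorn_all_stations.csv")   # hypothetical combined file
both = df.dropna(subset=["tmax", "tmin"])    # days with both readings
negative = both[both["tmax"] < both["tmin"]]
print(len(negative), "of", len(both))        # the post finds 954 of 3,404,808
```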

More to the point, the vast majority of the negative differences were very small. Here’s the count for each difference value:

[Figure: counts of days at each value of the negative difference]

The most frequent negative difference is 0.1 deg.C, the smallest possible for data recorded to the nearest 0.1 deg.C, which is what we would expect. Out of 954 days with negative difference, 182 (19%) were only 0.1 deg.C.

We can get a clue about the distribution of negative differences by plotting the counts on a logarithmic scale:

[Figure: the same counts plotted on a logarithmic scale]

There’s a nearly linear relationship between the size of the difference and the logarithm of the count of days which exhibit that difference; in other words, the counts fall off roughly exponentially as the difference grows. This is hardly unexpected, given the behavior of random fluctuations.
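Continuing the simulation sketch above, the same behavior is easy to reproduce: tally the simulated negative differences by size and fit a straight line to log(count) versus difference.

```python
# Continuing the simulation sketch above (uses measured_diff and neg):
# tally negative differences by size and fit a line to log(count) vs.
# difference, mirroring the figure.
import numpy as np

diffs, counts = np.unique(measured_diff[neg], return_counts=True)
slope, intercept = np.polyfit(diffs, np.log(counts), 1)
print(f"log(count) ~ {intercept:.2f} + {slope:.2f} * diff")
```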

There is a very small number of large negative differences. The largest of all is 4.8 deg.C, which is not impossible given random fluctuations (measurement error), but I suspect is more likely simply a mistake. In fact, I would estimate that there may be somewhere in the neighborhood of 20 days for which the high temperature estimate is far enough below the low temperature estimate that it may indicate the existence of mistaken data values.

That’s about 20 out of 3,404,808. That’s 0.0006%. That’s a pretty damn low error rate.

That’s because the folks at the BOM have done such an excellent job in constructing the ACORN temperature data. They worked very hard at it, and unlike Willis Eschenbach, they know what they’re doing.

But since his real purpose is to distract attention from his colossal blunder, I would be remiss if I let him get away with it. Therefore I remind you all of the origin of this brouhaha, which is not the quality of the ACORN data; it’s the fact that Willis Eschenbach and most of the contributors to the WUWT blog are so deep in denial, they will actually claim that Australia’s most recent summer was “nothing at all unusual.”

That’s the kind of ridiculous claim they have to resort to, to avoid the truly disturbing truth: climate change has already made such events so much more likely that Australians had better get used to suffering through them, not just once in a thousand years or once in a lifetime, but often enough to threaten their peace and prosperity. The era of the “angry summer” is here to stay.

Worst of all, if climate change continues — and it will — then angry summers will become even more common.

39 responses to “A Clue for Willis”

  1. Unless I’m not understanding the method (always a possibility) I don’t see the need for random error (though it helps), just for a sharp downward gradient. Example (based on my understanding):

    Temperature is varying linearly at a rate of -0.02F/minute over a 48 hour period. It starts at 100F at 9 AM of day 1, is 71.2F at 9 AM of day 2, and is 42.4F at 9 AM of day 3. Measurements are taken each minute.

    The maximum from 9:01 AM on day 1 to 9 AM on day 2 is 99.98F and the minimum is 71.2F. The maximum for the same period of day 2 to day 3 is 71.18F and the minimum is 42.4F. Therefore, the data for day 2 will be entered as:
    Day 2: max. – 71.18F (from day 2-3 period), min. – 71.2F (from day 1-2 period)

    A bigger negative gradient would give a bigger difference. (A quick numeric check of this example follows below.)
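    Here's that check as a short Python sketch (my rendering of the example above, not anything official). The result hinges on excluding the shared 9:00 reading from the max window; include that sample in both windows, as the max/min-thermometer argument elsewhere in this thread assumes, and max >= min is restored.

    ```python
    import numpy as np

    # Linear cooling at 0.02F/min for 48 hours from 100F at 9AM day 1,
    # sampled once a minute (the scenario above).
    t = np.arange(48 * 60 + 1)                   # minutes since 9AM day 1
    temp = 100.0 - 0.02 * t

    nine_am_day2 = 24 * 60                       # index of 9AM day 2

    # Exclusive windows, as in the example (max window starts at 9:01):
    max_excl = temp[nine_am_day2 + 1 :].max()    # 71.18F
    min_excl = temp[1 : nine_am_day2 + 1].min()  # 71.20F
    print(max_excl < min_excl)                   # True: "max" below "min"

    # Shared 9:00 sample in both windows: the paradox disappears.
    max_incl = temp[nine_am_day2 :].max()        # 71.20F
    min_incl = temp[: nine_am_day2 + 1].min()    # 71.20F
    print(max_incl >= min_incl)                  # True
    ```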

    • After thinking about my post below, I think I can answer this.

      Assuming that temperature and time are continuous, any discontinuity in the recorded temperature would be a measurement error. Here you have the lowest temperature in the 24 hours before 9am greater than the highest in the 24 hours after it. Since there’s no jump in actual time or temperature at 9am (quantizing time into minutes merely introduces its own measurement error, by failing to capture the continuity of the temperature), a jump in the measurements is an error.

      A clue is in the method description: “nominally recorded at 9 am”. This suggests some deviation from exactly 9am. If you have a warm night followed by a colder day, with the temperature roughly falling around 9am, and you either measure the temperature at large intervals, OR even if you measure it continuously but have say one 24-hour period end just before 9, and the next start just after 9, then you can have a gap in time around 9am where the temperature can fall significantly.

  2. I don’t understand your explanation.

    It seems as if Willis would naively expect that the min temperature for a day would be recorded over the actual calendar day, from midnight to midnight. I expect this would not be so good: some lows happen before midnight, so you could have one data point for one day right next to the data point for the next day (two data points on one night), and then some nights where no minimum is recorded at all. A 9am cutoff means that you’re fairly sure to capture the low of the previous night, as well as the high of the upcoming day. The lows of the previous night more often happen after midnight, and so are appropriately assigned to “today” at 9am, while the highs happen before midnight, and so are appropriately assigned to yesterday.

    So what I don’t get is why it’s impossible to have a min from the previous night that’s higher than the high for the upcoming day. It should be rare, but if you have a warm night followed by a cool day, it should happen. I don’t think all the occurrences are due to measurement error.

    • Okay… I think I figured out the answer to my question. In order for it to happen without measurement error, you’d have to have a downward jump in temperature at 9am, and have the temperature for the next 24 hours lower than at any point in the previous 24 hours. If the temperature is measured continuously, then the temperature at 9am would have to be greater than or equal to the low of the previous 24 hours, and less than or equal to the high of the next 24 hours.

      I’m dum.

  3. David Appell

    I caught Willis in another bad and blatant error recently, where he tried to claim that the ocean isn’t statistically warming:
    http://davidappell.blogspot.com/2013/06/wuwt-ocean-misunderstanding-and.html

    Basically he was calculating the 2nd derivative of the ocean heat content with time, not the first. I provided a trivial example (a linearly warming ocean) that showed his methodology gives a rate of “0.” He pretended he didn’t understand and never did correct himself.

    People like Willis ensure that denialism will never cease. Never, even when the world is 2 C or 4 C or 6 C warmer. They will always find a way to misrepresent the data and twist the science into giving them any answer they want. There will never be an end to it.

  4. If I understand correctly then the true difference could be negative. If the daily low recorded for a given date is the minimum temperature from 9AM yesterday to 9AM today, and the daily high is the maximum temperature from 9AM today to 9AM tomorrow, then you just need one day to be cooler than the preceding day by more than the diurnal variation of the previous day, and this methodology will give you a maximum cooler than the minimum. For example, if day one had an actual min/max of 20/30C, and day two had an actual min/max of 10/15C, then this methodology would assign the minimum of 20C and the maximum of 15C to the same date.

    [Response: If measurements are made with perfect precision and accuracy over continuous time (which is of course impossible), then the day’s max can’t be lower than the day’s min even by the BOM method.

    Let “x” be the temperature at 9AM precisely, with infinite precision and perfect accuracy. Then the day’s min (from 9AM yesterday to 9AM today) can’t be above x, while today’s max (from 9AM today to 9AM tomorrow) can’t be below x.]

    • I agree with RW. I also do not understand the need for the error explanation.
      On day 1, the maximum temperature is 30ºC, at 4 PM (registered on day 2). On day 2, the minimum temperature is 20ºC, at 8:59 AM (registered on day 2). A cold front is entering.
      The maximum temperature of day 2 (registered after 9 AM, on day 3) is 19ºC, at 9:01 AM.
      The temperature continues to drop throughout the day and does not exceed 19ºC until 9 AM on day 3.
      Probably the negative difference occurs at more than one meteorological station on the same day.

    • My reading of the definition is:

      Max from 9 am previous day < time <= 9 am current day = max temp for previous day

      And

      Min from 9 am previous day < time <= 9 am current day = min temp for current day.

      As such, assuming hourly observations, you could have a cold system move through where the temperature falls all day from early morning to the evening (I've seen that happen here in the Canadian prairies many times).

      Only if you expect temperature observations to be continuous could you call this an "error". I've always interpreted it as a quirk of meteorological definitions.

    • Ah yes, I forgot about that boundary condition. Makes sense to me now mathematically that the difference cannot be less than zero. But it doesn’t make sense to me physically that it can be zero. Doesn’t that require that the value x measured at 9am is simultaneously equal to the lowest temperature measured during the previous 24 hours and the highest measured in the succeeding 24 hours? Not sure what I’m missing here, very grateful if someone can explain this to me.

      [Response: Yes it does require that the 9AM value is simultaneously the lowest of the preceding 24hr and highest of the succeeding 24hr. Indeed that doesn’t happen very often — but it does happen.]

      • Such a situation seems hard to imagine, requiring such a rapid change in the weather from one day to the next that the diurnal cycle is completely suppressed. But if 0.03% of the differences are negative then it would seem reasonable to estimate that about 0.06% of the true differences were zero. That would mean, I think, that roughly every 4.5 years at a given station you’d expect this kind of dramatic cold change, if the occurrences were evenly distributed among all stations. Doesn’t seem outrageously unlikely to me when I think of it like that.
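        Spelling out the arithmetic behind that estimate: if about 0.06% of true differences are zero (roughly twice the 0.03% negative rate, since about half of the zero-difference days fluctuate negative), then the expected spacing between such days at a given station is

        1 / 0.0006 \approx 1667 \text{ days} \approx 4.6 \text{ years} ,

        consistent with the rough 4.5-year figure above.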

      • We have family visiting Taipei right now, and the weather report gives several days with no variation, so such min>max errors are possible then.

  5. arch stanton

    I see commenter “Johanna” bothers to do enough research to uncover the explanation (which Willis didn’t bother with). Unfortunately she then plays Twister with logic to come to a conclusion that is palatable to Watts’ fans.

    Farther down we see Watts’ detailed reply to Tamino’s previous post “REPLY: there’s no point in paying attention to Grant Foster aka “Tamino” his rants are irrelevant – Anthony” (To Gonzo).

    Obviously Watts knows his audience: a summary dismissal from a trusted authority (an appeal to authority, not even an argument) is all the average Watts reader needs to verify credibility. The irony obviously escapes them.

  6. I think the min>max problem is one of definition. They measure 9-9, but then quote for a calendar day. They have to decide which side of midnight each falls, and the convention is min today, max yesterday. Yesterday’s measured min could exceed the assigned max.

    [Response: I don’t agree. If one uses a max/min thermometer, then the min from one 24hr period is a lower bound for the max of the next 24hr period.

    This much is certain: fluctuation of measured values is ubiquitous; measurements are not always equal to physical values. Even when the thermometer gives the correct temperature, just reading the scale to the nearest 0.1 deg.C is a challenge (as those of us brought up on slide rules can testify).]

    There is probably a little more to it. Some stations, e.g. Cabramurra, Cape Otway, Wilson’s Promontory, had many discrepancies. I suspect they were managed by non-BoM public authorities (SMA, lighthouses) and kept a different schedule, which may mess up the conversion.

    [Response: Or perhaps those locations are more susceptible to near-zero true differences, and therefore to below-zero measured differences.

    It’s interesting how much effort readers here are willing to invest in order to understand what is *actually* going on. Willis Eschenbach and most WUWT readers, so it seems, would rather remain clueless.]

  7. Traditionally, temperatures have not been recorded continuously. A recording max/min instrument similar to “Six’s Thermometer” has been the norm.

    http://en.m.wikipedia.org/wiki/Six's_thermometer

    Such instruments are manually reset at the end of observation, which implies a short but definitely non-zero interval between observation periods. So, theoretically, I would expect that a daily maximum could actually be lower than the minimum, since they are from different non-inclusive periods.

    Day 1: Min 20 C, from 8:59; still cooling as a cool front moves in.
    Day 2: Max 19.9 C, from 9:00; continues cooling for a few hours, then temps stabilize and hold until evening.

    Both max & min would then fall within the same recorded day, no? The circumstance would surely be unusual; you’d need a negative trend around 9 AM, and one for which that negative trend was greater than diurnal variation during day 2. But with 4 million days…. (Actually, I think I’ve lived through a couple of days more or less like that–though never in Australia, unfortunately.)

    If I’m reading things correctly, that was the scenario envisaged by a couple of commenters above, but this thing is darn confusing, making unambiguous explanations challenging to construct.

    [Response: One could also state that the *true* 9AM-to-9AM min was 19.9 C, and treat the 20 C thermometer reading at 8:59AM as a measurement fluctuation from the true value. The result is the same: the measured value deviates from the true value … as it always can.]

    • Rattus Norvegicus

      I’ve just spent more than a few minutes thinking about this, having initially approached it with the “Tamino’s wrong” viewpoint. After making my head hurt I have come to the conclusion that Tamino is right. However, there is one caveat: the BoM says that when a reading can’t be taken, the measured min and max values are assigned as an aggregate. I take this to mean that the same values are assigned for min/max for all of the days that did not have a proper observation. I haven’t thought through what could happen there, and it may account for the odd outliers; but since about 2/3 of the measurements in question show a difference of 0.5C or less, it would seem the problem which Tamino highlights is dominant.

  8. Does he sell advertising space on his blog?

    I’ve got a couple of bridges I’m looking to get rid of to a credulous, I mean creditworthy buyer, and choosing where to advertise means finding a pre-selected group of likely prospects.

    • I’ve read that those could-you-hold-these-millions-for-me scam emails are deliberately ridiculous in order to screen out the incredulous. Surely there is an application here for this approach?

  9. Hmm, somehow the words “temperatures have not” got chopped out of my post above. Love my tablet, but posting lengthy comments with it has some drawbacks.

  10. I misread the title as “A Cure for Willis.”

    Naturally I clicked, as to date, there is no known cure for Willis.

  11. Michael Sweet

    Tamino,
    According to the methods section of ACORN (available here): when there are separate maximum and minimum thermometers at a single site, they are calibrated against each other when they are reset. If they agreed within 1F before 1 September 1972, or within 0.5C after that date, they are considered within calibration and the readings are accepted. Therefore it is possible for the 9:00 reading to be 15.0C on the minimum thermometer and 14.5C on the maximum. If the temperature then went down for the rest of the day, the minimum would be higher than the maximum and the reading would be considered accurate. The measurements that are within 0.5C of each other are likely mostly accurate readings from thermometers that are just not in agreement, but within the specification of the measurement.

  12. One additional issue to consider is the device that is actually doing these measurements. In the past, it was likely to have been some sort of mechanical contrivance that pushed one “knob” up as temperatures rose and another “knob” down as temperatures fell. These knobs are not allowed to slip back as the temperature recedes from high or low. Every 9AM the knobs are read for minimum and maximum temperatures and then are manually reset.

    Such mechanical devices are subject to various problems as they function. Maybe they do slip and settle back a little bit. In such circumstances, there might even be a systematic error that would allow the minimum to exceed the maximum when a certain temperature pattern is followed. Such a situation would not follow the “noise” error analysis above. It would also possibly explain the long tail and the 4.8C reading.

    (Come to think of it. The mercury thermometers we used to take our body temperatures when I was a lad worked more or less this way. The mercury rose when the thermometer was in your body, but had to be shaken down to “reset”.)

  13. P.S. Looking further: I see that the Six’s thermometer described in the wikipedia page linked from Kevin McKinney’s post above works more or less the way I am envisioning. In fact, it is subject to another systematic problem: the minimum and maximum are read off of different scales. If the scales are not perfectly calibrated with each other then there will be a systematic difference between the minimum and maximum temperatures – and a possible negative value when the minimum and maximum are supposed to actually be the same value.

    In fact, looking carefully at the photo of the device on that Wikipedia page, you can see that the two scales in fact appear *not* to be calibrated perfectly. The mercury on the left (minimum) side (on which higher temperatures are *down*) appears to read just over 24C, but on the right (maximum) side it looks like about 23.5C. Voila – a minimum bigger than the maximum.

    One way to tell if this sort of thing is going on would be to look at the *dates* of these anomalous days. If they ceased when electronic devices became widely used, then probably this sort of thing is the explanation.

  14. A climate denier named Willis
    Wrote essays that really were sillis
    We tried to correct him
    Or even eject him
    But soon the responses got billis.

    (with apologies to Ogden Nash)

  15. The pseudo-intellectual analyses of Eschenbach, Tisdale and Monckton are among the most damaging and Willard Watts does the world a disservice by providing a wide audience for their semi-informed expositions.

  16. This could also be explained by a fast-moving front crossing the location in the early morning. For example: at 4 am the temp on Day 1 bottoms out at, say, 10°. It rises to 12° until 8 am, when the front comes through and it begins to fall. The daily max/min is read at 9 am, and the 10° is assigned as the daily minimum for Day 1. But the front continues to drop the temperature until it reaches its *actual* Day 1 low at 11 am, of 4°. The daily high in the afternoon is 9°, which is below the *recorded* daily low of 10°.

    Thus no error of measurement, simply an artifact of the way temperatures are read and assigned.

    [Response: But that’s not the way it’s done. The 9AM temperature (12C) would be recorded as the max for that day.]

    • Uh, no. The 9 AM max is recorded as the max for the previous day. Hence the possibility.

  17. Blair Trewin

    The situation actually arises because, where adjustments are carried out to the data (e.g. because of site moves), the maxima and minima are adjusted independently. What this means is that if the maxima at a site in a given year are adjusted downwards because the former site is warmer than the current one (or if the minima are adjusted upwards because the former site is cooler), and you have a day when the diurnal range in the raw data is zero or near zero, you could end up with the adjusted max being lower than the adjusted min (e.g. if the raw data have a max of 14.8 and a min of 14.6, but the mins are adjusted up by 0.4, you would end up with a max of 14.8 and a min of 15.0).

    What this reflects, in essence, is uncertainty in the adjustment process (the objective of which is to provide the best possible estimate of what temperature would have been measured at a location if the site on that day was as it was in 2013). Clearly in these cases either the estimate of the max is too low or the min is too high; however, providing the adjustment process is unbiased, these cases will be offset by cases where the max is too high/min is too low, and there is no overall bias.

    We’ve decided, though, that the internal inconsistency (which, as Tamino notes, affects only a tiny percentage of the data) looks strange to the uninitiated, so in the next version of the data set (later this year), in cases where the adjusted max < adjusted min, we'll set both the max and min equal to the mean of the two.
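    For concreteness, here's Trewin's example and the planned repair as a minimal Python sketch (the function name and structure are my own; the values are from his comment, and this is not BOM code):

    ```python
    def adjust_and_repair(raw_max, raw_min, max_adj=0.0, min_adj=0.0):
        """Apply independent max/min homogeneity adjustments, then the
        repair described above: if adjusted max < adjusted min, set both
        to their mean. A sketch, not BOM code."""
        adj_max = raw_max + max_adj
        adj_min = raw_min + min_adj
        if adj_max < adj_min:
            adj_max = adj_min = round((adj_max + adj_min) / 2, 2)
        return adj_max, adj_min

    # Trewin's example: raw max 14.8, raw min 14.6, minima adjusted up by
    # 0.4 gives max 14.8 < min 15.0; the repair sets both to 14.9.
    print(adjust_and_repair(14.8, 14.6, min_adj=0.4))   # (14.9, 14.9)
    ```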

    • When you do so, you’ll be accused of a coverup, of course. I’m sure you know that. Best of luck, those who understand science will be with you.

      • The whole brouhaha does nicely illustrate the blend of ignorance and ill-will that goes into a lot of denialist ‘product.’

      • arch stanton

        I’m sure Watts et al. will be all over it – coming and going. His legions will jump at the chance to inflate its insignificant significance by orders of magnitude.

    • Does that mean that the measurements are not subject to the errors already identified here? In particular, how were these temperatures measured in the past?

    • PennDragon

      “however, providing the adjustment process is unbiased, these cases will be offset by cases where the max is too high/min is too low, and there is no overall bias.”

      This is a big proviso and a ‘courageous’ assumption when you do not explain how this happens. To assume this error is random and not systematic and expect those of us trained in physics to accept that is not acceptable. The source of the error needs to be explained rather than using the now partly emotive term of ‘bias’ to avoid addressing the issue. Averaging the clearly wrong results just buries the problem and leaves any systematic error to continue to ‘bias’ results.

      [Response: You make it glaringly obvious that “trained in physics” does not include “knowledgeable about statistics.”

      We know that adjustments are necessary (e.g., station moves do not have zero effect), so failure to apply them is what guarantees systematic errors. And unless you have some supernatural knowledge (which you don’t), the best that can be done is to apply adjustments which are truly unbiased — a word with a very precise meaning in statistics, which is only emotive to the ignorant and/or those with an agenda.

      We wouldn’t be having this discussion if it weren’t for the ignorant agenda-driven who take the best that can be done and call it “useless” or even “fraud.”]

  18. But… Willis is my hero… Say it ain’t so, Willis. Say it ain’t so.

  19. Why wouldn’t both the maximum and minimum temps for any given day be set the next day? It seems more authentic to take the beginning of a day as midnight or midday; this would make the day’s high and low more likely to be contained in that day. And when it is not, this may be noteworthy data in and of itself.

    • Essentially, they do what you suggest–they ‘back-assign’ highs precisely to get the dates lined up correctly. Unless you want them to actually change the observation time to midnight? That would’ve been inconvenient, perhaps more error-prone, and involved paying observers better…

  20. Michael Sweet

    Blair Trewin is the scientist in charge of the ACORN data set. He has kindly posted here so that we will be properly informed. (I sent him an e-mail asking about this post.)

  21. Aaron Lewis

    Each reading is likely 2 digits with a decimal point, offering 3 chances for a recording error. Thus, there were on the order of a thousand errors in about 10 million chances for errors. That is excellent data quality in the context of hand-entered data. We owe Blair Trewin (and all of his team) a debt of thanks for running such a fine data collection program.

    The complex data definitions and site moves make this an even greater accomplishment.

    The data is “good enough” to tell us that they are getting some weird and wild weather. And, the effect is large enough that any fifth grader should know enough statistics to see that the effect is real.

    Willis has a point, just not the point he intended. Considering the 2010 Russian summer, the 2010 Indus floods, the SW US drought, loss of Arctic sea ice, and the increasingly warm weather in Australia (and Antarctica) over the last decades, Jan 2013 in Australia is what must be expected in a time of AGW. Right now we are having a little heat wave in the SW US, and have set some record max temperatures in the last few days. And, while we got the warmth, the folks on the other side of the Mississippi were setting some records for max precipitation. Extreme weather (of every kind) is not at all unusual in a time of AGW.

    We have to understand that the climate system has tipped, and what would have been a 6 SD event by past standards is now just a marker on our way to weirder and wilder weather. The weather in the new climate system cannot be imagined from the context of the old climate system.

  22. Tamino: Would you ever consider taking any temperature series you wish and passing it to a digital bandpass splitter/cascaded low pass filter circuit and plotting the whole output from each stage (as low and splitter)?

    This should allow the discovery of where in the available spectrum (record length limited of course) the RMS power is.

    I would suggest that you started with a cascaded running average filter bank with the starting pole (average span) at say 12 months and use the well known inter-stage multiplier of 1.3371 to cancel any ‘square wave’ digital sampling errors that otherwise occur.

    I have done it for various sources and would like your observations.
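    For readers unfamiliar with this kind of analysis, here's a minimal sketch of such a cascaded running-mean filter bank (my reading of the comment above; the 12-month starting span and 1.3371 inter-stage multiplier are the commenter's figures, and this is an illustration rather than a vetted method):

    ```python
    import numpy as np

    def running_mean(x, span):
        """Centered running mean, `span`-sample window, same length as x."""
        return np.convolve(x, np.ones(span) / span, mode="same")

    def filter_bank(x, start_span=12, n_stages=6, ratio=1.3371):
        """Cascade of low-pass stages; each stage's band output is whatever
        its running mean removed. A sketch of the commenter's proposal."""
        bands, prev, span = [], np.asarray(x, dtype=float), float(start_span)
        for _ in range(n_stages):
            w = max(3, int(round(span)) | 1)   # force an odd window length
            low = running_mean(prev, w)
            bands.append(prev - low)           # band-passed residual
            prev, span = low, span * ratio
        return prev, bands                     # final low-pass plus the bands

    # Example: 50 years of monthly white noise; report RMS power per band.
    x = np.random.default_rng(1).normal(size=600)
    low, bands = filter_bank(x)
    print([round(float(np.sqrt((b ** 2).mean())), 3) for b in bands])
    ```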