Open Thread

There seems to be some desire to discuss things not topical for other threads.


236 responses to “Open Thread”

  1. Seems Scafetta threw out another mathturbation paper:
    Gavin Schmidt already had a say:
    “More nonsense. Mis-application of spectral calculations, post-hoc justifications for ridiculous physical mechanisms, huge overstatements of their importance: JASTP takes another step towards astrology. – gavin”


  2. Igor Samoylenko

    Unusually for the true “sceptics” though, Scafetta offers a verifiable prediction in the abstract: “Moreover, the climate may stay approximately stable during the next decades because the 60-year cycle has entered in its cooling phase.”

    And then the pigs will fly…

  3. Great timing.

    A comment about Yamal sent me back to reading as much as I could. I figured Schweingruber was pretty basic. Briffa/Osborne assured me that is so. They used the 300+ Schweingruber series for years, and Mann et al. included the data in their series of paleo papers.

    McIntyre obviously knew this, so his claim of finding a few trees just lying around the net has the appeal of a free Bre-X helicopter ride.

    McIntyre used a few trees out of 300+ tree ring series. How is that not a cherry pick? McIntyre truncated a Yamal series removing data of which he did not approve. How is that not a cherry pick? McIntyre then tacked one series on another. Is he on record approving of such methods?

    Briffa, Osborne and Mann seem to have used all the data, had permission and published their results. Their behavior seems pretty ethical compared to the auditors.

    John McManus

    • What McIntyre did was worse than that. He got rid of the proxies he didn’t like and substituted ones he did. Briffa took him to task for this and showed that a more reasonable choice of proxies (basically all of them) produced a regional recon in line with what he had previously reported, rather than what McIntyre got. Briffa’s comments about that little escapade in one of the investigations (Muir Russell?) showed what he thought of McIntyre. Let’s just say that he doesn’t hold him in high regard.

      • Nobody who knows anything about tree ring based climate reconstruction holds McIntyre in any regard whatsoever, other than the mendacious and myopic jackass that he is.

    • Slightly amazed that no one has told you to stop embarrassing the folks here for a gross error

  4. Thanks, Tamino–

    Since it really fits here, I’ll re-post the link, with your indulgence:

    (A short story addressing climate change in fictional form–suitably post-Modern and all that. . .)

  5. Silly me. My comment got disappeared once so I was in too much of a hurry and forgot.

    Mann et al already used Schweingruber’s data correctly and the Yamal data correctly. McIntyre seems to advise changing the data to contrive an incorrect conclusion. Again; is this ethical?

    John McManus

  6. But Dr. Schmidt, Pielke Senior has endorsed Scafetta’s “paper”. It must be good, right? ;) I’m sure too that it will be another “Wow!” paper for Curry. Whatever happened to Salby’s supposedly earth-shattering paper?

    This commentator at RC nails it:
    “I’m wondering if Roger actually reads the stuff that he cites these days, or just sees stuff that “enlarges the debate” (i.e. differs from the mainstream, no matter how crazy) and instantly posts it to his blog. His fawning over a crappy blog post by Bob Tisdale is further evidence of his descent into irrelevancy”

    That is what appears to be happening on Pielke Senior’s blog, a blog that does not permit comments/questioning/dissenting views.

    • Seems Pielke Sr. is now using the Curry approach. Seems he doesn’t know that he’ll need a commenting system to get the cheerleaders dancing.

    • Philippe Chantreau

      I think Pielke Sr. is to be considered a casualty, just like Curry. He’s on record at SkS citing Watts as “committed to the highest level of scientific robustness.” That alone tells you how disconnected from reality he has become. Then he went on and on about how warming has slowed, and could never give a straight answer to questions about statistical significance and power. Not the stuff that any self-respecting scientist would ever consider saying.

  7. We have an economic “crisis” but a climate disaster.

    The tsunami of climate change will dwarf the ripples in the economy.

    The big fear in the economic crisis is unemployment.

    We stand a (possibly small) chance of averting the tsunami by cutting carbon emissions with a carbon tax. So let’s:


    Full employment and a small hope the tsunami will stop.

  8. W Scott Lincoln


    Saw an interesting post on the WeatherUnderground tropical blog that seems related to recent discussions here about “cycles” and “trends” and the like. A somewhat prominent poster at the site is trying to blanket-denounce any study that links increasing polar cyclone frequency/intensity to global climate change. The reasoning is that there is an “obvious cycle” to both the North Pacific Index and the Arctic Oscillation, and until we have several troughs/peaks of those cycles, we cannot call anything a trend. Several people have significant respect for this young scientist’s opinions on tropical weather and I fear that many will take the argument at face value. I am not good enough at statistics to give it the rebuttal it deserves, although I attempted to ask some vital questions he was leaving out. I would really be interested in your take:

    “Regarding the Alaska storm and its ties to global warming, there is a problem with citing studies that found increases in the frequency of strong extratropical cyclones in the north Pacific during the period from the mid-1950s to the early-mid 1990s. The problem is that during this period, the North Pacific Index (area-averaged MSLP from 30-65N, 160E-140W) was decreasing steadily during this time in a clear multi-decadal cycle, from a peak in the mid-1950s (high pressure) to a trough in the 1980s and 1990s (low pressure). This index has been coming back up the other way in recent years, consistent with the multi-decadal cycle that has gone through nearly two full periods since 1900 now. Going from positive to negative between the 1950s and 1990s naturally increases the frequency of lower pressure cyclones.
    In addition, the arctic oscillation (AO) was moving into a positive multidecadal phase between the 1950s and the 1990s, where it peaked. This also naturally increases the severity and frequency of strong extratropical cyclones in both the Pacific and the Atlantic. A small trend upward in frequency of strong storms was found for the Atlantic as well, which can be easily related to a strong positive surge in the NAO during its multi-decadal cycle between the 1950s and 1990s, and the positive NAO has been shown by some studies to increase the strength of North Atlantic extratropical cyclones. The point here is that you cannot take a point at the trough of a cycle, and a point at the peak, draw a line between them, and then call that slope a trend. You must measure a trend through at least one full cycle, if not several complete cycles to really call it a trend.” comment 185
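    The statistical objection in the quoted comment can be checked with a toy calculation. This is a sketch with invented numbers (a pure 60-year sine wave, no underlying trend at all), not an analysis of the actual indices:

```python
import numpy as np

# A pure "cycle" with zero underlying trend: fitting trough-to-peak
# yields a large slope, while fitting across full cycles yields
# almost none. Period and window choices are invented for illustration.
years = np.arange(0.0, 120.0)              # ~two full 60-year cycles
cycle = np.sin(2 * np.pi * years / 60.0)   # no trend at all

def fitted_slope(t, y):
    """Ordinary least-squares slope of y against t."""
    return np.polyfit(t, y, 1)[0]

half = (years >= 45) & (years <= 75)       # trough (yr 45) to peak (yr 75)
slope_half = fitted_slope(years[half], cycle[half])
slope_full = fitted_slope(years, cycle)

print(slope_half, slope_full)              # large vs. near-zero
```

    Fitting trough-to-peak gives a slope several times larger than fitting across full cycles, which is the quoted poster’s point; whether the real indices actually contain such a cycle is the separate question taken up below.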

  9. Michael Hauber

    If you are not good enough at statistics to give the argument the rebuttal it deserves, then how do you know the argument is wrong?

    I suspect that these indexes do not have what I would call a genuine multi-decade cycle to them, but the point still remains that there are some clear multi-decadal changes in these indexes that go first one way and then the other.

    As these indexes would appear to be reasonably linked with storm frequency in the far North Pacific, I think he has a good point that there is a ‘problem’ with citing a trend from 1950 to 2000 as evidence of a global warming impact.

    However that does not mean there is no global warming impact, and other lines of reasoning may be capable of drawing a link between global warming and intensified North Pacific Storms. If modelling predicts a particular trend, and the history matches that trend, then that is significant regardless of whether there is a multi-decadal cycle, or vaguely oscillatory variation.

    • W Scott Lincoln

      Because there are some here that understand these types of statistical issues BETTER than me. That was the point I was trying to get at. But even not knowing a concept as well, I can have a gut feeling that something is amiss in an argument. As a scientist it is important to know, however, when other people will better analyze and explain something.

      The point I was trying to make is that the discovery of “obvious cycles” is a dime a dozen. Everyone is finding them; they have all sorts of periods/amplitudes depending on the data set used. The hidden claim in that suggestion is that if there is somehow a cycle with a consistent period, it is caused by something natural, and if it is a natural cycle, it can’t be affected by global climate change. But these “cycles” generally don’t mean much unless there is a physical mechanism behind them.

      There may be a correlation found between some average pressure index in the Pacific and polar cyclones. There may be a correlation between the AO/NAO and those same cyclones. But does that mean that they are responsible for all changes? Does that mean that changes in those indices are natural? No.

  10. A fresh thread, a fresh start:

    It is fair to say that as each year has been above average the new average has increased.

    The use of Average Annual Anomaly to describe Global temperature gives no indication of this.

    Is this a result of using the wrong scale or does it suggest a more fundamental issue?

    In discussion of Anomaly, firstly I would like to recant my agreement that Anomaly is analogous to Displacement, as Displacement is the shortest distance between start and finish; this is analogous to Anomalies, except Displacement has no time factor while time is a factor in Anomalies.

    Anomalies describe the sum change of the Anomaly in question over a certain period of time, I don’t see how time cannot be said to be a factor in Anomalies, Anomalies has to have a defined time factor to be relevant, otherwise one could be talking about billions of years or seconds.

    Anomaly can be seen as the distance between an object and it’s previous average location over a certain period of time, while Anomaly is a instantaneous measurement, it’s frame of reference, the Baseline, has a time factor.

    Anomaly could thus be seen as a mirror image of velocity, the object is stationary and the World, the frame of reference moved relative to it over the time period and the average of this movement has been defined by the distance to the baseline.

    So if Anomaly could be seen as analogous to Velocity then Anomalies could be seen to be the rate of change of Velocity.

    • Criminogenic, Baseline !=average.

      • I wonder how the Baseline’s population of Average Temperatures is determined

        Arbitrarily, because it’s meaningless when used to calculate trends.

        As I mentioned far upstream, one could choose 0K as the baseline, and nothing would change.

      • “Let me undo the gag of my harsh mistress and say a Temperature anomaly is the difference between the average of average Global temperatures over a period known as the baseline and average Global temperature at a specific moment in time.”

        There, now, my pretty, that wasn’t so hard. . . was it?

      • Oops (to quote Rick Perry), that was misthreaded–should have gone here. In my defence, it’s a *long* freakin’ thread.

    • Adding to Ray’s comment, if one wants, one can choose 0K as their baseline … which obviously contradicts your claim that “while Anomaly is a instantaneous measurement, it’s frame of reference, the Baseline, has a time factor.”

      And it’s “its”, too …

    • Chris Ho-Stuart

      You say “Anomaly can be seen as the distance between an object and it’s previous average location over a certain period of time, while Anomaly is a instantaneous measurement, it’s frame of reference, the Baseline, has a time factor.”

      That is wrong. The baseline is fixed. Every anomaly in a time series is with reference to the same baseline, NOT with reference to some average changing over time.

      You can see this from the fact that converting a plot of anomalies into a plot of temperatures is simply a matter of relabeling the vertical axis.

      The anomaly is not in any sense at all analogous to velocities.

    • Any chance you could address (here) my reply in the other thread? I think it would help in understanding the obvious difference between your understanding and that of most of us.


      • Sorry, I wasn’t clear (indenting and all that). I was addressing crim.

      • True,

        “You seem to think that even a constant anomaly would indicate a steady increase.”

        To clarify I’m talking about the Average Annual Global Temperature over the period defined by the end of the baseline and ended by the most recent year increasing, not the Average Annual Global Temperature Anomaly increasing.

        I think a constant Anomaly results in a steady increase in the new Average Global Annual Temperature.

      • Chris Ho-Stuart

        It doesn’t. A constant anomaly means a constant temperature.

        Anomaly is not a velocity. It is not a rate of change. It is a temperature expressed as the difference from a fixed baseline value.

      • criminogenic,
        NO. The average is computed for the baseline period–and henceforth does not change–as such a constant anomaly would be a stationary temperature.

      • I think we’ve done about all we can to educate criminogenic on what is meant by a baseline. If he can’t understand that a number like (say) 0K is a constant then I don’t see much hope for learning to take place here.

        I almost have to admire his stubborn refusal to learn, though!

      • Please indulge me in another analogy.

        For 10 years my income was $100000 a year (I wish), thus my average annual income was $100000: the baseline period.

        In the following years I received $110000, $110000, $110000, $110000.

        My average yearly income over the period which is the baseline plus each successive year increases from $100000 to $100909 and so on, until the last year’s average annual income is $102857.

        Another sequence is the same baseline and then $104000, $103000, $102000, $101000, where the anomaly is decreasing but the annual average income over the entire period is increasing.

      • Criminogenic,

      • Crim,

        I was about to make a similar comment. Yes, for a constant positive anomaly, the average for the entire period (including the baseline) will continue to rise, but of course can only get closer and closer to the value of the anomaly and cannot rise above it. Taking my example from the other thread of a constant 0.5 anomaly, the average will rise year by year and converge on 0.5. This is just simple arithmetic, though, and has nothing to do with “velocity” or “acceleration”.
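        The convergence described here can be checked with a quick arithmetic sketch (a toy series, not any published data): a constant +0.5 anomaly over a fixed zero-mean baseline gives a flat temperature, but a cumulative (“trailing”) average that rises toward 0.5 without reaching it:

```python
# Ten baseline years with zero anomaly, then thirty years at a
# constant +0.5 anomaly (numbers invented for illustration).
baseline = [0.0] * 10
later = [0.5] * 30
series = baseline + later

# Cumulative ("trailing") average over baseline plus each later year.
trailing = [sum(series[:n]) / n for n in range(1, len(series) + 1)]

print(trailing[9])    # 0.0   at the end of the baseline
print(trailing[-1])   # 0.375 after 30 years, still short of 0.5
```

        The temperature itself never changes after the baseline; only the cumulative average creeps upward, which is exactly why it is the wrong quantity to read a trend from.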

      • Crim, you’re confusing moving average with anomaly. They’re not the same thing at all.

      • True,

        I agree with you.

        My velocity analogy is secondary in my mind to the issue of the increasing trailing annual Global average.

        The fact the trailing average is increasing is enough for the General Public to know; more in-depth descriptions end up being distractions, as seen in the crucifixion of Phil Jones.

        If Phil Jones had said the trailing average is increasing and nothing else, and referred Journalists to a paper in the manner of a Politician, then the meme of Global warming stopping, and the concept that it could stop without CO2 reduction, would have less credence.

      • Chris Ho-Stuart

        Frankly, I think if Phil Jones had spoken of “trailing averages”, then it would have been picked up and laughed at as unresponsive and misleading. Phil Jones is a better scientist than that.

      • I think that ‘trailing’ is superfluous in describing average annual Global temperature; of course it’s trailing, it’s not a projected Average.

        I don’t see how using Average is “unresponsive and misleading”; Average is the mechanism via which Global Annual Temperature is described and how Global warming is recognised to be occurring.

        The use of Anomaly can be shown to be unresponsive and misleading, as that has been its influence on the response to the extinction-level event.

        [Response: I think you’re extremely confused about the meaning and relevance of temperature anomaly. I suggest you take a step back, acknowledge that you need to comprehend the basic definition, and simply learn what the word means.]

      • Well, finally, losing patience and looking it up …

        “crim·i·no·gen·ic [krim-uh-nuh-jen-ik]
        producing or tending to produce crime or criminals: a criminogenic environment.”

      • I suggest you take a step back, acknowledge that you need to comprehend the basic definition, and simply learn what the word means.

        Ahh! You meant Crimo and “anomaly”, not the rest of us and “criminogenic”! …

        (just kidding, but my looking up “criminogenic” might provide a good example for criminogenic, who apparently can’t dig deep enough into the dictionary to uncover an “a” word …)

      • I acknowledge I need to have a clear understanding of what a Anomaly is, it is the deviation from the mean. Temperature anomaly is the deviation from the mean defined by a baseline that varies depending on which organisation is measuring the Anomaly.

        The Annual average is defined by the sum of Anomaly over a year divided by the number of Anomaly measured over the year added to the Baseline temperature.

      • I acknowledge I need to have a clear understanding of what a Anomaly is, it is the deviation from the mean.

        Sigh … no.

      • Anomaly doesn’t just apply to temperature, the basic Dictionary description of Anomaly is deviation from the mean, I interpret this to mean a sample identified by it’s difference from the population’s common characteristics.

        I would like to see somebody comprehensively destroy my interpretation of Anomaly because I think it negates a Anomaly from being measured outside the period of the Baseline, as anything outside the Baseline’s period is a different population.

        [Response: The word “anomaly” in the context of climate science does not match your interpretation based on the “dictionary description.” This is just one of thousands of examples of the use of specialized terminology within a particular branch of science. You can’t learn the meaning of scientific terminology by slavish adherence to a dictionary definition — rather, you learn the science and in the process you learn the terminology as defined by the researchers who select it.

        In order to learn the true meaning of the word in this context, you must first accept the fact that you’re wrong. Period.]

        I wonder how the Baseline’s population of Average Temperatures is determined; obviously they can’t be determined by Anomaly from another baseline, they have to be determined from a direct measurement of temperature, so why not just use this methodology to describe Global Temperature, in the tradition of KISS?

        Interestingly Wikipedia sees fit to have no Temperature Anomaly description, of course Wiki is no Britannica.

        The name Criminogenic was inspired by Murdoch getting off scot-free and the resultant Criminogenic environment feeding the London riots.

      • Criminogenic,
        Dude, wouldn’t it be simpler to just listen to what the experts say, especially when it comes to simple matters of definition on terms they use on a daily basis?

      • Anomaly doesn’t just apply to temperature, the basic Dictionary description of Anomaly is deviation from the mean

        I just browsed, and no, that definition isn’t there.

        You are very stubborn. And wrong, simply wrong.

      • First time it’s been suggested I have a lexiconic Dominatrix, certainly would make for some baroque command sessions.

        Let me undo the gag of my harsh mistress and say a Temperature anomaly is the difference between the average of average Global temperatures over a period known as the baseline and average Global temperature at a specific moment in time.

      • Chris Ho-Stuart


        Note especially that the baseline does not change. The anomaly is the difference from a fixed reference baseline; not from the recent or running average.

        Hence a constant anomaly means a constant temperature.

      • Chris Ho-Stuart

        Actually, I wrote too fast. Criminogenic, that’s almost right; but not quite.

        Anomalies are defined first of all for each location, and THEN averaged. So it’s not really the difference between global temperature and a baseline average global temperature.

        You FIRST get all the average local temperatures (averaged over the chosen baseline) at each weather station. Then you convert the local temperature time series at that station into a local anomaly, for that station. This gives you thousands of local anomaly time series all over the globe.

        THEN you get a global average anomaly of all the local anomalies. You never calculate an average global temperature in this process.

        However, the major issue leading you to think of an anomaly as a velocity is now corrected, since you are now recognizing that the baseline is a fixed period of time, not a recent period of time that varies along the time series.

        Sorry if the double answer just makes things more confusing.
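        The two-step procedure described here can be sketched in a few lines. Station names, trends and noise levels below are invented purely for illustration:

```python
import numpy as np

# Per-station anomalies against a FIXED 1951-1980 baseline, then a
# global average of those local anomalies. No absolute global
# temperature is computed anywhere in the process.
rng = np.random.default_rng(0)
years = np.arange(1950, 2011)
in_baseline = (years >= 1951) & (years <= 1980)   # fixed; never moves

stations = {   # very different absolute levels, similar trends
    "cold_site": -50.0 + 0.02 * (years - 1950) + rng.normal(0, 0.3, years.size),
    "warm_site": 35.0 + 0.02 * (years - 1950) + rng.normal(0, 0.3, years.size),
}

# Step 1: each station's anomaly is relative to ITS OWN baseline mean.
local_anoms = {k: v - v[in_baseline].mean() for k, v in stations.items()}

# Step 2: the global anomaly is the average of the local anomalies.
global_anom = np.mean(list(local_anoms.values()), axis=0)

# By construction the global anomaly averages to zero over the baseline.
print(global_anom[in_baseline].mean())
```

        Note that the -50 °C and +35 °C absolute levels drop out entirely at step 1, which is the point raised below about losing all sense of the original temperatures.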

      • Chris Ho-Stuart

        Well now, then I’m confused.

        Why does BEST mention 7.11 degrees C, as in;

        “Estimated 1950-1980 absolute temperature: 7.11 +/- 0.50″

        See also NCDC;

        Where “Global Mean Monthly Surface Temperature Estimates for the Base Period 1901 to 2000” are shown as absolute temperature monthly anomalies (e. g. the annual Land Surface Mean Temp. is 8.5 degrees C).

        I find it rather obvious that if you go about calculating all the anomalies first for each station (using any base period you choose or desire), then when calculating regional averages, you lose all sense of what those original temperatures were.

        So for example, if you had an Antarctica station (say with a mean of -50 degrees C), with a Saudi Arabia station (say with a mean of +35 degrees C), calculate their respective anomalies (daily, monthly, or annual) for your base period, subtract the calculated mean anomalies for your base period from the true temperatures, you end up with two trend lines, both with zero mean for the base period.

        So does that imply, that for the base period, the mean of those two anomaly time series, is the mean from those two stations (which would be zero, per the self definition of mean anomaly for the base period being equal to zero)? The obvious answer is a very explicit … NO!

        Now, anomaly time series are only good for trend-line estimates and/or deviations from the mean (which is always zero for the baseline, as self-defined). To determine the mean absolute anomaly (daily, monthly, or annual), you still need to carry, or do, the same weighting for the underlying absolute temperature time series from all stations (with the area weights being the same for both, of course).

        Without the mean value, we have absolutely no idea of what the base period mean temperature is in the first place (it could be -10 degrees C, or it could be 10 degrees C, or it could be 110 degrees C).

      • Chris,

        “Hence a constant anomaly means a constant temperature.”

        I have never disputed this, what I have been saying, which True agrees with, is that the ‘trailing’ Average increases over the period of constant Anomalies, curving concavely until it reaches the Anomalies line if the Anomalies do not increase. The current curve has roughly 50% left to go before it meets the Anomalies line thus it’s flattening would need another 15-20 years, by which time we would have good reason to start looking for more exotic causes of the Global temp. rise of 1970-2000.

        The reason I bring this point up is that if we used ‘trailing’ Average as the expression for Global Temperature a more truthful analysis of Climate is created.

        A year is far too short to describe Climate and it has no particular relevance, it is just an arbitrary discrimination of time. An average over the whole period of accurate recording is a far more useful and relevant reflection of Climate.

        Perhaps the most glaring impact of the problematic nature of Anomaly is the world wide meme that Global warming has stopped and that it can stop without CO2 reduction.

        If ‘trailing’ Average was the accepted descriptor for Global temperature, both these memes would not have been ‘the straw that broke the Camel’s back’ for Global CO2 reduction efforts. The simplicity of average makes it an effective tool for Billions of people to understand what is going on. But I think the horse has bolted now, so my whole point is superfluous.

        I have been debating Climate full time for about 6 years now. My position is that the horse has bolted and the only solution is a habitat in the Stratosphere built with lighter-than-air materials, à la The Diamond Age.

        As part of my response to this scenario I am currently conceptualising a nano-filter to remove atmospheric CO2 to enter into Branson’s 25 million dollar competition, if no winner is announced. Not that I think there is any point to removing CO2 as the deal is already done, as seen in the disappearance of the Arctic. This is a mechanism to foster Nano-tech to a level it can extrude structure.

      • Chris Ho-Stuart

        EFS_Junior, you can calculate a global mean temperature if you like; though one has to be careful to indicate what one means by that. It’s physically a bit more problematic than a global anomaly. Still, if you want such a thing, it can be estimated.

        The NCDC page to which you link explains in the series of questions why the anomalies are used and why they are to be preferred over temperatures.

        If you do want a global mean temperature for some reason, then all you need is an estimate of the global mean temperature over the baseline period. Then you can simply add global anomaly to the global mean temperature at the baseline.

        The table they provide gives you global mean temperatures estimated for the baseline period. Note that this is not a temperature time series. It’s what you use if, for some reason, you want to convert from global anomaly to global temperature.

        The global anomaly data is calculated as I described earlier, as an average of local anomalies, and you don’t use a global temperature anywhere in that process.

        criminogenic: you say:


        “Hence a constant anomaly means a constant temperature.”

        I have never disputed this, …

        Of course you have disputed it. Frequently and at length. You dispute it every time you describe an anomaly as a velocity. Or look at the other thread where you say:

        The positive slope of anomalys show that the rate of warming is accelerating, not that warming is occuring, warming is proven to be occuring by all Annual anomalys simply being positive for the last twenty years.

        The anomalys slope could be flat or negative and warming would still be occuring, as long as the anomalys are positive.

        The truth is that the SLOPE of anomalies is the velocity, and that the slope of temperature is the same as the slope of anomalies. When you understand anomalies, you’ll understand that they are not in any sense at all a velocity. Good luck.
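        The point that a constant offset cannot change a slope is easy to verify with toy numbers (the series below is invented; any baseline, even 0 K, i.e. -273.15 °C as a reference in Celsius, gives the same trend):

```python
import numpy as np

# Subtracting any constant baseline leaves the least-squares slope
# untouched: the trend of the anomalies IS the trend of the
# temperatures. The temperature series here is invented.
years = np.arange(2000, 2021)
temps = 14.0 + 0.02 * (years - 2000)     # toy series warming 0.02 deg/yr

baselines = (14.0, 0.0, -273.15)
slopes = [np.polyfit(years, temps - b, 1)[0] for b in baselines]

print(slopes)   # 0.02 for every choice of baseline
```

        The baseline shifts the whole series up or down the axis; it contributes nothing to the slope, which is why the choice of baseline is arbitrary for trend purposes.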

  11. Tamino:
    Now that CRUTEM3 is an obvious outlier, we ought to be asking why. BEST, NOAA, and the true land only average recently released by GISS all agree, whereas CRUTEM3 is way off.
    I’ve got my own simple temperature code, and using either the GHCN3 or the CRU station data I also get a result which falls within the BEST uncertainty range. It’s beginning to look like there is a serious problem with CRUTEM3. Have you tried anything similar?
    My next project is to start comparing gridded decadal averages, to see if the difference is geographically localised.

    • Sorry, I solved it. The reason the BEST comparison shows CRUTEM3 as an outlier is because CRUTEM3 is an average of the hemispheric land averages, and therefore not comparable to BEST. If you do a land area average of the CRUTEM3 cell data, then CRUTEM3 comes into agreement with BEST. I’ve let them know.

      [Response: Very interesting.]

      • Yes, that is interesting. I thought it was because the arctic was underrepresented in CRUTEM3, and the arctic is where a lot of the recent warming is happening.

      • so are you saying CRUTEM3 weights each hemisphere equally, while BEST weights by land area, therefore giving much more weight to the northern hemisphere?

      • There was a paper by Russ Vose and colleagues in 2005 that basically concluded most of the difference between the datasets arose because of the different ways they post-process to get a global average value. Paper here

      • Nei: Yes, that’s correct. The land-only indices produced by BEST and NOAA are not representative of global temperature and should not be taken as such, for precisely that reason. That’s probably why neither CRU nor GISS produced such a product until now – the main reason to do it now is for comparison with BEST.
        Peter: That’s very interesting, I’ll take a look.
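      As a generic illustration of why the two averaging schemes discussed in this sub-thread can differ (all the anomaly values and area ratios below are invented), consider the land area being split roughly 2:1 between the hemispheres:

```python
# Mean-of-hemispheres vs area-weighted mean, with made-up numbers.
nh_anom, sh_anom = 1.2, 0.6      # hypothetical land anomalies, deg C
nh_area, sh_area = 2.0, 1.0      # relative land areas (roughly 2:1)

mean_of_hemispheres = (nh_anom + sh_anom) / 2
area_weighted = (nh_anom * nh_area + sh_anom * sh_area) / (nh_area + sh_area)

print(mean_of_hemispheres, area_weighted)   # 0.9 vs 1.0
```

      Equal hemispheric weighting down-weights the northern hemisphere, where most land (and here, most warming) sits, so the two conventions disagree whenever the hemispheres trend differently.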

      • Nope, I was still wrong. The BEST guys are indeed using the hemispheric weighted CRU data, as they should.
        The differences between the BEST and the corrected CRU data are interesting, and lead to some artifacts in the 10-year moving average. Needs more work.

      • I can say from my own experience with a research project that CRU’s method really does tend to minimize the number of potential stations used, for a multitude of reasons. At a test site, using a Least Squares approach you can maintain 15 stations in the area since 1940, whereas with CRU the number is around 4. This is a very topographically diverse region, partially including the Canadian Arctic, so to me it seems clear CRU’s method (CAM) and their data sources (GHCN etc.) unintentionally minimize high-latitude station density.

  12. Please help me, I’m drawing a brain blank – what is the term for attempting to mislead by providing selective truths? (to include cherrypicking, lying by omission, slanting, saying “the Russian came in second and the American was third from last” to describe a 3-entrant race, etc)

    • I’m sure this can’t be it but I’ll say it anyway because it (still) amuses me: “Being economical with the truth”.

    • The term is “suppressio veri”. (Only telling the truth that’s convenient.) It’s usually paired with “suggestio falsi”. (Which is to falsely suggest that the evidence leads to a particular conclusion.) Both are instances where one can lie by making true statements.

      • Philippe Chantreau

        “instances where one can lie by making true statements”
        Isn’t the other name for that “politics”?

    • To ‘obfuscate’ means to darken the question, to stupify or bewilder the audience. To ‘dissemble’ means to disguise something’s true nature.

    • You might be thinking of something like disingenuous, duplicitous, deceitful, double-dealing, dishonest, dissembling, chicanery, knavish, two-facedness, … Take your pick.

      Personally, it looks to me as if she’s trying to practise the form that is Socratic irony, but for some reason she isn’t very good at it. As to why she isn’t (good at it), I couldn’t possibly comment (after Urquhart).

      • P. Lewis Not so happy with your list. ‘Disingenuous’, lacking frankness, is a good one, but the rest are too close to ‘liar’. ‘Chicanery’ would be excellent, but it has come to mean simply ‘cheating’.
        A few other candidates that came to mind over a cup of tea also suffered from changed definitions when checked in the dictionary. ‘Prevaricate’ is now more to do with avoiding an answer. ‘Equivocate’, to spread doubt to hide the truth, has more promise. ‘Sophistry’ or ‘casuistry’ – clever arguments to trick people. ‘Paralogy’ – flawed argument. However, they’re all rather obscure.
        ‘Artifice’, to trick with a deception, is also obscure, but it does lead to a better-known adjective that might just convey the required meaning to describe sceptical arguments – ‘artificial’.

      • Me, I’d call it bullsh*tting, after the essay by Harry Frankfurt. Frankfurt’s thesis was that BS is worse than lying because it is more difficult to drive a stake through its heart.

        It would fall under Pauli’s category of being so bad it’s not even wrong.

    • Why not invent a new word, e.g the verb could be “Moncktonize” or the noun could be “Singerism” or ” Curryism”?

    • Advertising. :-)

    • Horatio Algeranon

      what is the term for attempting to mislead by providing selective truths? “

      There may actually not be any such term in English.

      There are many words in the language related to “lying” and to “liars”, but “providing selective truths” is not technically “lying”.

      But there does not seem to be a single (non-compound) word for “truth-telling” (or “truth-teller”), so it would at least be consistent that there is also no single word for “selective truth-telling” or “half-truth-telling”.

      It’s interesting that there is no word for truth-teller but so many words for liar. Not sure whether that is a good thing or a bad thing.

    • An essay on the many ways we lie from early ’90s Utne Reader may have the word you are looking for:

  13. I think Mr. Beacon is on to something. I do think we will start to see emissions from western nations decline, not because of good government or mitigation programs, but simple economics. The EU is disassembling and will continue to do so until debt levels can be serviced, and that means high unemployment, reduction of programs and no money for investment. The grand socioeconomic experiment is kaput and the Brits were proven right. The US, after quantitative easing, will still face unemployment of over 10% in 2012 and is now dealing with a trillion-dollar hangover. Less money in the system for individuals to spend, less manufactured goods required, less energy required, less greenhouse gases released. And I’m optimistic.

  14. anna,

    It’s called “not telling the whole truth,” or “withholding crucial information,” and in some cases, can get you indicted for perjury.

  15. Attempting to mislead by providing selective truths is called propaganda.

  16. On cherry-picking and anecdotal information, there was an article in SciAm which proposed that humans are more prone to be impacted by stories with individuals in them over stories that just refer to overall statistics. I suspect there is some truth to this; remember the girl in the red coat from Schindler’s List?

    I was thinking this is one of the problems with people accepting the global warming story; it is all about the stats in aggregate.

    In an odd reversal, one of the commentators said exactly the opposite: that was the problem with the evidence for global warming, it was all anecdotal, and that is why he didn’t believe in it. (Which was oddly agreeing with the article while at the same time saying the opposite.) Not sure how to argue with someone when you are trying to show them a forest and they complain that all you are showing them is a bunch of trees. I don’t think you can, because it comes down to convincing them that they are the delusional one and not you.

    Maybe you just point out the logical fallacy to others and move on.

    • Pete Dunkelberg and I were discussing this on RC last week a bit; I think we ended up agreeing that both the generalized ‘big picture’ and specific examples were needed for the most effective and persuasive communication.

      Naturally, that still doesn’t convince everybody–especially the determined denier.

  17. Well, it’s been close to 3 and 1/2 months since the CRU released all of its “climategate” raw temperature data. Deniers had been screaming (and filing nuisance FOI demands) for that data over the previous couple of years. Now that they’ve had it in their hot little hands for several months, have they done anything with it? Just askin’….

    IIRC, the Muir Russel folks were able to generate results that verified the CRU temperature work in just a couple of days. So, what seems to be holding up the deniers?

    • Philippe Chantreau

      They know what they’re going to find and they don’t want to find it. Data are only useful to them when they can complain about it, not actually use it for analysis.

  18. I’ve cut-and-pasted a section of my own reply from the last thread below (in the discussion below, I will try to resolve what I see as an offset bias between the BEST and NCDC land-only temperature datasets);

    So, for example BEST states in Full_Database_Average_complete.txt that;

    “Estimated 1950-1980 absolute temperature: 7.11 +/- 0.50”

    Similar numbers exist for NCDC at;

    Now the baseline period for BEST is actually 30 years, 1950-1979 (my convention being that the time period is start/end year inclusive).

    The baseline period for NCDC is 1901-2000 (or 100-years), and the land only annual anomaly for this base period is 8.5 degrees C (I’m assuming that the temperatures listed are from GHCN-M version 3).

    Now when dealing with annual absolute mean temperatures applied to annual anomaly time series, it’s a simple matter of just adding these two numbers to their respective annual (or even monthly, although we lose the respective monthly anomaly curves in doing so) time series (in other words, the dataset just one step prior to subtracting out the anomaly base period values, that we normally see as just the anomaly time series).

    As I’ve already looked at the difference between the NCDC (version 3) and BEST monthly anomaly datasets (e. g. BEST – NCDC, land only for both, zero mean for both), and for their common time period (1880-01 to 2010-03), the difference in linear trend lines is only 0.0422 degrees C/century. Pretty much insignificant AFAIK.

    My problem?

    The difference in the average absolute temperatures for these two time series (7.11 degrees C for BEST vs 8.5 degrees C for NCDC) leads to a bias offset of 8.5 – 7.11 = 1.39 degrees C (1.40 using the unrounded values below; NCDC – BEST, and note this remains essentially the same for the maximum common time period of 1880-01 to 2010-03). Note that no claim is being made as to one or the other dataset being biased high or low; just that there is a bias offset between these two time series based on available published data.

    I am also not exactly sure if the current BEST analysis uses the same network of temperature stations as NCDC (or subset thereof, in the current draft implementation).

    Anyways, the conundrum I’m currently having is: what would be the most likely explanation for this ~1.4 degree offset bias in absolute temperatures between the two time series?

    My initial conjecture would be that the NCDC land and ocean masks, or gridded temperatures, when segregated into land and sea areas, contain some land grid cells which include some fraction of ocean temperatures (the NCDC SST ocean mean is 16.1 degrees C, while the combined mean (global) is 13.9 degrees C). The converse would then also be true for the ocean temperatures (some land areas are included in the ocean only estimate).

    I’ve used the following temperatures (C) for NCDC (Land, Ocean, Global); 8.568, 16.059, and 13.867 (vs the tabulated values of 8.5, 16.1, and 13.9).
    I’ve used the following temperatures (C) for BEST (Land, Ocean, Global); 7.168, 16.638, and 13.867 (vs the tabulated value of 7.11, middle value calculated as weighted average, global mean same as NCDC).
    Land fraction of 29.3% and ocean fraction of 70.7% (sum = 100%).

    Now, assuming that the average land/ocean cell is 50% land and 50% ocean (with a mean temperature of (7.168 + 16.638)/2 = 11.903), for BEST to agree with NCDC (i. e. to remove the bias offset between NCDC and BEST), I calculated that 17.3% (split evenly between land and ocean, or 8.65% for each) of the total global area of the NCDC (5 by 5 degree) grid cells would be composed of land/ocean cells.

    This results in 7.168 * (29.3% – 8.65%) + 13.867 * (70.7% -8.65%) + 11.903 * 17.3% = 13.867 degrees C (using %’s as fractions or dividing by 100%), or the global average that I’ve assumed would be the same if the bias offset was removed to make BEST agree with NCDC global mean temperature.

    So the next question, I guess, would be: Is the NCDC grid composed of approximately 17% combined land/ocean grid cells? At first glance anyways, this number does not seem to be too unreasonable. And yes, I know that a 5 by 5 grid is not equal areas (another bias to consider I suppose).

  19. Oops.

    The 2nd to last paragraph in my previous post should read;

    “This results in 7.168 * (29.3% – 8.65%) + 16.638 * (70.7% -8.65%) + 11.903 * 17.3% = 13.867 degrees C (using %’s as fractions or dividing by 100%), or the global average that I’ve assumed would be the same if the bias offset was removed to make BEST agree with NCDC global mean temperature.”

    Or in English: NCDC land contribution + NCDC ocean contribution + NCDC land/ocean contribution = NCDC global mean temperature (using the respective BEST temperature values to arrive at the same NCDC global mean temperature of 13.867 degrees C).
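    As a sanity check, the corrected arithmetic can be run directly. All the numbers below come from the two comments above; this is just arithmetic on the commenter's figures, not independent data.

```python
# Numbers taken from the comment thread (BEST-style land/ocean means, deg C)
land_T, ocean_T = 7.168, 16.638
mixed_T = (land_T + ocean_T) / 2          # assumed 50/50 land/ocean cell: 11.903
land_frac, ocean_frac = 0.293, 0.707      # global land/ocean area fractions
mixed_frac = 0.173                        # conjectured fraction of mixed cells

# Corrected version of the global-mean sum from the erratum above
global_T = (land_T * (land_frac - mixed_frac / 2)
            + ocean_T * (ocean_frac - mixed_frac / 2)
            + mixed_T * mixed_frac)

print(round(global_T, 3))                 # ~13.863, vs the 13.867 cited
```

    One caveat worth noting: because the mixed-cell temperature is defined as exactly the average of the land and ocean means, the mixed fraction cancels out of this sum algebraically, so the global mean comes out at ~13.863 for any choice of mixed fraction. The global-mean constraint alone therefore cannot pin down the 17.3% figure; some additional constraint (e.g. on the land mean itself) would be needed.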

  20. Can somebody here recommend a good book for the introduction into statistics?

    I can follow the main argumentation here very well, but lack the background

    • Statistical Inference by Joliffe and Garthwaite?

      Probability-the Logic of Science by Ed Jaynes?

      Online, you can find brief intros to many subjects at Mathworld or even Wikipedia–and some of them ain’t bad. I also like the NIST Engineering Statistics Handbook:

    • Worth taking a look at Bayesian Data Analysis, by Andrew Gelman.

    • Jan,

      If you want a good practical book on statistics that cover the essential topics try the Schaum’s outline for Statistics by Spiegel and Stephens. It has many solved examples and is designed for a quick pick up of the material. It has a small amount of time series analysis. It is pretty cheap too.

      If you want a good intro to applied time series analysis, try this link to the University of Arizona class lecture notes. They hit the major topics very well, and the course uses a climate dataset based on tree ring studies throughout. It is very well done.

      The first pdf is on matlab for the course. The pdf’s 2 -13 will cover the times series topics.

  21. Jan,

    You might try Noise by one Grant Foster.

    [Response: That’s not really a statistics book, it’s about the misuse of statistics by those who deny the reality of global warming.]

    Unfortunately, the books devoted to the statistics of climate are pretty specialised. I have not bought one myself yet – books like

    But maybe Tamino might recommend one. This is the best site I know on the topic, so a search through old posts might be rewarding.

    • I would suggest that for most people, books that help one recognize bad statistics, like “How to Lie with Statistics” or “Noise…”, are far more valuable than books that explain how to do statistics well. Relatively few people will ever actually do much of the deeper statistical analysis … but everyone needs to protect their brains from junkstat.

      • junkstat. First time I’ve seen that. Very nice. :)

      • Horatio Algeranon

        There’s a great book called “Chance, Luck and Statistics” (by mathematician Horace Levinson) that was written specifically for the non-mathematician.

        Talks about basic probability (particularly as it relates to games of chance like poker, roulette, lotteries, craps, bridge) as well as basic statistics, including a whole chapter on “fallacies in statistics”.

        It’s an old book (originally published in 1939 and updated in 1950!), but really quite good for someone who has little or no previous exposure to probability and statistics — and there’s actually a lot in the book for those who have, as well (Levinson even corrects some mistakes in Hoyle’s poker tables so if you’re into gambling you might want to take note!)

        It’s available on Amazon (Cheap, too @$12)

        Buuut, if you’re looking for something on least squares, autocorrelation and the other stuff Tamino regularly talks about, the book doesn’t have any of that (though there is a very interesting section that attempts to predict future “foot race” records!)

  22. Tamino,

    I know it is an endless and arduous task, but this myth being propagated by Tisdale needs refuting. Pielke Snr. thinks it is good enough to publish in a peer-reviewed journal, although he does not say which one, E&E maybe? ;) Tisdale concludes, “The process of the El Niño-Southern Oscillation was responsible for most of the rise in global sea surface temperature anomalies over the past thirty years.”

    • Why can’t anyone (=Peilke, *grin*) write Nielsen-Gammon’s last name correctly?

      • Never mind the spelling, whatever happened to their grammar? “Bob Tisdale’s Response To The E-Mail Interaction Between John Neilsen-Gammon And I links to “Roger Pielke Sr., has published at his blog a series of emails between he and John Nielsen-Gammon…” Hurts.

    • Lord he was born a ramblin’ man …

      Playing with his … err, many ocean indices.

      After dozens of graphs, scaling things with inconsistent units, offset over here, phase shift over there, rotating things in between, correlation coefficients sometimes given but most times not, confidence intervals rarely given, shifting things left and right, up and down, and then back through it all again, ad infinitum, ad nauseam.

      Damn that correlation does not equal causation thing anyways, he’ll just drop all those correlation coefficients and confidence intervals, don’t need them anyways.

      And all the while not a single equation, like continuity, or momentum, or energy.

      So after all the handwaving is over and done with, the oceans all moved their heat around in some sort of astrological dance, and dang nab it, they got hotter, all by themselves.

      But in the end, most of us know a magic trick, when we see one.

      It’s like looking through a kaleidoscope, oooohing and aaaawing, all the while.

      Does a magic trick have any predictive value, when it’s always shape shifting to constantly hit its moving target?

      • Horatio Algeranon

        Second verse, same as the first, a little bit louder and a little bit worse:

        “Lord, he was born a dissemblin’ man”
        — by Horatio Algeranon
        (with a little help from the Allman Brothers and Dickey Betts)

        Lord, he was born a dissemblin’ man
        Trying to make a flat-line and doing the best he can
        When it’s time for deceivin’, I hope you’ll understand
        That he was born a dissemblin’ man

        His father was a Viscount down in Brenchley
        He wound up on the right end of a sum
        An’ he was born in the back seat of a Mercedes Benz
        Rolling down highway A-21

        Lord, he was born a dissemblin’ man
        Trying to make a flat-line and doing the best he can
        When it’s time for deceivin’, I hope you’ll understand
        That he was born a dissemblin’ man

        He’s on his way to Georgia Tech this morning
        Leaving out of Kent in the Ol’ King-dom
        Always having a good time down on the Mann oh, Lord
        Them Georgia women think the world of him.

    • MapleLeaf –
      Thanks for the link. Without it, I would not have seen Bob Tisdale’s post on WUWT and would not have scrolled down through the comments to learn the following:
      “There is no evidence that this summer in Texas was any “hotter” than other hot summers of the last century. Rather, the average temperatures were higher simply because the heatwave lasted longer.”

      • Isn’t that special.

      • Oh, ouch. That’s not just weapons-grade stupid, that’s neutron-star stupid.

      • If I weren’t laughing so hard, I’d cry. This happens to me so often when I read WUWT and Curry comment streams that I’m close to declaring Dionysus as the official God of Climate.

        Someone should collect these little balls of shit and produce a monthly climate version of “56 BC and all that.”

  23. MapleLeaf, hasn’t Tamino already put paid to that BS? When one accounts for ENSO, volcanism, solar fluctuations, etc., you get a remarkably consistent upward trend.
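    The kind of adjustment Ray is alluding to can be sketched as an ordinary multiple regression: temperature against a linear trend plus ENSO, volcanic and solar factors. Everything below is synthetic stand-in data (not the real MEI/aerosol/TSI series), so it is only a sketch of the method, not a reproduction of any published analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 360                              # 30 years of monthly data
t = np.arange(n) / 12.0              # time in years

# Synthetic stand-ins for the exogenous factors
enso = rng.standard_normal(n)                 # ENSO-index stand-in
aod = np.abs(rng.standard_normal(n))          # volcanic aerosol stand-in
tsi = np.sin(2 * np.pi * t / 11.0)            # ~11-yr solar-cycle stand-in

# Synthetic "temperature": a 0.017 C/yr trend plus responses plus noise
temp = (0.017 * t + 0.08 * enso - 0.05 * aod + 0.03 * tsi
        + 0.10 * rng.standard_normal(n))

# Multiple regression: intercept, linear trend, and the three factors
X = np.column_stack([np.ones(n), t, enso, aod, tsi])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

# The "adjusted" series has the exogenous terms subtracted out; what
# remains is noise around a consistent upward trend
adjusted = temp - X[:, 2:] @ coef[2:]
print("estimated trend: %.3f C/yr" % coef[1])
```

    With the known factors regressed out, the recovered trend sits close to the value built into the synthetic data, which is the point of the exercise: the "consistent upward trend" is what is left once ENSO, volcanism and solar wiggles are accounted for.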

  24. Climate is concerned with averages, but a drought followed by a flood gives a nice average. That is not the whole story; timing can be so critical. The West Australian cereal crops have just been adversely affected by good soaking rain that was only a little late.

    There is evidence of the mistiming between flowering and insect hatching.

    Unfortunately I have no idea how to approach the statistics, but it does make the future look that little bit worse.

    [Response: Climate concerns the mean and variation of the weather — so it’s also about the extremes — how often they occur, how extreme they are, and how they tend (on average!) to show patterns over time.]

    • Unfortunately I have no idea how to approach the statistics…

      Perhaps one way would be to derive various indices of climatic impact and indices of phenological impact, where the index for each considered parameter is computed such that its value lies between 0 and 1, somewhat after the fashion of a probability. Values approaching 0 might indicate extreme impact, whilst values close to 1 might indicate little impact.

      If one could establish some consistency/comparability of represented value between different parameters they could perhaps even be multiplied to give a product, with appropriate degrees of freedom, that represents a composite index.

      Good luck playing with the notion, though… ;-)

  25. The temperature concern is well documented, e.g.
    “There are several reasons for potential yield reductions in irrigated corn due to extreme heat, and all of them are somewhat related…. Extreme heat during early grain fill. Research at Iowa State University has shown that extreme heat during grain fill reduces the rate and duration of grain fill, even if moisture is not a limiting factor …. The ideal temperature for grain fill in corn is about 72° F.

    Field studies and observations indicate a yield loss can occur at higher temperatures. At temperatures from about 80° to 95° F, the rate of grain fill starts to slow down, even if roots are kept at normal soil temperatures from the shading of the canopy. At temperatures of about 105-110° F, the rate of grain fill and duration of grain fill are both reduced, resulting in yield losses….”

  26. and
    Evidence from US Corn and Soybean Yields
    “… Extreme heat is the single best predictor of corn and soybean yields in the United States. While average yields have risen continuously since World War II, we find no evidence that relative tolerance to extreme heat has improved between 1950 and 2005. Climate change forecasts project a sharp increase in extreme heat by the end of the century, with the potential to significantly reduce yields under current technologies….”

  27. Tamino
    Do you have any inside info or know someone who would regarding the following from Pielke Jr’s site?

    At the BBC Richard Black says that he has a copy of the forthcoming IPCC extremes report and shares some of what it says prior to being considered by governments this week:

    For almost a week, government delegates will pore over the summary of the IPCC’s latest report on extreme weather, with the lead scientific authors there as well. They’re scheduled to emerge on Friday with an agreed document.

    The draft, which has found its way into my possession, contains a lot more unknowns than knowns.
    He describes a report that is much more consistent with the scientific literature than past reports (emphasis added):

    When you get down to specifics, the academic consensus is far less certain.

    There is “low confidence” that tropical cyclones have become more frequent, “limited-to-medium evidence available” to assess whether climatic factors have changed the frequency of floods, and “low confidence” on a global scale even on whether the frequency has risen or fallen.

    In terms of attribution of trends to rising greenhouse gas concentrations, the uncertainties continue.

    While it is “likely” that anthropogenic influences are behind the changes in cold days and warm days, there is only “medium confidence” that they are behind changes in extreme rainfall events, and “low confidence” in attributing any changes in tropical cyclone activity to greenhouse gas emissions or anything else humanity has done.

    (These terms have specific meanings in IPCC-speak, with “very likely” meaning 90-100% and “likely” 66-100%, for example.)

    And for the future, the draft gives even less succour to those seeking here a new mandate for urgent action on greenhouse gas emissions, declaring: “Uncertainty in the sign of projected changes in climate extremes over the coming two to three decades is relatively large because climate change signals are expected to be relatively small compared to natural climate variability”.

    It’s also explicit in laying out that the rise in impacts we’ve seen from extreme weather events cannot be laid at the door of greenhouse gas emissions: “Increasing exposure of people and economic assets is the major cause of the long-term changes in economic disaster losses (high confidence).

    “Long-term trends in normalized economic disaster losses cannot be reliably attributed to natural or anthropogenic climate change.”

  28. B Buckner, always, always, always go to source


  29. Richard Black “It’s also possible to argue that extreme weather isn’t really the issue for the small island developing states,”

    November 9, 2011
    The tiny island nation of Tuvalu has been crippled by drought and may be just the first island nation to run dry.

    Well anything is possible.

  30. Jan – It’s good of Ray to advertise Garthwaite, Jolliffe & Jones, Statistical Inference (2nd edition, 2002), but it’s not an introductory text. There are many of the latter, and you probably want to find one slanted towards your discipline. After you’ve mastered the basics, by all means go to GJJ.

  31. Ian, While I agree that your text is not elementary, I would not underestimate the importance of its clarity.

    The problem I have in answering Jan is that I don’t really know of a good clear text that is readable and takes an applied approach that would help Jan follow the arguments here. Do you?

    The clearest probability text I know of is Jaynes, but I’m not sure it would be suitable as an introductory text. What do you think?

    • Gavin's Pussycat

      Jaynes is work, but worth it. Introductory? Hmmm.

      [Response: Jaynes is a great book, well deserving the praise it gets. But it is *not* introductory to statistics. I strongly recommend a background in basic stats before digesting Jaynes.]

  32. I think that Judith’s and Pielke Snr’s brains are collectively failing. Pielke Snr. has another post up in which he laments that working on decadal model predictions is akin to pouring money down the toilet. He goes so far as to say that he agrees with Judith Curry (a very brave thing to do nowadays for anyone who cares about their reputation),

    “I endorse Judy’s recommendation that
    What if we had devoted all of those resources to making better probabilistic predictions on timescales of 2 weeks to 3-4 months?….Anticipating extreme weather events by a week or two, or even a few days,  could make an enormous difference in the developing world”

    Now dial the clock back to February of 2011, when Judith posted on a paper co-authored by her partner, in which they conclude:

    “The European Centre for Medium Range Weather Forecasts (ECMWF) 15-day Ensemble Prediction System (EPS) is used to assess whether the rainfall over the flood affected region was predictable. A multi-year analysis shows that Pakistan rainfall is highly predictable out to 6-8 days including rainfall in the summer of 2010”

    Now the problem is not the model guidance or not investing enough money in short, medium range or seasonal forecasts, but instead disseminating that information, a different problem altogether. I think we all agree that that problem does need to be addressed.

    Curry and Pielke also need to consider the steady improvement in medium-range forecasts: look at the huge improvement in forecast skill at day 10 over the last decade alone, visible in the figure on page 7 of ECMWF’s 2010 report.

    Pielke and Curry both seem to have forgotten that the ECMWF has just released a new seasonal and monthly forecast system (some of ECMWF’s seasonal products are freely available on the web, as are those generated by the UK MetOffice).

    Are Curry and Pielke so confident in their beliefs that they think people are not going to check up on their assertions, especially after making so many misleading assertions in the past? Well, add another fail to the list.

    I have not even yet checked Curry’s claim that “IPCC saps 50-70% of the total manpower time and resources of the modeling center”. I think it unwise for Pielke to believe that and then apply it to all modeling centers.

    They can also save their compassion trolling about people at risk of extreme weather in developing countries for someone else.

    • Pete Dunkelberg

      “Are Curry and Pielke so confident in their beliefs that they think people are not going to check up on their assertions,….”

      Why would they expect that? They don’t do it themselves. ;)

    • regarding: “I endorse Judy’s recommendation that
      What if we had devoted all of those resources to making better probabilistic predictions on timescales of 2 weeks to 3-4 months?….Anticipating extreme weather events by a week or two, or even a few days, could make an enormous difference in the developing world”

      Of course she think money should be devoted to this work… not for the developing world, however, but for CFAN

      • havinasnus,
        Part of the problem with Judy’s suggestion is that it is a red herring. The same infrastructure needed to understand climate would improve long-term forecasting of weather. In fact, long-term weather forecasting is a much more challenging problem, both in terms of the data needed and the computational demands. I’m not even sure whether Judy is being disingenuous or if she really doesn’t understand this.

  33. Larsen & Farber is a good introductory text for stats.

  34. MelaTonin,
    Larsen and Farber looks good (based on examination of the table of contents), but I’m not sure how much it would help Jan follow the analyses here. That is my problem: I don’t know of an introductory text that covers time series, autocorrelation, modeling, etc.

  35. I put together a multiple regression primer for myself once… Maybe I should post it to my web site. I’ll have to look at it again.

  36. @Ray Ladbury

    Try “Introduction to Econometrics” by James Stock and Mark Watson.
    The entire book is great. Chapters 1–4 cover basic stats, Chapters 5–13 detail all the main regression models (OLS, TSLS, panel effects, IV, binary dependent variables, etc.) and the last few chapters deal with time series data, from basic autoregression models all the way to VAR. For anyone following the unit root discussion, it gives a pretty good explanation of that as well (and of why the presence of a unit root means that OLS and AR models can’t be used, due to an assumption of the models not being true).
    I’m just surprised that econometrics is so late to the Climate Change party. Has some very useful tools for analysing time series data.
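    For readers curious what the unit-root issue looks like in practice, here is a minimal numpy illustration (my own sketch, not from Stock & Watson): the OLS lag-one coefficient of a random walk sits essentially at 1, while a stationary AR(1) series recovers a coefficient well below 1. In real work one would use a proper Dickey-Fuller-type test rather than eyeballing the estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
eps1 = rng.standard_normal(n)
eps2 = rng.standard_normal(n)

walk = np.cumsum(eps1)          # random walk: true lag-1 coefficient is 1 (unit root)

ar = np.zeros(n)                # stationary AR(1) with true coefficient 0.5
for i in range(1, n):
    ar[i] = 0.5 * ar[i - 1] + eps2[i]

def phi_hat(x):
    """OLS estimate of phi in x[t] = phi * x[t-1] + noise."""
    return float(x[:-1] @ x[1:]) / float(x[:-1] @ x[:-1])

print("random walk phi-hat:", round(phi_hat(walk), 3))   # very close to 1
print("AR(1) phi-hat:      ", round(phi_hat(ar), 3))     # close to 0.5
```

    The practical upshot, as the book explains: when phi is at (or near) 1, the usual OLS/AR distribution theory breaks down, which is why trend fits to unit-root series can be badly misleading.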

  37. Jan,
    also as you might expect, the ‘… For Dummies’ series has a statistics title which many libraries would have (suggestion only, not a recommendation).

  38. Slight change of subject, I noticed a recent post in my “pet blog” at Forbes brought up “fat tails”. I recently read Tamino’s old post about this and not being very smart, don’t understand too well, but my impression was that any feedback calculation leads to these fat tails.

    The post at Forbes was about a “soon to be published” article which, using (I think) some paleo data, eliminated the fat tails (in a similar way to a 2009 article by Annan and Hargreaves?). Can someone fill me in on the merit/method of this kind of analysis?
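    The "any feedback calculation leads to fat tails" impression is essentially the Roe & Baker (2007) argument, and it is easy to illustrate: if sensitivity is S0/(1 - f) and the feedback factor f has symmetric (Gaussian) uncertainty, the distribution of S comes out strongly right-skewed. The numbers below are illustrative only, not fitted to anything.

```python
import numpy as np

rng = np.random.default_rng(1)
S0 = 1.2                                   # illustrative no-feedback sensitivity (C)
f = rng.normal(0.65, 0.13, size=200_000)   # symmetric uncertainty in the feedback
f = f[f < 0.95]                            # keep away from the singularity at f = 1

S = S0 / (1.0 - f)                         # sensitivity amplified by feedback

med = np.median(S)
lower = med - np.percentile(S, 5)          # length of the lower tail
upper = np.percentile(S, 95) - med         # length of the upper tail
print("lower tail: %.2f C, upper tail: %.2f C" % (lower, upper))
```

    The upper tail comes out several times longer than the lower one even though the input uncertainty is symmetric. Roughly speaking, analyses like Annan and Hargreaves (2009) trim that tail by bringing in additional independent constraints (e.g. paleo data) that put extra observational weight against large values of f.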

  39. Jan, Ray
    Sorry I don’t have any good suggestions. You probably need different texts for introductory probability, statistics, modelling, time series, multivariate. Several of the books on my shelves that I taught from before retirement are out of print. Just one recent suggestion: Cryer & Chan (2009) is not bad for time series.

    • Thanks Prof. Jolliffe! I’ve ordered that.

      [It’s refreshing to read a blog where serious experts pop in, instead of legions of D-K afflictees.]

  40. OK, so I wandered on over to check out the latest at WUWT (so that the rest of you out there don’t have to).

    Fred Singer just put in an appearance, and he’s *still* pushing the same outrageous lies that he was pushing a decade ago. Here’s an egregious example (linky

    But the main reason I have remained a skeptic is that the atmosphere, unlike the land surface, has shown no warming during the crucial period (1978-1997), either over land or over ocean, according to satellites and independent data from weather balloons. And did you know that climate models run on high-speed computers all insist that the atmosphere must warm faster than the surface — and so does atmospheric theory?

    In spite of the fact that the satellite/weather-balloon results showing no warming were the result of well-documented computational/processing errors (like the satellite-drift sign error), and that those results actually show warming when the errors are corrected, Singer is *still* pushing this lie. And the WUWT-droids are still lapping it up. Unbelievable!

    (For those of you who are thinking of checking out WUWT for yourselves, take care to secure your hot beverages first. Nasal-passage scalds can be quite painful.)

    • Gavin's Pussycat

      > (so that the rest of you out there don’t have to)

      Very much appreciated g2 (you don’t mind me calling you g2, do you?)

  41. NASA GISS make available the data on forcings used in the GISS global climate models, here:

    This shows how the net forcing has changed since 1880 – it’s the forcing relative to the 1880 value. However, whether the world is warming or cooling at any particular time depends upon the radiative balance at the time, not the forcing at that time compared to 1880, so I was wondering if it’s possible to see a graph of the radiative balance (or radiative IMbalance) over the same period. Presumably it ought to be steadily positive from about 1910 to 1940 (because the world was warming then), slightly negative from 1940 to 1970 (because the world was cooling slightly then), and positive and rising since then (because we have seen an accelerating warming trend).

    Does anyone produce such a graph, and if so, how well does it correlate with the temperature trends over this period? I would have a go myself, but I’m not sure I would know how to figure out what the radiative imbalance ought to be at any particular time.
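    For anyone who wants to tinker, here's a minimal sketch of the sort of calculation involved, assuming the simple linear energy-balance relation N(t) = F(t) − λ·ΔT(t). The forcing and temperature numbers below are made up for illustration (not the actual GISS data), and λ ≈ 1.25 W/m²/K is an assumed feedback parameter corresponding to roughly 3 K per CO2 doubling (3.7/3).

```python
# Sketch: radiative imbalance N(t) = F(t) - lambda * dT(t).
# ASSUMPTIONS: a simple linear energy-balance relation, a feedback
# parameter of 1.25 W/m^2/K (~3 K per doubling), and made-up
# illustrative forcing/temperature values -- NOT the actual GISS series.

LAMBDA = 1.25  # climate feedback parameter, W/m^2 per K (assumed)

def radiative_imbalance(forcing, temp_anomaly, lam=LAMBDA):
    """Return N(t) = F(t) - lam * dT(t) for paired series (W/m^2)."""
    return [f - lam * dt for f, dt in zip(forcing, temp_anomaly)]

# Illustrative values relative to 1880 (forcing in W/m^2, anomaly in K),
# at, say, 1880, 1910, 1940, 1970, 2000:
forcing = [0.0, 0.3, 0.5, 1.1, 1.8]
temp = [0.0, 0.1, 0.3, 0.3, 0.7]

imbalance = radiative_imbalance(forcing, temp)
# Positive N means the planet is gaining heat at that time.
```

    The real exercise would substitute the GISS forcing series and an observed temperature record, and the value of λ is itself uncertain, so this is only the shape of the calculation, not an answer.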


  42. Two more tidbits of information on the 2009 cyber-attack on CRU — if anyone’s still interested in this stuff:

    — frank

  43. Stefan Rahmstorf recently wrote a piece for ABC ENVIRONMENT to make us (non-science people) aware of the alarming fall in the level of sea ice in the Arctic this year, comparable to the low levels in 2007. Cohenite has posted this in response:

    “Arctic sea ice was much less in the immediate geologic past:
    The Arctic was warming quicker in the 1930’s:
    The Arctic temperature has been falling since 2005 and according to all the Arctic ice measurements Arctic ice has been well above 2007 levels for most of 2011 and according to NORSEX is still above 2007 levels in both area and extent:
    Rahmstorf has form in being hysterical about the AGW hysteria; perhaps one of his many measured critics can be given a right of reply. I would suggest Dr David Stockwell”.

    I don’t have the ability to analyse whether what he’s saying has any merit, which I doubt, considering the bad record of deniers in getting anything right. Would anyone care to give a rebuttal to that response?

    • Give him points for less-dodgy sources than usual, but none in interpreting them.

      The first simply doesn’t support the conclusion he draws from it, since the study area was limited to the Western Chukchi, a small fraction of the entire Arctic Ocean. To be sure, other studies suggest that there have been times where total AO ice extent may have been lower than at present. But so what? Is there a sensible conclusion to draw from the possibility that what’s happening now isn’t without precedent? After all, we don’t know what the outcomes were of previous low ‘excursions,’ nor the time scales involved.

      The second is a big ‘so what.’ Again, what conclusion can we draw from the comparison of warming rates between the two ‘warming periods’ in recent Arctic history? We know that the warming in the early 20th century did not continue uninterrupted, but if the idea is that the warming rate predicts the future trend somehow, no support for such a concept is given.

      The third is a big old cherry-pick; other measures of extent and area show 2011 tracking 2007 pretty darn closely (see for example Cryosphere Today graphs.) And in any case, it’s a big change of subject: the decline in ice *volume* is quite amazing (and troubling.) And that’s what Dr. Rahmstorf was discussing. If you can’t rebut. . .

      For reference, a very good collection of sea-ice related graphs may be found at Neven’s Arctic Sea-Ice blog, here:

      The decline in volume is causing folks to come out and say “Hey, it looks right now as if Maslowski may have been right.” (Dr. Maslowski, of course, made the much-scorned prediction that we’d see a sub-1 million km2 minimum by 2016, plus or minus 3 years.) Most recent was Dr. P. Wadhams, of Cambridge, who made some comments to that effect during a panel discussion that received a little play:

      So, in the doubtful case that there is any ‘hysteria’ involved, it is far from limited to Dr. Rahmstorf.

      • Kevin,
        That is a good point about the McKay et al. (2008) paper being only for a small portion of the Arctic. In contrast, Polyak et al. (2010) was for the whole Arctic basin. Deniers like Cohenite are cherry picking again, now that is alarming (but not surprising).

    • Ig,
      The short answer is that Cohenite is in denial and has a history of misinforming on the web– he is reinforcing his denial here and misleading others. More importantly, he has demonstrated that the cryosphere is in fact very sensitive to warming. That is hardly reassuring.

      A good start to refuting his cherry-picking, misinformation and red herrings is this site. Also look at Polyak et al. (2010) and Miller et al. (2010) [Google will help you find them].

      Regarding rates of warming:

      “The Arctic temperature has been falling since 2005”

      He must be referring to the summer temperatures from the DMI data, which is in fact reanalysis data (not observations) for north of 80N. But the Arctic starts near 66N. A trend calculated using 7 data points (2005-2011) is meaningless; one cannot extract a signal from such noisy data using only 7 years’ worth. The same applies to the Arctic sea ice data, but the long-term trend in sea ice volume and extent remains sharply down (as Tamino has shown on this site) and even shows signs of accelerating, with a quadratic fit providing a better fit to the data than a linear one. He seems to be cherry picking data points and short periods which support his belief.

      Good luck!

    • Adding to the points made by Kevin McKinney & MapleLeaf, I’d reckon Cohenite is working off an agenda rather than the evidence. If temperatures in the Arctic have been falling since 2005, why was 2007 such a bad year for the ice? And after four more years of Cohenite’s falling temperatures, why has 2011, otherwise such an unexceptional year, broken the Arctic Sea Ice Extent record?
      Cohenite simply uses language which is incompatible with the evidence he waves.
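    MapleLeaf’s point about seven data points is easy to demonstrate with a quick simulation (synthetic numbers, not the actual DMI or sea-ice data): fit an OLS slope to series built from an assumed trend plus noise, and compare the slope’s standard error for 7 versus 30 annual values.

```python
# Illustration of why a 7-point "trend" is noise: the standard error of
# an OLS slope shrinks dramatically as the series lengthens. The trend
# and noise levels below are assumptions chosen for illustration only.

import random

def ols_slope_and_se(y):
    """OLS slope and its standard error for y against t = 0..n-1."""
    n = len(y)
    tbar = (n - 1) / 2
    ybar = sum(y) / n
    sxx = sum((i - tbar) ** 2 for i in range(n))
    slope = sum((i - tbar) * (yi - ybar) for i, yi in enumerate(y)) / sxx
    resid = [yi - ybar - slope * (i - tbar) for i, yi in enumerate(y)]
    s2 = sum(r * r for r in resid) / (n - 2)  # residual variance
    return slope, (s2 / sxx) ** 0.5

random.seed(0)
TRUE_TREND = 0.02  # assumed trend, deg C per year
NOISE_SD = 0.15    # assumed interannual noise, deg C

def synthetic(n):
    return [TRUE_TREND * i + random.gauss(0, NOISE_SD) for i in range(n)]

slope7, se7 = ols_slope_and_se(synthetic(7))
slope30, se30 = ols_slope_and_se(synthetic(30))
# With 7 points the 2-sigma interval on the slope typically spans zero;
# with 30 points the assumed trend starts to emerge from the noise.
```

    With these settings the 7-point standard error is several times larger than the 30-point one, so the "trend since 2005" is indistinguishable from zero even when a real trend is present.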

  44. So much for those who insist the data was leaked by an insider rather than hacked, Frank. Have you considered putting together all the evidence that it was a sophisticated hack? I believe the individual pieces of evidence aren’t that strong, but together they are overwhelming.

    Unrelated question: Does anyone know of a good database of station-recorded wind speeds and directions? I’d like to see how wind speed and direction relate to Central England Temperature. It’s been very warm for the time of year recently, and this coincides with winds coming from the continent. Typically when we get wind from the north we get very cold spells, and when wind comes from the continent we get warm spells. Of course it won’t be that simple, but I thought it would be interesting to compare CET anomalies over time with wind direction. Perhaps even some of the short-term noise can be removed that way.

  45. Trousers:

    So much for those who insist the data was leaked by an insider rather than hacked Frank.

    Gavin’s discussing the hacking of Real Climate, not CRU.

    However, I think it is reasonable to assume that the fact that the data was made available and RC hacked in a very short window of time indicates that the same people were involved in both, and I doubt a whistle-blower would illegally hack into RC, since such hacking is a crime here in the US. It seems likely that the same people criminally obtained the data and then hacked RC and published it there, as well as posting to the Russian ftp site.

    • Gavin's Pussycat

      Yeah, the idea that CRU was whistleblown and RC hacked reminds me of the ‘compromise theory’ that one of the Twin Towers was taken down by an aeroplane while the other was dynamited by Mossad… (I think xkcd came up with this)

  46. Since this is an open thread, I’ve got a question I haven’t seen addressed.

    Skeptics like to ask how long we would need to see a flat temp record to undermine support for AGW, i.e. indicating that recent decades warming was just natural variability.

    But doesn’t natural variability require that the temp record go back down to the baseline, and maybe below? Were we to have a sufficiently long period of flat temperatures, it might raise doubts about some of our theory, but it doesn’t seem to indicate variability as the solution to that concern. Some unknown buffering mechanism would more likely be indicated by that hypothetical flattening.

    • Actually a very good question. But perhaps also the wrong question, as IMHO, the global surface temperature record has been overplayed to a certain degree. ;-)

      So far all I’ve looked into is the BEST monthly land surface temperature record (using only OLS, mind you).

      Judging from past SAT time series, it is quite likely that a period of, say, 30 or even 40 years showing little or no significant global temperature trend could happen at some future date.

      Since 1998 has already been cherry-picked as a start year, we are currently 13 years into that particular run length. Trends near either end of a time series also suffer from the end-point problem: they must carry bigger error bars, because we have only half the information at the ends that we have in the middle.

      Dr. Ben Santer says 17 years to detect an anthropogenic signal; most climate scientists say 30 years to determine a climate trend. But in either case, the global surface temperature record may not tell us much about WHERE on the globe temperatures are changing the most.

      This is also why I tend to downplay anomaly time series: useful for trend, but not much else. Going from -50 to -45 degrees C (a 2.24% increase in absolute temperature, at the South Pole), from +35 to +41.9 degrees C (also a 2.24% increase, at Dallol, Ethiopia), or from -2.5 to +3.6 degrees C (again a 2.24% increase, at many locations on the globe; for the sake of argument, call those places Magic, Canada or Magic, Russia) are all the same relative change, but one stays below freezing, one stays above freezing, and one goes from frozen to melted. Of those three, I’d go with the transition temperature of water (from solid to liquid) as being the most important.

      Note that I don’t think we would ever see a uniform increase across the globe (that would really be anomalous); it’s just an example. Colder places are more likely to warm than warmer places, AFAIK.

      GHGs acting as a thermal blanket over the long term (say, century time scales) will tend to bring the spread of the global temperature distribution down.

    • Michael Hauber

      “Climate Impact of Increasing Atmospheric Carbon Dioxide” by James Hansen states that natural variability could cancel CO2 warming for periods of up to about 20 years. This was written in 1981.

      A RealClimate article in 2008 gives a similar time period.

  47. Question – what is the search engine behind the “22,000 academic journals”? DeSmogBlog and SourceWatch refer to this, but (AFAIK) don’t name it. (Is it Google Scholar? ISI Web of Knowledge? Something else?)

    (also, thanks to all above for the many fine lexical offerings, to describe aiming to deceive while uttering only truths.)

  48. And Now for Something Completely Different

    Dr. James Hansen’s growing financial scandal, now over a million dollars of outside income

    My comment over there;

    Another Wikipedia link;

    “Barratry also refers to persistently inciting others to engage in litigation or other disputes or quarrels outside of the courts.”

    So are ATI and WUWT engaging in barratry? As in yelling fire in a crowded movie theatre, or trying to incite a riot.

    And where is the Wikipedia entry for “American Tradition Institute” anyways?

    I could have sworn I saw something on ATI at Wikipedia in the recent past, or maybe it was somewhere else?

    Anyways, ATI needs to be “outed” to the maximum degree possible (calling Deep Climate and/or John Mashey).

  49. trousers:

    I do intend to draw up a diagram summarizing ‘just the facts’ on SwiftHack based on evidence from the inactivists themselves; and possibly another diagram that also includes Gavin Schmidt’s statements.

    — frank

  50. Hey, red alert folks, looks like the misinformers may be trying to pull a repeat of 2009:

    • Exciting! Maybe this time there will really be something of substance there that proves scientific fraud! All I’m seeing right now is ‘the cause’. Ooooh! That could be anything, so it must be something!

      What restraint and patience the swifthacker must have to not immediately release definitive proof of the worldgovernmenthoax. Any day now. Maybe just before the next climate conference.

      • Oh, I’ll put money on there not being anything of substance. But I’ll also put money on the emails being spun out of all proportion and context by cretins and the result being delays of substance when it comes to addressing climate change.

        Which makes me an extremely pissed off bunny.

    • Oh great – more ‘man writes email’ hysteria for the next six months/years.

      Not seen ’em but the readme sounds wonky. Almost as though the releaser believes releasing 1000s of stolen CRU emails just before a major climate conference will encourage international cooperation to tackle emissions and won’t be spun out of any context that’s been near reality by the GWPF’s chief sports scientist and his mates.

      • I don’t know who discovered it first, but the Readme puts the final nail in the “it’s a brave inside whistleblower” coffin. Notice how they wrote 5,000 and 220,000? As “5.000” and “220.000”.
        Only people who do not speak English as fluently as native speakers or scientists make that kind of mistake.

    • Looks like Gavin will spend another Thanksgiving writing on RC and another publication for Mosher and Fuller.


    “From O’Hare et al., in Biomass and Bioenergy, 35:10 4485-4487:
    “Indirect land use change for biofuels: Testing predictions and improving analytical methodologies” by S. Kim and B. Dale, presents a principal inference not supported by its results, that rests on a fundamental conceptual error, and that has no place in the current discussion of biofuels’ climate effects. The paper takes correlation between two variables in a system with many interacting factors to indicate (or contraindicate) causation, and draws a completely incorrect inference from observed sample statistics and their significance levels.”

    • Are you surprised? If correlation has ever meant causation then this paper must be seriously flawed because it’s published in E&E.

    free, online course, at Stanford; searching “probabilistic graphical model” + “climate change” finds
    this stuff:
    “Exploiting simulation model results in parameterising a Bayesian … network is a probabilistic graphical model, where … case study addresses the impact of climate change … of the Intergovernmental Panel on Climate Change …”

  53. Hmm, Anybody else remember a comment by Tony “Micro” Watts around the time of the BEST release to the effect of “they don’t know what I know”. I wonder what Mr. Watts knew of this email release and when he knew it…

    It sort of has his low-Wattage signature to it, doesn’t it?

  54. Deech, I don’t frequent WTFUWT as I have no desire for my head to explode from exposure to excess stupidity. I prefer to think of it as an asylum where the inmates can engage in their delusions without disturbing the sane.

    • I believe the term “epistemic closure” would apply to that place.

      • “Epistemic closure”–how elegant!

      • Is that another term for “dumber than a box of rocks”?

      • In the “box o’ rocks” category, Watts posts a set of “scandalous” emails, one dated 2001, and concludes that the World Bank is the secret puppet master behind the IPCC. I almost had enough of a heart to not point out that Robert Watson of the World Bank was Chair of the IPCC until 2002. Almost, but not really.

      • Yeah, actually. One that tamino himself may have coined: “Dumber than a bag of hammers.”

      • I can’t claim credit for the term – I think it was Julian Sanchez.

        One of the more striking features of the contemporary conservative movement is the extent to which it has been moving toward epistemic closure. Reality is defined by a multimedia array of interconnected and cross promoting conservative blogs, radio programs, magazines, and of course, Fox News. Whatever conflicts with that reality can be dismissed out of hand because it comes from the liberal media, and is therefore ipso facto not to be trusted. (How do you know they’re liberal? Well, they disagree with the conservative media!)

      • That article I linked to above is very interesting upon a re-reading. It confirms that Spencer is ideologically driven, and that if you check his calculations closely, there is invariably something fishy in there. But we knew that already…

  55. bratisla-san,

    In much of Europe a comma marks the decimal place and a period groups thousands, the reverse of the format in the States.

    • The point, of course, is that the English don’t.

      • I had one retort to that by someone using a French keyboard to make life easier dealing with the Euro (probably works in The City). If so, though, I’m sure things just got easier for Norfolk Constabulary ;)

      • Oh, working internationally within the Euro, sure, a UK banker may play along. But note that in your quote, you’re saying the English guy changed his notation to “make life easier dealing with the Euro”.

        The French not long ago (1-2 years) lost the battle to keep French the second official language of the EU (English being the first). Not sure if it’s come into effect or not.

      • Shorter response: the English don’t when dealing with the English. The UK not having adopted the Euro etc etc … she’ll deign to speak “foreign” (“.” v “,”) when she’s dealing with foreigners …

  56. Horatio Algeranon

    “Letters from Santa”
    –by Horatio Algeranon

    “The CRUtape Letters” was a hit
    And now we’ll get a sequel to it
    Piltdown Mosher, with his tales,
    Can take advantage of Christmas sales.

  57. The links in Curry’s new post show her own biases pretty clearly:

  58. Just to point out, someone in British Government seems to be fighting for the future.

    * Chris Huhne: a new global climate change treaty is not a luxury
    * Chris Huhne blasts Lord Lawson’s climate sceptic thinktank

    • Yes, Chris Huhne seems to be single handedly salvaging some shred of credibility on the (very low bar) claim to be the Greenest Government Ever.

  59. Leaked climate emails force carbon dioxide to resign

    CARBON dioxide has resigned from being a gas, it has been confirmed.

    The move came after a fresh batch of leaked emails between climate scientists showed that CO2 had been lying about what it is and what it does.

    According to one of the emails, sent by Julian Cook, a researcher at the University of East Anglia, carbon dioxide had got drunk and admitted it had made the whole thing up.

    Cook adds: “He says he’s not even a gas, never mind a greenhouse gas. He says his name’s Brian and he used to work for Kwik Fit in Norwich…..

    More on this stunning news at the link.

    • Very funny.

      Of course, drilling down, you get:

      “In the meantime we have to go back to our notes and work out what in the name of f*** has been coming out of engines and power stations in ever increasing quantities for the last 150 years.

      “Then we have to see if this thing traps heat in the atmosphere in the same way that Brian did.”

      Shrewd, that bit.

  60. The recent Love et al (2011) paper is a statistical examination begging for your expertise to review: “Are secular correlations between sunspots, geomagnetic activity, and global temperature significant?”

  61. I’ve been working my way through the Schmittner paper. Anyone else think it’s odd that their model seems to want to turn into a Snowball Earth so readily?

    I also cannot see where they specify their Prior, other than to say they are assigning near-zero probability to sensitivity values above 6 degrees per doubling. The only rationalization they give for this is the behavior of their model simulations, which seems kind of flimsy.

    It appears to me that this and the low delta in SST (which they also say is questionable) is the driving force for their favored values for sensitivity being lower than other estimates.

    All in all, I wouldn’t take too much comfort in this study.

  62. Ray, I saw a comment somewhere pointing out that these results are deeply alarming if they hold up. Only a 2.2C difference between New York now and New York under a couple of miles of ice means that ‘small’ temperature differences lead to huge changes in land conditions. If sensitivity is low, but consequences are extreme, we’re in even deeper doo-doo.

    Not much joy for anyone with this kind of outcome.

  63. Our default prior is uniform over our simulated range of ECS values (0.26 to 8.37 K). It gives equal prior probability to every ECS value in this range.

  64. Nathan, thank you for your response. I realize that your simulations show runaway cooling for sensitivity greater than ~6, but given that other studies have found higher sensitivities, do you think it is reasonable to assign zero prior probability to values greater than 8.37? I am reluctant to assign zero probability to any value in a Bayesian Prior, as that value then becomes impossible regardless of the evidence.

  65. We had to truncate the prior somewhere, because we only have a finite set of model runs, and it’s very dangerous to extrapolate the model output beyond the highest (or lowest) ECS run in our ensemble.

    The deeper question, though, is should we have chosen an ECS greater than 8.37 for the highest sensitivity run in our ensemble?

    In my opinion the answer is “no”, post facto. (It’s hard to know in advance how broad to make an ensemble.) Our posterior puts effectively zero probability of ECS above 5 K or so. Therefore it wouldn’t really matter if we had extended our prior range further, since even higher ECS values wouldn’t get any posterior weight either. (At least, not without a very dogmatic prior that heavily favors extremely high ECS values over lower values. A uniform prior wouldn’t permit higher ECS no matter how far you extended it.) I’d worry more if we had an “edge hitting” posterior with finite probability mass near where we truncated the prior.
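    Nathan's point is easy to check with a toy calculation. The Gaussian "likelihood" below is a stand-in assumption (centered at an illustrative 2.3 K; it is not the likelihood from the actual analysis): extending a uniform prior's upper bound from 8.37 to 20 leaves the posterior mean essentially unchanged, because the likelihood puts negligible mass up there.

```python
# Toy check: with evidence that dies off well below the truncation point,
# extending the uniform prior's upper bound does not move the posterior.
# The Gaussian likelihood (mean 2.3 K, sd 0.5 K) is an illustrative
# assumption, not the likelihood from the paper under discussion.

import math

def posterior_mean(prior_hi, prior_lo=0.26, mu=2.3, sd=0.5, n=20000):
    """Posterior mean of ECS under a uniform prior on [prior_lo, prior_hi]."""
    xs = [prior_lo + (prior_hi - prior_lo) * i / (n - 1) for i in range(n)]
    w = [math.exp(-0.5 * ((x - mu) / sd) ** 2) for x in xs]  # likelihood
    return sum(x * wi for x, wi in zip(xs, w)) / sum(w)

m_837 = posterior_mean(8.37)  # prior truncated as in the paper
m_20 = posterior_mean(20.0)   # much higher truncation point
# The two agree to numerical precision: the truncation point is
# irrelevant once the likelihood has effectively vanished below it.
```

    So as long as the likelihood really does collapse well inside the prior's range, where you truncate is a non-issue.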

  66. A slower increase in CO2 would not change ocean pH as dramatically as is happening now, because the current rate of CO2 increase is overwhelming the processes that remove it; that’s going to make some difference in how climate changes.

  67. brief quote from the abstract:

    “… photosynthetic carbon fixation by marine phytoplankton leads to formation of ∼45 gigatons of organic carbon per annum, of which 16 gigatons are exported to the ocean interior. Changes in the magnitude of total and export production can strongly influence atmospheric CO2 levels (and hence climate) on geological time scales, as well as set upper bounds for sustainable fisheries harvest. The two fluxes are critically dependent on geophysical processes that determine mixed-layer depth, nutrient fluxes to and within the ocean, and food-web structure. Because the average turnover time of phytoplankton carbon in the ocean is on the order of a week or less, total and export production are extremely sensitive to external forcing and consequently are seldom in steady state….”

    Surprised by those numbers and times?

  68. Hi Nathan,
    Again, thanks for participating in the discussion. Your perspective is valuable.

    While I agree that ECS > 5 is unlikely in reality, I’d be reluctant to assign zero prior probability, simply because zero prior probability equates to zero possibility. This complicates comparison or inclusion of your result with results based on other lines of evidence. This is of course the problem with using a uniform Prior: it has to be zero somewhere.

    Did you consider using a thick-tailed Prior? James Annan looked at using a Cauchy prior, but a Pareto or Burr distribution might work better. I am willing to assign zero probability to ECS < 0, at least. You could even try a log-Cauchy.

  69. A uniform prior doesn’t have to be zero anywhere. You don’t have to truncate a uniform prior, you can let it extend to infinity. In practice, we can’t do that because we only had a small number of runs. As I’ve said, it’s kind of irrelevant, because a dogmatic zero prior is only a problem if it excludes regions of significant likelihood, and ours does not.

    Our uniform prior is even “thicker” than a Cauchy prior, since a Cauchy prior at least tapers off toward infinity, while a uniform prior does not. A bounded uniform prior has a truncated tail, but then, so does a bounded Cauchy prior. For reasons explained above, we can’t consider unbounded priors.

    • Nathan, a uniform prior over all possible values is by definition an improper prior–in effect, you are simply taking the likelihood as a probability distribution, which is incorrect.
      And while a Prior that is zero outside of areas of significant likelihood is not an issue if you are considering a single study, it can be an issue if you are looking to combine estimates in some way.

      One way around this is an Empirical Bayes approach–in effect you cheat and center your prior on the maximum likelihood. Yes, it is technically cheating on Rev. Bayes, but it does at least work out mathematically.

      • Most practicing Bayesians see little wrong with an improper prior as long as it leads to a proper posterior, but there are some philosophically dogmatic Bayesians who will object. Regardless, in our case, you can extend the prior’s upper bound all you want, and you will never get any significant probability above our current posterior.

        And no, priors that are zero outside areas of significant likelihood typically do not give problems even when combining other estimates, because the likelihood will in almost all cases still kill the posterior, unless the likelihood from other estimates gives a huge probability mass outside the prior’s range. This is not the case even for typical “heavy tailed” distributions.
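        For what it's worth, the Cauchy-versus-uniform question can be poked at numerically. This sketch uses an illustrative Gaussian likelihood (mean 2.3 K, sd 0.5 K, and a Cauchy scale of 3 K — all assumptions, not values from the paper) and compares the posterior tail P(ECS > 5 K) under a bounded uniform prior and a half-Cauchy prior.

```python
# Compare posterior tail mass P(ECS > 5 K) under a bounded uniform prior
# and a half-Cauchy prior, using the same stand-in Gaussian likelihood.
# The likelihood mean/sd and the Cauchy scale are illustrative assumptions.

import math

def tail_prob(prior, lo=0.0, hi=50.0, cut=5.0, mu=2.3, sd=0.5, n=50000):
    """Posterior P(ECS > cut) on a grid, for a given prior density."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    w = [prior(x) * math.exp(-0.5 * ((x - mu) / sd) ** 2) for x in xs]
    return sum(wi for x, wi in zip(xs, w) if x > cut) / sum(w)

uniform = lambda x: 1.0 if x <= 8.37 else 0.0    # bounded uniform prior
cauchy = lambda x: 1.0 / (1.0 + (x / 3.0) ** 2)  # half-Cauchy, scale 3 K

p_uniform = tail_prob(uniform)
p_cauchy = tail_prob(cauchy)
# Both are negligible, and the "thick-tailed" Cauchy prior actually yields
# LESS tail mass than the uniform, because it down-weights the tail
# relative to the likelihood's peak while the uniform does not.
```

        This is consistent with Nathan's remark that a bounded uniform prior is "thicker" than a Cauchy within its range: under this illustrative likelihood the uniform leaves slightly more posterior mass above 5 K than the Cauchy, and either way the tail mass is tiny.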

  70. There are lots of graphs which show the swings in temperature and CO2 concentrations over the last several hundred thousand years in which there are temperature swings of ~13C. Obviously the climate is capable of changing temperature quickly — geologically speaking. I don’t understand the technical meaning of ECS. Is that simply the CO2-induced component in the changes in temps? Or is it the sum total of all the changes including feedbacks from a doubling of CO2? If it’s supposed to be the sum total, what has happened to prevent 13C swings in temp like those the planet has seen in the past?

  71. ECS includes climate related feedbacks (but not generally carbon cycle feedbacks). It is often defined to exclude some slow feedbacks like those due to the large glacial ice sheets (which are treated more like boundary conditions). That’s one reason why the temperature change implied by ECS from the glacial-interglacial change in CO2 isn’t as large as the actual glacial-interglacial temperature change.
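    As a rough worked example of why ECS applied to CO2 alone falls short of the full glacial-interglacial change (using standard textbook numbers, which are not taken from the paper): with CO2 rising from a glacial ~190 ppm to a pre-industrial ~280 ppm, and assuming ECS = 3 K,

```latex
\Delta F = 5.35\,\ln\!\frac{280}{190} \approx 2.1\ \mathrm{W\,m^{-2}},
\qquad
\Delta T \approx \mathrm{ECS}\cdot\frac{\Delta F}{F_{2\times}}
          = 3\ \mathrm{K}\cdot\frac{2.1}{3.7} \approx 1.7\ \mathrm{K},
```

    well short of common estimates of the global glacial-interglacial difference (often put at around 4 to 5 K, though the analysis discussed here argues for less); the remainder is attributed mainly to ice-sheet albedo and other slow feedbacks treated as boundary conditions.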

    • If the change in temps due strictly to 2xCO2 is rather small, then the bulk of the change in temps from the LGM to the present is “left on the table.” The change in CO2 from the LGM to the onset of the Industrial Revolution was less than a single doubling. So, was most of the change in temps due to the change in albedo from ice sheet loss?

      • Hansen has the greenhouse gas forcing between pre-industrial Holocene and the ice age 20,000 years ago at -3 ± 0.5W/m² (with CO₂ at -2.4W/m²), and ice sheets and vegetation at -3.5 ± 1W/m², with the greatest uncertainty due to ice sheet sizes.

      • I have to admit to being lost here. One can look at the temperature graphs over the last several hundred thousand years and see ~13C temperature swings during which CO2 never quite doubled. I’m lost as to what ECS means in the face of a geological record which is so dramatically different.

        [Response: The large temperature change you note is NOT the global average, it’s for limited regions at the poles, which are subject to considerable polar amplification. The global average temperature change is much less.]

      • Thanks. A grand total of ZERO of those graphs that I’ve looked at over the years indicate that the temperature change they record is for the polar regions. (The graphs I’m referring to show a huge spike in temps followed by a slow, stair-step decline. This pattern is repeated 4-5 times over the last half million years or so.)

        So none of the preceding discussion about ECS made sense to me.

        Note to world: label your graphs clearly.

      • If you were to look carefully at all those graphs, the ones from the well respected peer reviewed climate science literature, don’t be too surprised if you see acronyms such as GRIP, WAIS, EPICA, etceteras, as they all happen to be taken from ice cores in either Greenland and/or Antarctica.

        There are also ice core data from high altitude glaciers.

        You would need to find a truly worldwide paleo reconstruction (ice cores, sediments, tree rings, etceteras) from 22kya to the present, and I don’t even know if such worldwide paleo reconstructions of 24kyr duration (LGM to present) have ever been done (present paper excluded, as I have not had the time to read it in detail).

        AFAIK all the long-term (10kyr and above) paleo reconstructions I’ve ever seen were taken from ice cores using δ18O as a proxy for temperature. Although there may be several sediment paleo reconstructions covering the last 24kyr that I am unaware of, as I’ve mostly looked at ice core reconstructions.

  72. I think cutting off the uniform prior at 8.37 seems reasonable. Correct me if I am wrong, but I don’t think any reported central value for climate sensitivity has been above 5.5 or 6C, although higher values are in the range of uncertainty of many estimates. Similar to Annan, I think too much emphasis has been placed on the “fat tails” when the reality is that nearly all reported estimates of the value of CS (as opposed to the uncertainty range) come in the 2 – 4C range.

    Nathan – While there are obviously caveats with the results of your paper, I think one very important advance was the attempt to use the temperature field of the globe instead of a globally averaged temperature. Because of the heterogeneous distribution of land mass, as well as the significantly heterogeneous change in the distribution of solar insolation due to the Milankovitch cycles, this seems to me a much more appropriate approach. The downside, of course, is that the distribution and availability of proxy data for many areas of the globe are limited.

    • Bob N., my issue with the use of uniform Priors is that they assign zero probability (=impossible) to a value, and it will stay that way for time immemorial regardless of the evidence. Sensitivity analyses are still in a realm where priors can dominate the analysis.

      • Assigning zero probability to a value irrespective of evidence is not a concern when the evidence itself assigns zero (or essentially zero) probability to that value. The climate sensitivity prior certainly does not dominate our analysis, although it could if we considered additional uncertainties (e.g., if we allowed an uncertain forcing prior to approach zero).

  73. BobN,

    Fritz Moller reported a sensitivity of 10 K in 1961, but his model turned out to be flawed.

  74. Nature Climate Change turned down my article. Without submitting it to peer review. Nothing new enough to be of wide interest to the climate change community, apparently.

    • Sorry to hear it. Publishing can really be a bear.

    • Nature Climate Change probably turns down at least 70% of submissions before even considering sending a manuscript out for review (and likely further rejects more than half of those papers that are sent out for review). I don’t know the stats for NCC, but Nature itself has only an 8% acceptance rate. The standard procedure is to reformat and submit to a less selective journal. There are plenty of good journals out there that aren’t Nature.

  75. Those who are into flushing their nasal passages with their favorite hot beverage might want to take a look at the mind-blowing helping of incompetence that Anthony Watts just served up over at WTFWT (linky here:

    Priceless Climategate email 682: Tom Wigley tells Michael Mann that his son did a tree ring science fair project (using trees behind NCAR) that invalidated the centerpiece of Mann’s work:

    ‘A few years back, my son Eirik did a tree ring science fair project using trees behind NCAR. He found that widths correlated with both temp and precip. However, temp and precip also correlate. There is much other evidence that it is precip that is the driver, and that the temp/width correlation arises via the temp/precip correlation’

    Not quite up there with Watts’ ground-breaking histogram analysis of a couple of years ago, but pretty darned close…

    • Caerbannogg, Hmm, what does it say that you are a connoisseur of the work of Tony “Micro” Watts? You know, it’s a little like having seen every movie Ed Wood ever made.

    • What’s funny is that they were critiquing the Soon and Baliunas 2003 paper in that exchange, and SB03’s uncritical linkage of precipitation as a temperature proxy. Mann says at one point:

      “Unfortunately, that’s not the task at hand. SB03 have no appreciation whatsoever for these sorts of subtle, legitimate considerations, which involve thinking in a much higher sphere than the one they are thinking in, and certainly, the one that they are playing to. Their logic is much more basic, and immensely less reasonable, than anything we’re talking about here. Their logic, in essence, literally EQUATES hydroclimatic and temperature anomalies, since they hold that the existence of a large extreme in precipitation/drought in a particular region is as good as evidence of anomalous warmth, in support of the proposition of e.g. a “medieval warm period”. So, in a very roundabout way, what I’m saying is, lets definitely not give these bozos more credit than they deserve!”

      It is quite apparent that Mann and Wigley were aware of the uncertainties and limitations involved, and that it was S & B who were not (or chose to ignore them if they did).

      Also, more evidence that the so-called “team” were concerned with the poor quality of the SB03 paper, and not any perceived scientific threat to their position:

      “…but most of all we really have to do, in as simple terms as possible, is explain why the SB03 stuff is so fundmentally flawed.”

      • What’s funny is that they were critiquing the Soon and Baliunas 2003 paper in that exchange, and SB03’s uncritical linkage of precipitation as a temperature proxy.

        Actually, it’s even worse than that. S&B equate high *and* low precipitation anomalies as markers for warm periods during the MWP, and then they turn around and use high/low precipitation anomalies as markers for *cool* periods during the LIA!

        Here’s the relevant snippet from SB2003:

        Anomaly is simply defined as a period of more than 50 yr of sustained warmth, wetness or dryness, within the stipulated interval of the Medieval Warm Period, or a 50 yr or longer period of cold, dryness or wetness within the stipulated Little Ice Age.

        And to think that *this* paper was actually waved around on the US Senate floor as evidence that global-warming is a fraud!

        F%@!ing pathetic!

    • Gavin's Pussycat

      Yep. There’s a reason temperature reconstructions are made based on trees at the tree line and not behind the NCAR building.

  76. Anybody have any experience working with hyperexponential distributions? I’m looking at a special case where the “time to failure” varies continuously from 0 to infinity according to a well-behaved (but possibly thick-tailed) probability distribution, and I’m trying to get a closed expression for the central moments. Anyone? Bueller?
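    One way to get at those moments numerically: conditioning on the rate, E[Tⁿ | λ] = n!/λⁿ, so each raw moment is an integral of λ⁻ⁿ against the mixing density, and the central moments follow from the raw ones as usual. A minimal sketch, using a gamma mixing density as a stand-in (my choice, not from the question above; the marginal of T is then a heavy-tailed Lomax, and the n-th moment only exists for n below the gamma shape parameter):

    ```python
    import math
    import numpy as np

    # Hyperexponential "time to failure": T | lambda ~ Exponential(lambda),
    # with lambda drawn from a continuous mixing density f on (0, inf).
    # Since E[T^n | lambda] = n! / lambda^n, the raw moments are
    #   E[T^n] = n! * integral of lambda^(-n) * f(lambda) d lambda.

    # Stand-in mixing density: gamma(shape=a, rate=b). The marginal of T
    # is then heavy-tailed (Lomax), and E[T^n] exists only for n < a.
    a, b = 3.0, 2.0
    lam = np.linspace(1e-6, 60.0, 600001)
    f = b**a / math.gamma(a) * lam ** (a - 1) * np.exp(-b * lam)
    dx = lam[1] - lam[0]

    def raw_moment(n):
        """Numerical E[T^n]; blows up with the grid if n >= a."""
        return math.factorial(n) * np.sum(lam ** (-n) * f) * dx

    m1 = raw_moment(1)       # closed form here: b / (a - 1) = 1.0
    m2 = raw_moment(2)       # closed form: 2*b**2 / ((a - 1)*(a - 2)) = 4.0
    variance = m2 - m1**2    # second central moment = 3.0
    print(m1, variance)
    ```

    Whether a fully closed form exists depends on whether E[λ⁻ⁿ] is tractable for the particular mixing density; the thick-tailed case is exactly where higher moments stop existing at all.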

    • Sorry, that’s outside my area. Maybe the kid who did the tree ring analysis could help you out.

      • The folks at Watts’ would believe anything the kid said over anything Tamino (or anyone else here) would say.

  77. Tamino, in a post some time ago, you stated that the PDO doesn’t pass the tests for true periodicity. Can you elaborate on this? I’m running into people who are claiming that the PDO has a 60-year cycle and this can completely explain our temp trends over the last century or so. I’m interested in nailing down the claim that there is a 60-year cycle. I’m not sure how it’s possible to make this claim, since as far as I know PDO data extends back only to 1900, which wouldn’t even make two full cycles.

    So–how is periodicity tested, and in what way does the PDO data fail those tests?

    [Response: You can begin with this

    and this

    There’s also this about PDO:

    You may then want to ask further questions.]
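    One self-contained illustration of the sample-length problem (the AR(1) coefficient of 0.5 and the 110-year record length below are illustrative choices, not fitted to the PDO): red noise with no cycle in it at all very often puts its largest periodogram peak at multidecadal periods, so a "peak" near 60 years in roughly a century of data is weak evidence of genuine periodicity.

    ```python
    import numpy as np

    # How often does pure AR(1) red noise (no cycle at all) put its
    # biggest periodogram peak at a multidecadal period? Record length
    # and AR coefficient are illustrative, not fitted to the PDO.
    rng = np.random.default_rng(0)
    n, phi, trials = 110, 0.5, 2000
    freqs = np.fft.rfftfreq(n)[1:]        # cycles/year, zero freq dropped
    hits = 0
    for _ in range(trials):
        e = rng.standard_normal(n)
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + e[t]
        power = np.abs(np.fft.rfft(x - x.mean())[1:]) ** 2
        if 1.0 / freqs[np.argmax(power)] > 40.0:   # peak period > 40 yr
            hits += 1
    print(hits / trials)   # a substantial fraction, from noise alone
    ```

    A real test has to beat this red-noise null (e.g., compare the peak against the spectrum of fitted AR(1) noise), which is much harder than just eyeballing two apparent swings.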

  78. I finally got around to seeing if AIC could be used to determine whether Arctic Sea Ice extent is decreasing linearly or (at least) quadratically.

    To do this, I constructed 2 generalized linear models:

    Model 1 assumed the expected value (mean) for the sea ice minimum was changing linearly with time, with (for lack of a better model) Gaussian errors about the mean.

    Model 2 also assumed Gaussian errors, but assumed that the mean followed a quadratic trend.

    The results strongly favored a quadratic trend. I haven’t looked at higher terms, but I suspect there isn’t enough data to discern a cubic or quartic term. I’ll check tonight and see when I predict the Arctic to be ice-free based on the quadratic model.

    I was kind of surprised this worked as well as it did.
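    The mechanics of that comparison can be sketched in a few lines. Everything below is synthetic (an invented trend, curvature, and noise level standing in for the September minima), just to show AIC = 2k − 2·log-likelihood for Gaussian least-squares fits:

    ```python
    import numpy as np

    # Synthetic stand-in for September sea-ice minima: a genuinely
    # quadratic trend plus Gaussian noise (all numbers invented).
    rng = np.random.default_rng(1)
    years = np.arange(1979, 2012)
    t = years - years.mean()
    ice = 7.0 - 0.05 * t - 0.004 * t**2 + rng.normal(0.0, 0.3, t.size)

    def aic(y, X):
        """AIC for an OLS fit with unknown Gaussian error variance."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        n = y.size
        sigma2 = resid @ resid / n          # ML variance estimate
        loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
        k = X.shape[1] + 1                  # coefficients + variance
        return 2.0 * k - 2.0 * loglik

    X1 = np.column_stack([np.ones_like(t), t])          # linear mean
    X2 = np.column_stack([np.ones_like(t), t, t**2])    # quadratic mean
    print(aic(ice, X1), aic(ice, X2))   # lower AIC = preferred model
    ```

    With real minima you would swap in the observed extents for `ice`; the quadratic model only wins if the curvature buys enough likelihood to overcome the extra-parameter penalty.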

  79. When is the next Congressional climate hearing?