The Rise and Fall of Judith Curry

We’ve been looking closely at the written testimony from Judith Curry before a recent meeting of the Environment and Public Works committee of the U.S. Senate. What we’ve seen so far argues against relying on Curry to give accurate and relevant information.


One of her main pieces of evidence that the IPCC AR5 (5th Assessment Report) is wrong to express greater confidence than the AR4 is her discussion of sea level. We’ve already looked at a small part of her discussion, a statement so misleading that I was amazed she would actually say it.

But her main argument regarding sea level rise is this:


It is seen that the rate of rise during 1930-1950 was comparable to, if not larger than, the value in recent years. Hence the data does not seem to support the IPCC’s conclusion of a substantial contribution from anthropogenic forcings to the global mean sea level rise since the 1970s.

She’s referring to (and reproduces) this graph from the IPCC AR5:

[IPCC AR5 Figure 3.14: 18-year trends in global mean sea level rise from three tide gauge reconstructions and satellite altimetry]

It shows trend estimates (i.e., the rate of sea level rise) over time, based on linear regression over 18-year time spans from three global sea level data sets (reconstructions based on tide gauge data), together with the trend over the last 18 years from satellite data (labelled “altimeter”), which at the time of writing covered only 18 years. That, I believe, is why they chose 18 years as their time scale. The times plotted are the beginning of each 18-year time span.

I have three complaints about Curry’s argument:

  • 1) Curry fails to mention why IPCC AR5 shows this graph or what they say about it;
  • 2) Even if true, Curry draws the wrong (and unjustified) conclusion;
  • 3) The data show strong evidence of acceleration in the 20th century.

    Let’s take each in turn.

    1) Curry fails to mention why IPCC AR5 shows this graph or what they say about it

    Here’s what they say:


    A long time-scale is needed because significant multidecadal variability appears in numerous tide gauge records during the 20th century (Holgate, 2007; Woodworth et al., 2009; Mitchum et al., 2010; Woodworth et al., 2011; Chambers et al., 2012). The multidecadal variability is marked by an increasing trend starting in 1910–1920, a downward trend (i.e., leveling of sea level if a long-term trend is not removed) starting around 1950, and an increasing trend starting around 1980. The pattern can be seen in New York, Mumbai, and Fremantle records, for instance (Figure 3.12), as well as 14 other gauges representing all ocean basins (Chambers et al., 2012), and in all reconstructions (Figure 3.14). It is also seen in an analysis of upper 400 m temperature (Gouretski et al., 2012; Section 3.3.2). Although the calculations of 18-year rates of GMSL rise based on the different reconstruction methods disagree by as much as 2 mm yr–1 before 1950 and on details of the variability (Figure 3.14), all do indicate 18-year trends that were significantly higher than the 20th century average at certain times (1920–1950, 1990–present) and lower at other periods (1910–1920, 1955–1980), likely related to multidecadal variability. Several studies have suggested these variations may be linked to climate fluctuations like the Atlantic Multidecadal Oscillation (AMO) and/or Pacific Decadal Oscillation (PDO, Box 2.5) (Holgate, 2007; Jevrejeva et al., 2008; Chambers et al., 2012), but these results are not conclusive.

    While technically correct that these multidecadal changes represent acceleration/deceleration of sea level, they should not be interpreted as change in the longer-term rate of sea level rise, as a time series longer than the variability is required to detect those trends. Using data extending from 1900 to after 2000, the quadratic term computed from both individual tide gauge records and GMSL reconstructions is significantly positive (Jevrejeva et al., 2008; Church and White, 2011; Rahmstorf and Vermeer, 2011; Woodworth et al., 2011). Church and White (2006) report that the estimated acceleration term in GMSL (twice the quadratic parameter) is 0.009 [0.006 to 0.012] mm yr-2 (1 standard deviation) from 1880 to 2009, which is consistent with the other published estimates (e.g., Jevrejeva et al., 2008; Woodworth et al., 2009) that use records longer than 100 years. Chambers et al. (2012) find that modelling a period near 60 years removes much of the multidecadal variability of the 20th century in the tide gauge reconstruction time series. When a 60-year oscillation is modeled along with an acceleration term, the estimated acceleration in GMSL since 1900 ranges from: 0.000 [–0.002 to 0.002] mm yr–2 in the Ray and Douglas (2011) record, 0.013 [0.007 to 0.019] mm yr–2 in the Jevrejeva et al. (2008) record, and 0.012 [0.009 to 0.015] mm yr–2 in the Church and White (2011) record. Thus, while there is more disagreement on the value of a 20th century acceleration in GMSL when accounting for multi-decadal fluctuations, two out of three records still indicate a significant positive value. The trend in GMSL observed since 1993, however, is not significantly larger than the estimate of 18-year trends in previous decades (e.g., 1920-1950).
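
    (A quick aside on the arithmetic: the “acceleration” quoted above is just twice the quadratic coefficient from a quadratic fit of sea level against time. A minimal sketch of that calculation, using made-up numbers rather than any actual reconstruction, might look like this.)

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1880, 2010)
        true_accel = 0.010                                    # mm/yr^2, made up for illustration
        sealevel = (1.5 * (years - 1880)
                    + 0.5 * true_accel * (years - 1880) ** 2
                    + rng.normal(0, 5, years.size))           # hypothetical noisy series (mm)

        t = years - years.mean()                              # center time for numerical stability
        c2, c1, c0 = np.polyfit(t, sealevel, 2)               # coefficients, highest degree first
        print(f"estimated acceleration: {2 * c2:.4f} mm/yr^2 (true value {true_accel})")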

    The whole point of this discussion, and of their figure 3.14, is to show that if you want to estimate changes in the rate of sea level rise which are climatically relevant, “A long time-scale is needed.” The IPCC report didn’t ignore or downplay multi-decadal variability (although Curry seems to think it did, about everything, not just sea level rise). It doesn’t pretend that multidecadal variability isn’t natural variation; in fact it gives at least one possible root cause which is “natural.” The IPCC report didn’t ignore either the existence or the possible causes of multidecadal variability; it decided instead to deal with it. I think the one who really needs to “deal with it” is Judith Curry.

    The main point of the IPCC discussion is that those short-term, decadal to multi-decadal, possibly natural variations should not be interpreted as change in the longer-term rate of sea level rise, so don’t use them to draw conclusions about climatically induced acceleration or its absence. I thought their statement was pretty clear. Apparently Judith Curry either didn’t get it, or didn’t want to, because she has done exactly what the IPCC report warns you should not do. My opinion: classic Curry.

    As for the truth or falsehood of the claim that “the rate of rise during 1930-1950 was comparable to, if not larger than, the value in recent years,” let’s take a look at this graph of the rate of sea level rise:

    [Figure: estimated rate of sea level rise, from Rahmstorf et al. (2012), their Figure 3: 10-year trend estimates (thick red) and the long-term rate (dashed gray)]

    The thick red line shows the estimated rate based on linear regression applied to 10-year time spans. The dashed gray line shows the long-term rate, i.e. the one that’s relevant to climate change. Note that the decadal rate in the 1940s is comparable to, if not higher than, the most recent decadal rate, but the long-term rate in the 1940s is not.

    In this case, it is most certainly the long-term rate which is correct while all those fluctuations are just plain wrong.

    You might be thinking, “Who died and made Tamino the arbiter of what’s ‘right’ and ‘wrong’ in estimates of the rate of sea level rise?” Or maybe “Tamino is just calling the fluctuations ‘wrong’ because he doesn’t like them!”

    This is one of those cases in which I can be sure, because the data which produce this figure are artificial data. Therefore the answer is already known — with certainty. The dashed gray line is the right answer, the true sea level rise signal. The thick red line results from the fact that Rahmstorf et al. (2012, Clim. Dyn. 39, 861–875, DOI 10.1007/s00382-011-1226-7) added noise to the artificial signal, noise which in fact emulates that found in the Church & White data set.
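
    Here’s a rough illustration of the same idea (not Rahmstorf et al.’s actual noise model, just made-up numbers): build an artificial series with a known, smooth rate, add autocorrelated noise, and watch how much the short-window rates swing compared to the truth.

        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(1880, 2010)
        true_rate = 1.0 + 0.01 * (years - 1880)       # the known rate, slowly accelerating (mm/yr)
        signal = np.cumsum(true_rate)                  # the "true" sea level curve (mm)

        # crude AR(1) noise as a stand-in for the messy variability in tide-gauge reconstructions
        noise = np.zeros(years.size)
        for i in range(1, years.size):
            noise[i] = 0.7 * noise[i - 1] + rng.normal(0, 4)
        data = signal + noise

        # 10-year "moving velocity" filter: OLS slope within each 10-year window
        win = 10
        rates = np.array([np.polyfit(years[i:i + win], data[i:i + win], 1)[0]
                          for i in range(years.size - win + 1)])
        print(f"10-yr rates wander from {rates.min():.1f} to {rates.max():.1f} mm/yr,")
        print(f"while the true rate only runs from {true_rate.min():.1f} to {true_rate.max():.1f} mm/yr")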

    Rahmstorf et al. also point out that much of the noise in sea level data isn’t just “ordinary” noise. It’s not entirely (or perhaps even predominantly) due to “natural variability,” it’s in large part due to coverage bias. Since it’s bias and not just stationary noise, that means that even the “error bars” we compute by treating it as noise can exclude the true value. They also point out that satellite altimetry shows less fluctuation than reconstructions based on tide gauge data, arguing that some if not much of the observed decadal and multidecadal variability may be an expression of noise, not a reflection of signal.

    It’s also worth mentioning that of the three global data sets, the one which shows the highest rate in the first half of the 20th century is the Jevrejeva et al. data. Their unusual method of averaging tide gauge records ends up giving the northern hemisphere oceans greater statistical weight than the southern hemisphere oceans, even though the southern hemisphere oceans cover a far larger area. That’s exactly the kind of treatment which can lead to coverage bias, not just extra noise.

    There’s another aspect which should be mentioned. Computing trend rates based on sliding 18-year windows is the application of a “moving velocity filter” to the data. That means it really represents the estimated rate at the mid-point in time of the observation window. Like with moving averages, in most cases we insist that the window is complete, so our very first estimate applies to half a window width later than the start of the data while the final estimate applies to half a window width before the end of the data. We lose half a window width at each end of the time series.
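
    For the curious, the bookkeeping is simple enough to sketch in a few lines of code (this is just an illustration of the filter, not the exact code behind the figures below):

        import numpy as np

        def moving_velocity(years, sealevel, window=18):
            """OLS slope in each complete window, assigned to the window's midpoint."""
            years, sealevel = np.asarray(years, float), np.asarray(sealevel, float)
            mids, slopes = [], []
            for i in range(len(years) - window + 1):
                y, s = years[i:i + window], sealevel[i:i + window]
                slopes.append(np.polyfit(y, s, 1)[0])    # rate in mm/yr
                mids.append(y.mean())                    # midpoint of the window
            return np.array(mids), np.array(slopes)

        # With annual data running 1880-2010, the first estimate lands near 1888.5 and
        # the last near 2001.5: half a window (9 years) is lost at each end.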

    Here, for example, are the 18-year trends (moving velocities) of the Church & White data, with times plotted being the midpoints of the individual windows:

    [Figure: 18-year trends of the Church & White data, plotted at the midpoints of the windows]

    Note that it barely goes past the year 2000, but even so the final value is still the highest — although not by much, so the result is consistent with “comparable to”. But notice also that at the very end the rate has been increasing, so it may well have increased further after 2000. To estimate the rate from the Church & White data all the way up to the year 2010 (when the data end), we need a better way to estimate the rate than the standard “moving velocity” filter.

    I, and others (Jevrejeva et al., Moore et al., Rahmstorf et al.) have argued that a good way to do that is with nonlinear smoothing. Those other authors have used SSA (singular spectrum analysis) to accomplish this, while I’ve tended to use lowess smoothing and pick out the linear coefficient at each moment as the trend estimate for that moment. Let’s see whether or not this is consistent with the results of the moving-velocity filter at the same time scale (nonlinear smoothing result in red):
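
    For those who want to see the idea in code, here’s a bare-bones version of the local-slope approach (tricube-weighted local linear fits); it’s a sketch of the idea only, not the exact code I used for the figures:

        import numpy as np

        def local_slopes(t, y, halfwidth=9.0):
            """Local rate dy/dt at each time, from tricube-weighted linear fits
            over +/- halfwidth (9 yr here, i.e. roughly an 18-yr time scale)."""
            t = np.asarray(t, dtype=float)
            y = np.asarray(y, dtype=float)
            slopes = np.empty_like(y)
            for j, t0 in enumerate(t):
                u = (t - t0) / halfwidth
                w = np.clip(1.0 - np.abs(u) ** 3, 0.0, None) ** 3   # tricube weights
                X = np.column_stack([np.ones_like(t), t - t0])      # intercept + slope terms
                W = np.diag(w)
                beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
                slopes[j] = beta[1]                                  # local trend estimate
            return slopes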

    [Figure: the same 18-year moving-velocity trends with the nonlinear (lowess-based) rate estimate overlaid in red]

    Yes. Yes, it is.

    What does it suggest happened after 2000? This:

    [Figure: the lowess-based rate estimate extended to the end of the Church & White data in 2010]

    Apparently the rate did keep increasing (in this data set), so the final estimated rate turns out to be the largest in the entire time span, and just about the same as the rate indicated by satellite altimetry.

    Bottom line: even the claim that “the rate of rise during 1930-1950 was comparable to, if not larger than, the value in recent years” is by no means established as surely as Curry believes. Or, in my opinion, as surely as the IPCC report states.

    2) Even if true, Curry draws the wrong (and unjustified) conclusion

    Suppose for the sake of argument that “the rate of rise during 1930-1950 was comparable to, if not larger than, the value in recent years.” Might even be true. How do you get from that to “Hence the data does not seem to support the IPCC’s conclusion of a substantial contribution from anthropogenic forcings to the global mean sea level rise since the 1970s,” especially if you don’t even mention the rate of rise on climatically relevant time scales? Logic fail.

    3) The data show strong evidence of acceleration in the 20th century.

    Jevrejeva et al. (2008, GRL, 35, L08715, doi:10.1029/2008GL033611) applied SSA nonlinear smoothing to estimate the time variations of sea level rise rate throughout their data set, getting this (the black line is global, the blue line for the northeast Atlantic region):

    [Figure: rate of sea level rise from Jevrejeva et al. (2008), their Figure 3: global (black) and northeast Atlantic (blue)]

    I did the same thing using lowess smoothing, getting this:

    [Figure: lowess-based rate of sea level rise for the Jevrejeva et al. data]

    Clear result: in addition to multi-decadal variations there is also a consistent increase in the rate of sea level rise throughout the time span. That’s called “acceleration.”

    Rahmstorf et al. did a similar analysis on the Church & White 2006 data, the Church & White 2011 data, and the Jevrejeva data (top: Church & White 2006; center: Church & White 2011; bottom: Jevrejeva et al. 2008). The colored lines are what we’re interested in, the estimated rise rates:

    [Figure: estimated rates of sea level rise from Rahmstorf et al.: Church & White 2006 (top), Church & White 2011 (center), Jevrejeva et al. 2008 (bottom)]

    In all three cases there is a consistent increase in the rate (a.k.a. “acceleration”) superimposed on multidecadal variation, and in all three cases the estimated rate at the end is the highest of all.

    My Opinion

    Here’s how I see it: Judith Curry really did nothing more nor less than to scour the IPCC AR5 looking for stuff she could claim weakens the case for dangerous man-made climate change. In so doing, she was willing to ignore what the IPCC report actually says in favor of her preferred interpretation of things. She demonstrated more than once that she doesn’t have sufficient knowledge of what the data have to say, or of what the peer-reviewed literature says, to know what she’s talking about.

    It’s rather disappointing, really, because if you’re determined to find fault that’s usually ridiculously easy in any report as lengthy as the IPCC AR5, but she still managed to botch the job. Dismally. She also utterly failed to mention, perhaps even to notice, anything in the IPCC report which strengthens the case. Seriously — are we actually to believe that there isn’t anything like that at all?

    I also expect that Judith Curry will staunchly refuse to learn anything from the many critics (I’m far from the only one) who have found serious faults in her testimony.

    But, that’s just my opinion.

  • 67 responses to “The Rise and Fall of Judith Curry”

    1. The climate sensitivity discussion is even more ridiculous. She uses the following graph to argue that a growing divergence exists between observational and modeling approaches to the sensitivity, and that the evidence points to a lower sensitivity.

      Her focus in this graph is that 12/20 lines overlap to some extent with the left side of the grey shaded region of CS < 1.5 C.

    2. AR5 sea level budget, 1993-2010 (all values in mm/yr):
      Thermal expansion 1.1
      Glaciers and ice caps 0.76
      Greenland ice sheet 0.33
      Antarctic ice sheet 0.27
      Land water storage 0.38
      Sum 2.8
      Observed sea level rise 3.2

      Do estimates of this sort exist for periods between 1900 and 1950?

      • Not really, or not with anything approaching similar accuracy. They’re bottom-up estimates derived from physical measurements of the relevant systems. Thermal expansion is calculated directly from ocean heat content estimates, glaciers and ice sheets from tracking mass loss. There aren’t nearly enough ocean measurements available between 1900 and 1950. There are some glacier and Greenland ice sheet mass change estimates, but they don’t appear to be at all consistent with each other, and there are no data for the Antarctic ice sheet.

        Gregory et al. 2013 do attempt such an estimate for the whole period 1900-present, or at least see how far available data takes them, but they have to use model-generated thermosteric sea level change for the thermal component and accept large uncertainties for ice mass balance. For the 1900-1950 period their bottom-up accountancy predicts SLR between about 0.4 and 1.8 mm/yr, with observations near the high end of that range. The 1900-1950 expected thermal component from modelling looks to be about 0.3 ± 0.3 mm/yr.

        • OHC changes (thermal expansion) accounts for about 1/3 of the total sea level rise. What did this balance look like circa 1930′s to 1950′s? Presumably the land water storage and glacier melt was smaller, so the thermal expansion was more dominant in this early period. Which suggests that ocean heat content was greater in this early period than in the current period, and cannot be attributed to AGW. … – J. Curry

          Thanks Paul S. I looked and looked and could not find anything.

          She makes a big deal out of the rise in the 30’s, and the mid-century cooling. Never misses a chance.

    3. So what’s the best strategy for mitigating the damage she does? It won’t do to treat her as just another scientist acting in good faith who happens to have a different view.

    4. Occluded Brain

      Tamino: Why don’t you try doing a thirty year rolling linear regression of temperature anomlies (easily done using Excel’s slope function) and plot the results of the rate of change of temperature. Interesting results.
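
      (For anyone without Excel handy, a rough Python equivalent of the suggested calculation, assuming an annual anomaly series and with made-up file and column names, might be:)

          import numpy as np
          import pandas as pd

          def rolling_trend(anomalies: pd.Series, window: int = 30) -> pd.Series:
              """OLS slope (deg C per year) over each rolling window of annual anomalies."""
              return anomalies.rolling(window).apply(
                  lambda y: np.polyfit(np.arange(len(y)), y, 1)[0], raw=True)

          # usage sketch (file and column names are hypothetical):
          # anoms = pd.read_csv("annual_anomalies.csv", index_col="year")["anomaly"]
          # rolling_trend(anoms).plot()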

    5. There is so much that seems wrong with Curry’s latest testimony, thanks for giving us more analysis of it.

      Ever since I stumbled into this fake debate about the scientific consensus regarding climate change I have been struck again and again by the poverty of the arguments put forward by those who say AGW isn’t anything to worry about. Time and time again I find myself asking “Is that all you’ve got?”

      Curry’s latest testimony is a case in point. She confronts the mountain of evidence presented by the IPCC with, among other things, the stadium wave. I would have thought that Wyatt and Curry’s hypothesized stadium wave is, at best, rather speculative at this stage, and I am surprised to see Curry include it in her testimony.

    6. deminthon

      It won’t do to treat her as just another scientist acting in good faith who happens to have a different view.

      Why not? What’s your alternative and why would it be more effective?

      • I’d also like to know the answer to those questions.

      • It’s difficult isn’t it? It’s clear enough to those who can follow the science that she is not acting in good faith, but without scientific training or proper skepticism, she looks sort of OK to the credulous and those who wish the problem would just go away if they indulge in magic thinking.

      • skeptictmac57

        My casual observation is that many people who used to believe that Curry was acting in good faith now have serious doubts, and have come to view her more as an advocate against AGW, driven by confirmation bias and in a fight with her own cognitive dissonance.
        That would be why not.

      • Well, a possible alternative is that she is not acting in good faith. And that would be a much more effective way of looking at her, because it provides a much simpler explanation of her observed behaviour.

      • I’m not sure what good faith means here. It seems to me that Curry sincerely believes in her stated position on climate change, although it’s not possible to know for sure. It also seems that her position does not fit the evidence. As a layperson it’s difficult for me to make this judgment, but it seems to me that the IPCC’s overall conclusions are backed up by a huge amount of evidence, whereas Curry’s overall conclusions do not seem to be evidence based.

        [Response: I too think she believes in her stated position. But when she makes an argument like “sea level has been rising for thousands of years,” I suspect bad faith. Either she knows that sea level rise rate for the last many thousand years is nowhere near the modern rate so this argument is nothing but misleading, in which case it’s deliberate mendacity — or she doesn’t, in which case we might call that *culpable* ignorance. She was, after all, apparently called as (and touted as) an “expert.” ]

        • The point I was trying to make wasn’t whether or not Judith Curry is acting in good faith. It’s whether countering her perceived errors is more effective by presuming good faith.

          I think I’d probably go even further; even if you are convinced she is acting in bad faith, countering her errors will be more effective if you respond as though they are made in good faith.

          The story that Judith is selling is that she is a lone honest voice standing up to a corrupt establishment. To see that establishment ganging up on her would not weaken her position, but rather strengthen it.

        • That’s a strategy I often use on news sites–I treat most comments as an occasion to proffer correct information. It may be received in that spirit, or it may not, but at least it is then out there.

        • “I’m not sure what good faith means here. It seems to me that Curry sincerely believes in her stated position on climate change, although it’s not possible to know for sure.”

          John’s point is that it is extremely difficult to explain some of her statements if she is acting in good faith. Curry isn’t merely mistaken, she cherry picks like mad, which is fundamentally bad faith behavior. And some of her mischaracterizations are either intentional or require a level of incompetence that itself must be willful.

        • I agree that it’s either ignorance or mendacity. I think the underlying cause is a huge amount of confirmation bias, as skeptictmac57 suggests, and that’s why there’s such a breathtaking and infuriating disconnect between the evidence and Curry’s “expert” testimony. Thank goodness there are real experts around to debunk her and hopefully reduce the damage she does.

      • “Why not?”

        Because it gives arbitrary falsehoods the same standing in public policy discussions as science. Thus, she is presented to Congress as an expert and so now we have the appearance of dueling experts.

        “What’s your alternative and why would it be more effective?”

        Eh? MY question was

        “So what’s the best strategy for mitigating the damage she does?”

        • OK, so to answer your question then, IMO the best strategy is exactly to treat her as just another scientist acting in good faith who happens to have a different view.

          Picking her out as a special case in some way would merely help strengthen her narrative of being a lone honest voice, and Congress will always be able to find someone with the right views to play duelling expert.

        • So would you also treat Steve Milloy, Anthony Watts, Marc Morano, and James Delingpole as people acting in good faith who happen to have a different view? How about Mark Steyn? It would be a mistake to pick Judith Curry out as a special case from among them because she, like them, is not doing science. Noting that is not to pick her out, it’s to tell the truth. The worst strategy is to act out of fear of the creation or strengthening of a narrative that has already been created … that’s what Curry’s been busy at, culminating with her testimony. This fear is quite similar to Tamsin Edwards cautioning scientists about speaking out and blaming them for the degree of AGW denial (yes, she did that), or how people blame Richard Dawkins for Creationism. The Obama administration followed the same awful strategy of trying not to look too socialist. The fact is that the narratives are put out there regardless. The best strategy is to tell the truth. Curry is not just another scientist acting in good faith who happens to have a different view, she is someone who is searching IPCC documents for things to use against it without engaging in any of the processes that scientists engage in when they are doing science. What Curry did when she spoke into that microphone isn’t science, any more than when James Inhofe speaks into his microphone.

        • P.S. You not only didn’t answer my question, you totally ignored what I wrote: “it gives arbitrary falsehoods the same standing in public policy discussions as science”. Treating Curry as just another scientist acting in good faith who happens to have a different view does nothing to mitigate the damage she does; rather it does the opposite. Fortunately, Tamino and others aren’t doing that … they are pointing out that she cherry picks, misrepresents, elides, ignores the science, ignores and contradicts her own work on sea ice, and on and on … as Mike Mann said, her testimony was “antiscience”. (She suggests that is no different than what Mark Steyn said of Mann, but if she thinks so she’s free to sue and go through discovery — the difference is that the charge against Mann is false). That’s all well and good, but it’s really not what I was asking for. The question is, what can be done to mitigate the damage? We know what Exxon-Mobil, the Koch brothers, Frank Luntz, et al. do to mitigate the threats they see to their interests. On the other side is what … blog posts?

        • deminthon,

          I think we just disagree. I think that a calm, rational scientific approach pointing out the errors in Dr Curry’s testimony as would happen if Dessler had made a mistake is the best way to counteract those errors.

          I think you are proposing a more aggressive attack centering on intent, although I’m not sure, to be honest.

          Personally, I think that plays into the hands of Curry and her political supporters in providing a pig-wrestling spectacle where everyone is dragged down to the same level.

          If, on the other hand, many scientists (John Nielsen-Gammon springs to mind as a good example) write calm scientific takedowns, ideally in the MSM, then that, in the long run, would be more effective.

          I’ll leave it to you from here – I don’t want to distract further from Tamino’s excellent analysis.

        • It has been my experience that an assumption of innocence, followed by a vicious technical slapdown is the most effective approach–plus it really pisses off the target.

        • Well of course one can “simply disagree” and then restate one’s position, ignoring the arguments and points of others and making no argument for one’s position. Your position is a common one that demonstrably leads to bad results when there are, in fact, people acting in bad faith, and for which people fail to give any supporting argument … some people seem to simply take it on faith.

        • I doubt that that’s actually an “experience” of any sort, snarkrakes, it’s just an unsupported claim. It isn’t possible to have such an “experience” of the relative effectiveness of two strategies. Perhaps by “effectiveness” you mean “a feeling of satisfaction”. However, feeling good about posting factual smackdowns in comments on blogs is not “effective” in the sense that I am concerned with … the effect on public policy.

        • I’ve now read John Nielsen-Gammon’s piece on Curry’s logic and I think it’s pretty poor. I also think it’s irrelevant because almost no one will read it … which is probably a good thing because it strongly reinforces the notion that Curry is reliable.

    7. Is it not sweet to see Tamino back in full, raging form?

    8. You realize, of course, that the acceleration after 2000 shown with your extension of Church and White’s data is perfectly consistent (i.e., the science-speak term for a slam dunk) with excess heat going into the ocean, where it expands the volume and raises sea level. Very nice.

      • Darn. Eli demonstrates that sometimes the hare does get there first.

        I was going to point out the same thing.

    9. Horatio Algeranon

      Off topic, of course, but “Rise and Fall” rang a bell

      “The Rise and Fall”
      — by Horatio Algeranon

      RomanM Pyres rise and fall
      “Stadium Waves” applaud it all
      Gladiators in the ring
      Say and do most anything

      • David B. Benson

        Encore, Horatio, encore.

        Doggerel them to death!

        • Horatio Algeranon

          You asked for it

          “Rambler Waves of Brain”
          – by Horatio Algeranon

          Denial is a stadium wave
          Passing through a crowd
          An undulating rant and rave
          Where reason ain’t allowed

          Back and forth and in and out
          Like wheat-fields in the breeze
          Rambler waves without a doubt
          Brain raves if you please

        • Horatio Algeranon

          “RomanM Pyres”
          — by Horatio Algeranon

          Lots of heat, but little light
          Is what the “skeptics” shed
          RomanM Pyres, very bright
          But lookat where that led

    10. “Why don’t you try doing a thirty year rolling linear regression of temperature anomlies”

      Because you won’t get a meaningful result for the last 15 years of data.

    11. > Occluded Brain
      > “thirty year rolling linear regression of temperature anomlies”

      Google the quoted string.

    12. This is the serial response from people without a leg to stand on, these days. Pick a cherry, then beat on it hoping to get some joy. Failed thinking at its worst.

    13. The analysis by Rahmstorf et al (2011) discussed in the post may merit a bit more description. The term “coverage bias” for instance, is left a little vague.
      The paper itself is accessible here. I think their Figure 2 is quite a useful illustration of the way that noisy tidal-gauge sea level records (in their Figure 1) result in even noisier rate-of-change plots. The rate-of-change wobbles of six tidal-gauge-based records (plus the satellite record) presented in Figure 2 mainly fail to line up and resemble a child’s crayon scribble.
      (And one of those six records is Holgate & Woodworth (2004) which also features (without creating a ‘scribble’) in Holgate (2007). Now Holgate (2007) is much loved by denialists because it shows early 20th century SLR higher than late 20th century SLR. If Holgate’s data in his Figure 3 had been plotted like Rahmstorf et al.’s Figure 2, Holgate’s results might have been questioned a bit more rigorously.)

    14. Thanks for listening. Nice job.

    15. Whatever you say about Prof Curry she has raised some excellent points which have not been rebutted. [Response: Stay tuned, there’s more to come.] The continuing divergence between observed warming and the warming predicted by models, or ensembles of models, has been noted by too many reputable statisticians and nothing I’ve seen here can explain that divergence. I’m hoping someone here can put forward something convincing.

      • nothing I’ve seen here

        This blog isn’t the sum of all human knowledge.

        There are lots of consistent explanations in the published literature: (1) the slowed upward trend in the major indices still lies within the error bars [summarized in IPCC 2013]; (2) the upward trend in the major indices is slowed by natural variation whose effect we can compute (particularly ENSO variations), and which if they weren’t happening we’d expect to be hitting the predicted trend [Foster & Rahmstorf 2011]; (3) the model-predicted trend includes areas that the major indices underreport, and it turns out those areas (primarily the Arctic) have been heating up very fast recently [Cowtan & Way 2013].

      • Hi John. Suppose I build a model correct in all major elements of physics affecting climate but incorrect in that it underestimates real-world energy transference to the deep ocean. Such a model would over-estimate transient climate response at the earth’s surface (while still providing an accurate estimate of equilibrium climate sensitivity) and its estimate of the time required to detect significant surface temperature trends would prove too short. Motivated reasoners would tout this as an epic fail despite the fact that ongoing investigation (the nature of science) could dramatically improve the model’s accuracy.

        Now, there is a wide waste of space between plausibility and truth. Further, given the points raised by numerobis there may be nothing to explain at all. The supposed deviation could be entirely due to short-term quantifiable natural variability (http://contextearth.com/2014/01/22/projection-training-intervals-for-csalt-model/) and yet Dr Curry writes Jan 20, 2014:

        “If the 20 year threshold is reached for the pause, this will lead inescapably to the conclusion that the climate model sensitivity to CO2 is too large.”

        Inescapably!? A couple of ill-timed volcanic eruptions could lower temperatures for the next five years or more on their own! Dr Curry’s blog entries are replete with such nonsense.

        Long-term projections from short-term results are unlikely to yield robust conclusions. Consider https://tamino.wordpress.com/2013/10/24/fire-down-below/.

    16. Chris O'Neill

      nothing I’ve seen here can explain that divergence

      Is there any need to explain something (the divergence) that is statistically insignificant?

    17. Every time I see someone talking about divergence of models, I keep wondering “how close do you expect them to be”. The 2013 central estimate for the CMIP5 model set is 0.0125 C higher than the best estimate (CW2013) of the GMST. It’s 0.2 lower than the 1998 super El Nino. So what’s wrong with the models? Are you confusing rhetoric with science?

      Besides, surely you’re aware that models don’t predict ENSO. They treat ENSO as something that averages out over the long haul. So IF you suddenly demand that models should have predicted the number and order of El Ninos over the past 10 years, you’ve unilaterally changed the rules of the game. When you throw Kosaka and Xie into the mix it’s really hard to come to any other conclusion than what we had before: a natural variation in ENSO is masking the underlying trend.

      John, does your knowledge of the “divergence” extend any further than what you think Professor Curry said, or have you looked at any evidence yourself? On this board, if you haven’t looked at the evidence yourself, formed your own conclusions, and developed the ability to support them … why should we care what you think?

      Me, I’m looking forward to Tamino’s analysis.

    18. oops that’s 0.02 lower than 1998!

    19. Chris nailed the main point. You need 30 years to pick a climate trend out of the data. That people keep focusing on the past 10-15 years only proves that humans–and AGW deniers in particular–have the attention span of a rhesus monkey. John B., you need to google “sample size.”

    20. Is Curry a practising professor? If so, how do students at her university deal with her?

      • Yes, at Georgia Tech. I have no idea what her interaction with students is like, though presumably at the undergrad level at least her, er, particular take on things wouldn’t come into the curriculum much.

    21. I did look to see who else this US Senate Committee felt was worth asking to give evidence. As a Brit, I find they are mainly names I don’t know. Peeking at the evidence given by the “expert” following Curry, it kicks off just as unreliably. Kathleen Hartnett White, Director of the Armstrong Center for Energy & the Environment, tells the committee that the US is doing loads better than the EU in reducing CO2 emissions. In 2012 the US managed a 3.7% reduction while the EU with all its carbon trading & legislation only managed 1.8%.
      Gee! That’s brilliant Kathleeen! That kinda makes up for the poor showin the US gave in 2011. D’ ya reckon, yoo bein’ an expurt an all, that by 2032 the US’ll have made up the rest o’ the 25% or so its fallen behind in reducin seeyootoo since 1990?

      So who decides which experts are called? Are Curry & White giving ‘balance’? Or is it political parties or individual politicians of different stripe calling their own expert?

      [Response: It’s political parties and individual politicians calling their own “experts” to give the opinion they want.]

      • The argument of John Nielsen-Gammon (for 100% of 1950-2010 GW is AGW) is surely more pedantry than science. I don’t think it is healthy going down such a route. If you start discussing the Wyatt Stadium Wave Oscillation in such a way you are giving it far more credence than it deserves.

        I note from the links to the blog-mom’s site provided by John Nielsen-Gammon that there was verbal testimony which has Curry arguing not for a “hiatus” lasting 20+ years, but 30+ years, lasting into the 2030’s, having already been “observed over the past 15+ years.”
        This extended “hiatus” appears to be the basis for her verbal assertion that – “Attempts to modify the climate through reducing CO2 emissions may turn out to be futile.” Indeed, all resistance is futile, Mr Bond, because “Good judgment requires recognizing that climate change is characterized by conditions of deep uncertainty. Robust policy options that can be justified by associated policy reasons whether or not anthropogenic climate change is dangerous avoids the hubris of pretending to know what will happen with the 21st century climate.”

        Curry’s testimony is not at all well worded. It’s as though it was written in a hurry. But Curry tells her blogamites in the days before the hearing that she was “very bizzy preparing.” (bizzy = busy?). So perhaps the problem was not her message but in framing her message with enough obfuscation.
        But I think enough of the message pokes out to allow her application of “anti-science” to be publicly documented and rebutted.
        And this is what Curry wants done, because Michael Mann has gone and tweeted to the effect that “science vs anti-science = Dessler vs Curry”. Curry’s resulting challenge appears to be demanding of Mike Mann that he “document and rebut” all ‘factual inaccuracies’ and ‘unsupported conclusions’ within her testimony. (Curry uses the pronoun “any” in the sense of “every.”) Now that is surely asking for an unreasonably large amount of ‘documenting’ and ‘rebuttal’.

        • Horatio Algeranon

          “Thirty Year Hiatus”
          — by Horatio Algeranon

          “A thirty year hiatus
          Is what we’re looking at
          And thirty more hyenas
          To laugh about my stat”

        • Horatio Algeranon

          “Hangover Follows Hiatus”
          –by Horatio Algeranon

          Hangover follows hiatus
          Especially with the warming
          Hiatus might delay this
          But won’t prevent the storming

    22. > The argument of John Nielsen-Gammon
      > (for 100% of 1950-2010 GW is AGW)
      > is surely more pedantry than science.

      Say what? I thought it was at least 100 percent, probably more than that — anthropogenic warming added to the expected normal slow cooling trend that typically occurs after the brief rapid warm spike ending an ice age.

    23. > [… political parties and individual politicians
      > calling their own “experts” to give the opinion they want.]

      And more of that:

      http://www.theguardian.com/environment/blog/2014/jan/27/ipcc-hearing-uk-us-climate-change

    24. Michael Sweet

      Al Rodgers: John Nielsen-Gammon quotes the IPCC report as saying:

      “The best estimate of the human-induced contribution to warming is similar to the observed warming over this period.”

      that sounds like 100% to me. Can you support your claim of pedantry beyond your personal feelings? Solar is down.

      • Michael Sweet,
        Perhaps “pedantry” is not the right word to use and to be clear – my complaint is how he argues not what he argues for. My complaint is that his challenge to Curry is based on the fine detail of Curry’s own thesis rather than addressing it properly in the round. (Is there a word for that?)

        The crux of John Nielsen-Gammon’s position is:-
        “But here’s the thing. If, over 60 years, natural variability averages out to zero, it doesn’t matter how strong natural variability is compared to man-made climate change, what’s left over is the man-made part”
        This argument has not proved a killer blow – the blog-mom is still at it although that was to be expected.
        But my cause for complaint is what John Nielsen-Gammon is actually using here within his argument. In arguing that Curry is wrong by throwing the 60-year natural variability of Wyatt’s Unified Wave Theory into the pot (as he does), he tacitly acknowledges that Wyatt’s Unified Wave Theory has merit. And with the global warming being discussed in terms of 1950-2010, suddenly the words of John Nielsen-Gammon are sliding off to support (amongst others) one of the masters of ‘error-packing’, Syun-Ichi Akasofu.

        John Nielsen-Gammon is correct that Curry’s testimony rests on the Wyatt Unified Wave Theory but he fails to make clear that such a theory is a bonkers speculation that’s as crazy as its acronym suggests. And based solely on that theory, Curry’s message is plain (although her own wording clouds that message):-
        ‘Do nothing specifically to mitigate rising GHG forcings. Trust me. I am a climatologist (sic) and I see no problem with a man-made climate forcing of +3.4 Wm^-2 which is rising at +0.45 Wm^-2/decade.’

        • Horatio Algeranon

          “But here’s the thing. If, over 60 years, natural variability averages out to zero, it doesn’t matter how strong natural variability is compared to man-made climate change, what’s left over is the man-made part”

          Whether one acknowledges (or even realizes) it or not, once one accepts the idea of a “60 year cycle”, one also tacitly accepts that much (if not all) of the observed warming over 30 years might have been due to the positive sloping part of such a cycle.

          Of course, natural variability averages out over a long enough period, but that really misses the point. If the purported “60 year cycle” had a large enough magnitude, there would be no need to even invoke GHGs as the cause of the observed temperature increase.

          Unless I am mistaken, that is Curry’s argument in a nutshell, but the whole thing is on ground every bit as shaky as a passing “stadium wave”, which actually seems to be a very apt description of Curry’s theory.

          A “stadium wave” is a completely artificial, orchestrated phenomenon that has no underlying natural cause (a large group of people acting in perfect concert to make it happen is hardly natural), to say nothing of a plausible mechanism in the case of the climate.

          As Feynman once noted, science is about what is probable, not about what is possible.

          Is a climate stadium wave possible? Perhaps. So are aliens from the Andromeda Galaxy manipulating climate from their space ship above, but that does not mean we should base climate policy on their existence.

        • It’s a common rhetorical device to give the opponent all their axioms and then proceed to destroy their conclusion anyway, showing that the opponent is not making a rational argument. That’s what I read Neilsen-Gammon as doing. Then, with the listener suitably impressed that the opponent is spouting nonsense, you can go after the axioms themselves.

        • The first version of this that showed up on Real Climate was the post by Kyle Swanson of Tsonis & Swanson. You can call it bonkers if you want, but they called the pause, and Tsonis and Curry are claiming it is going to last until 2030.

          Peel away the layers of their mechanism, and I think what you’ll find is ENSO.

          There is no claim that natural variation was the entire cause of increase.

        • JCH.
          Assuming I have the right paper, yes Tsonis of Swanson & Tsonis (2009) is signed up to Wyatt’s Stadium Wave Theory. Indeed, he was one of those who encouraged Wyatt to develop it.
          I am not up to speed with the 2009 paper at present but I do not see the coverage by Swanson @ RealClimate equating to WSWT. Swanson & Tsonis is described there as no more than a hypothesis that “episodes” have occurred. (And Swanson didn’t call the “pause”. That was already ‘called’.)

          It is interesting comparing Swanson & Curry. One proposes a “contentious … hypothesis” while the other insists on world policy accounting for a “bonkers” ‘prediction’. What is ‘contentious’ and what is ‘bonkers’ is down to presentation.
          Swanson concludes “Nature (with hopefully some constructive input from humans) will decide the global warming question based upon climate sensitivity, net radiative forcing, and oceanic storage of heat, not on the type of multi-decadal time scale variability we are discussing here. However, this apparent impulsive behavior [ie their “episodes”] explicitly highlights the fact that humanity is poking a complex, nonlinear system with GHG forcing – and that there are no guarantees to how the climate may respond.”
          And Curry concludes “Yes. Keep on poking it. No regrets.”

        • JCH,
          I did get round to reading Swanson & Tsonis (2009) (and had a quick squint at some likely references) but found it a poor paper given its controversial nature. (If you are going out on a limb, you need to be very clear and very accurate. This paper is neither.) There are too many questions about their work that they do not discuss – so much so that it leaves me to wonder if they have even asked the questions for themselves, let alone achieve answers that do not collapse their hypothesis.

          I suppose the big knee-to-the-groin is what you find when you ‘Peel away … ENSO.’
          It is generally accepted that the major wobbles on the global temperature record are due to ENSO and that volcanoes also leave a mark, as do the solar cycles if you look for them. Foster & Rahmstorf (2011) show that when ENSOVol&Sol are peeled away for 1979-2010, a linear increase in temperature is revealed for that period. My understanding is that the linear trend continues (see for instance the second graph here for 1979-2012).
          What I find missing is any attempt (for its own sake) to peel off ENSOVol&Sol for the years before 1979, before the satellite data. With the caveat that the result will be less accurate (using SSN for TSI etc & with the temperature records becoming less ‘global’), it should establish something of the global temperature trend. The exercise has been done by Zhou & Tung (2013), which I am reluctant to reference (the work they presented looks a tad dodgy to me), but their graph (Fig 1a) of temperature minus ENSOVol&Sol 1865-2000 is presented as the blue trace in the main graph here and shows wobbles with peaks and troughs at 1910, 1940 & 1970. This sort of fits reasonably with the equivalent timings of ‘synchronisation & rising coupling’ within Swanson & Tsonis.
          But what of the S&RC episode timed for 2001-2? There is an absence of wobble at this the most controversial episode. Within the global temperature record, the “hiatus” can be accounted for by ENSOVol&Sol. It is not a repeat of 1910, 1940 & 1970 because the temperature wobble is absent and remains so a decade later!
          So, does that not conclusively pull the rug out from under Swanson & Tsonis (2009)?
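
          (For concreteness, the “peeling away” in Foster & Rahmstorf is essentially a multiple regression: fit temperature against a trend plus ENSO, volcanic and solar indices, then subtract the fitted exogenous parts. A bare sketch, with the index arrays left as placeholders for the real MEI, aerosol optical depth and TSI series:)

              import numpy as np

              def remove_exogenous(temp, years, enso, volc, solar):
                  """Regress temperature on a trend plus ENSO/volcanic/solar indices,
                  then subtract the fitted exogenous parts (all inputs are 1-D arrays)."""
                  temp = np.asarray(temp, dtype=float)
                  years = np.asarray(years, dtype=float)
                  X = np.column_stack([np.ones_like(years), years - years.mean(),
                                       np.asarray(enso, float),
                                       np.asarray(volc, float),
                                       np.asarray(solar, float)])
                  coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
                  exogenous = X[:, 2:] @ coef[2:]      # fitted ENSO + volcanic + solar contribution
                  return temp - exogenous              # the "adjusted" temperature series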

    25. Was this mentioned as a reference on uncertainty?

      http://www.scitechnol.com/2327-4581/2327-4581-1-107.php#

      Muller RA, Wurtele J, Rohde R, Jacobsen R, Perlmutter S, et al. (2013) Earth Atmospheric Land Surface Temperature and Station Quality in the Contiguous United States. Geoinfor Geostat: An Overview 1:3. doi:10.4172/2327-4581.1000107

      “et al.” is: Arthur Rosenfeld, Judith Curry, Donald Groom, Charlotte Wickham, Steven Mosher

      The abstract ends:

      “… The absence of a statistically significant difference indicates that these networks of stations can reliably discern temperature trends even when individual stations have nominally poor quality rankings. This result suggests that the estimates of systematic uncertainty were overly “conservative” and that changes in temperature can be deduced even with poorly rated sites.”