Open Thread

For stuff not on-topic in other threads.

147 responses to “Open Thread”

  1. Thank you for opening this thread!

    I have a couple of questions prompted by two posts that Coby Beck has written in response to a denier’s analysis of Canadian temperatures. The first is and the second is

    The denier, Richard, chose stations with long continuous records and looked at the annual Tmax, that is, the highest temperature per year. He plotted these with a moving 10-year average, saw what looked like a downward trend, and declared victory over AGW.

    This reminded me of the post here on the Russian heat wave and extreme value analysis. As an amateur, I don’t think the annual Tmax is quite extreme enough for that, but it also wouldn’t be normally distributed. So, what would one do to test whether the annual Tmax really was declining or whether it was just noise (for these stations)?

    On these same data, Richard claimed that there was a natural cycle or pendulum effect because one year’s Tmax predicted the next. An example of his evidence was “At 27C, it’s only a 5% chance the next year will be cooler, but a 70% chance it will be hotter”. My thought is that to do a real significance test, you could do a chi-square goodness-of-fit test: the 5% would be the observed outcomes, and the expected would come from the number of annual Tmax values below 27C. Does that make sense? Is there a cleaner or more powerful approach to see whether successive annual Tmax values are related?
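A permutation test is one concrete way to check for such a year-to-year relationship without choosing a threshold like 27C and binning. A minimal sketch on synthetic data (the values are made up, not from any station):

```python
import random

random.seed(42)

def lag1_corr(x):
    """Lag-1 autocorrelation estimate for a sequence."""
    n = len(x)
    mx = sum(x) / n
    num = sum((x[i] - mx) * (x[i + 1] - mx) for i in range(n - 1))
    den = sum((v - mx) ** 2 for v in x)
    return num / den

def lag1_permutation_test(series, n_perm=2000):
    """p-value for year-to-year dependence: how often does a random
    shuffle of the series show as strong a lag-1 correlation as the
    one actually observed?"""
    observed = lag1_corr(series)
    shuffled = list(series)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(shuffled)
        if abs(lag1_corr(shuffled)) >= abs(observed):
            hits += 1
    return observed, hits / n_perm

# 100 years of synthetic annual Tmax: independent noise around 30 C,
# so no genuine "pendulum effect" exists to be found.
tmax = [30 + random.gauss(0, 2) for _ in range(100)]
r, p = lag1_permutation_test(tmax)
```

A chi-square goodness-of-fit test on threshold counts would also work, but the shuffle-based test uses all the data and makes no distributional assumption about the Tmax values.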

    I should note that Chris S. has done some work on this already, but not with the clarity and thoroughness of a post here and he isn’t around to answer my questions. His site on this:

    • The volatility of Tmax is the problem.
      I did a simple exercise in Excel: take the Oklahoma City normal Tmax for the 1971-2000 period (the current normals). I then created a 100-year series of artificial data with a trend on each day of 1.5 F/century (F or C doesn’t matter) and a standard deviation of 7.5 degrees. The first time I ran it, the trend in the annual mean temperature of that series was 1.59 with 95% bounds of 1.31 and 1.87; the trend in the annual max temperature was 1.22 with 95% bounds of -1.01 and 3.55. The highest temperature in the series actually occurred in the 52nd year, and the highest 11-year average was years 43-53.

      Taking just the maxes can lead to all kinds of interesting “patterns” even in data where you put a signal into it. That’s been one of the nice things about Tamino’s work: look at the impact of an analysis procedure on data with a known (and controllable) signal.

      • Thank you, I think that’s a great idea.

        Unfortunately, I don’t know how to create the artificial data in Excel. Do you have a guide that details how?

      • I use the RC format in Excel, but the basic equation looks like:


        The R1C1 refers to a cell with the standard deviation of temperature in it. RC1 is the first column and contains the normal temperature for that day. The 0.015 is the trend in degrees/year. COLUMN() refers to the column number, which can be interpreted as the year, so that in the second column, you get COLUMN()-2=0 for the first year and in the 101st column, you get COLUMN()-2=99, the 100th year. Vary the 0.015 (I should have put it in as a constant in a cell that’s referred to), the standard deviation, etc.

      • Taking just the maxes makes it an extreme value problem. Blueshift, you can make “artificial data” in Excel using the random number generators (not perfect, but they’ll do for most purposes). You can impose Gaussian noise on a linear trend and look at 10000 realizations of 100 points each pretty straightforwardly.
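The same exercise is easy to replicate outside Excel. The sketch below uses the 1.5-degrees-per-century trend and 7.5-degree daily standard deviation quoted above; the flat 70-degree “normal” is a simplifying assumption in place of the Oklahoma City daily normals:

```python
import random

random.seed(0)

TREND = 0.015   # degrees per year = 1.5 per century, as in the exercise above
SIGMA = 7.5     # daily standard deviation, as quoted above
YEARS = 100
DAYS = 365

def ols_slope(y):
    """Least-squares slope of y against 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((i - xbar) * (y[i] - ybar) for i in range(n))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def one_realization():
    """Trends (per year) of the annual mean and the annual max for one
    synthetic century: flat normal + linear trend + Gaussian daily noise."""
    means, maxes = [], []
    for yr in range(YEARS):
        days = [70 + TREND * yr + random.gauss(0, SIGMA) for _ in range(DAYS)]
        means.append(sum(days) / DAYS)
        maxes.append(max(days))
    return ols_slope(means), ols_slope(maxes)

mean_slope, max_slope = one_realization()
```

Repeat one_realization() a few thousand times and the annual-mean trend clusters tightly around 0.015 per year, while the annual-max trend scatters over a range several times wider, which is exactly the “volatility of Tmax” described above.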

  2. blueshift: I almost missed the key word. *Canadian* temperatures.

    With such a small part of the world, and only a small selection of the available weather stations even then… why should anyone expect a result that in any way relates to global trends?

    Okay, I held my nose and glanced at the post. Not even Canada, just Southern Ontario. And most of the graphs aren’t Southern Ontario, but one single lucky station.

    Why waste time? There must be thousands of brainless bloggers doing the same exercise after reading something ill-advised. The 10% who find some cooling somewhere shout about it, and we should really point out their idiocy briefly and pass on by. The work people have put into this is already way more than such drivel is worth.

  3. I have ventured into Curryland and found a gem. A fellow by the name of Vaughan Pratt, Professor of Computer Science at Stanford, provides not only comic relief but some very thought-provoking analysis while duking it out with the flying monkeys.
    Check out his web site too. It’s full of gems.
    There might be some material in this for Tamino to expand on.

  4. blueshift,

    I’m supposing that his Tmax values were simply the maximum daily Tmax value for each year in the dataset, and not the highest daily/monthly mean?

    If so, there are so many problems with this it’s difficult to know where to start.

    For a start, Tmax can decrease while the mean is increasing, if for instance the variance is decreasing.

    Secondly, we know that nights and winters are warming faster than days and summers, and an annual Tmax is almost invariably going to be measuring a summer’s day, so it’s probably one of the least sensitive metrics.

    Third, you have an issue with frequency and persistence. With this metric, a year with a single one-off day where the temperature soars to 40C counts the same as a year with a month-long heatwave in which 50% of the days reach 40C.

    In short, there’s a very good reason annual averages are used, and an annual Tmax is a very poor way to get a picture of what is happening to the climate. That would be enough of an argument for any serious investigator, but then again, you are dealing with a d-word.

  5. Pete Dunkelberg

    A fantastic gateway to IPCC AR4 ! I’m just spreading the word.

  6. Many thanks for providing the data sets from GISS etc in easily accessible format.

    The correspondence between the monthly temperature data sets of GISS, CRU, NCDC, RSS and UAH is indeed striking, but much less so when you calculate the respective average annual anomalies of zGISS etc.

    For example, here is the linear trend for zGISS’ annual average:

    y = 0.0168x – 0.1853
    R² = 0.7085

    And here that for zCRU:

    y = 0.004x – 0.1379
    R² = 0.4552

    It is much flatter than zGISS.

    The respective #5 polynomials are also quite different, with that for GISS showing a dip by 2009, while that for zCRU has a definite downturn and a very high R2 (0.9679):

    zGISS annual poly: y = 1E-07x^5 – 9E-06x^4 + 0.0003x^3 – 0.0031x^2 + 0.019x – 0.1315
    R² = 0.73
    zCRU annual poly: y = 3E-07x^5 – 3E-05x^4 + 0.001x^3 – 0.0126x^2 + 0.0556x – 0.152
    R² = 0.9679

    Contrary to your implication, the 2 satellite series, unlike zGISS and zCRU, both have a pronounced overall negative linear trend, with the downturn beginning as early as 1994-96:

    zUAH: y = -0.0002x + 0.0382
    R² = 0.0007
    zRSS: y = -0.0004x – 0.0068
    R² = 0.0053

    The low linear R2s are to be expected given the shape of the annual anomalies curves (up to 1994, down since), but the #5 polynomials both have very high R2:

    zUAH: y = -1E-07x^5 + 9E-06x^4 – 0.0003x^3 + 0.0019x^2 + 0.0166x – 0.0989
    R² = 0.9793

    zRSS: y = -8E-08x^5 + 5E-06x^4 – 7E-05x^3 – 0.0009x^2 + 0.0293x – 0.1431
    R² = 0.9363

    The explanation is that unlike the linear trends the polynomials almost perfectly capture the ENSO effects which swamp the linear growth in [CO2].

    It is certainly not the case that the satellite data actually confirm the GISS, CRU, and NCDC data, and in all 5 sets, whether annual or monthly, the polynomials provide better fits than the linear.

    [Response: Why not just fit a polynomial of degree 381 to the monthly data? Then you could have R^2=1 — which by your reasoning would be the best fit of all.

    Your analysis is “not even wrong.”]
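The point behind that reply is easy to demonstrate with simulated data (a minimal sketch; the numbers are arbitrary): fit polynomials of increasing degree to pure noise and R² climbs every time, even though there is no signal at all.

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.linspace(0.0, 1.0, 50)
y = rng.normal(size=50)        # pure noise: no signal whatsoever

def r_squared(degree):
    """R^2 of a least-squares polynomial fit of the given degree."""
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

r2_by_degree = [r_squared(d) for d in range(1, 11)]
```

Because the fits are nested, R² can never decrease as terms are added, and at degree n−1 it reaches exactly 1. So a higher R² for a quintic over a straight line says nothing by itself.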

    • Not only are the polynomial fits ‘not even wrong’, the linear fits appear to be just plain wrong. The one for zGISS is about right, but the other three are just nonsense. All four annual trends are comparable in magnitude, and none of them are negative.

  7. Hello again, and thanks for the responses. I should have been clearer earlier: Richard’s analysis is painful to follow for a lot of reasons and probably not worth the effort. I know that even if his results were statistically significant for these stations, it would at most be mildly interesting, not the threat to AGW he imagines it to be.

    I am, however, curious to test my own understanding of statistics and temperature analysis, which I freely admit is rudimentary.

    FJM, yes, you are right: it is the maximum daily Tmax value for each year. That made me think of the following post by Tamino regarding the Russian heat wave

  8. Tim Curtin,
    1) Short-term trends are meaningless.
    2) During the period in which you claim the satellite data show a negative trend, we lost trillions of tons of ice. Gee, which to believe: physics, or a dataset stitched together across 5 or more different satellites and already shown to be much less reliable than the land-based temperature data?
    3) Try fitting your polynomials using AIC as a fitting criterion rather than R^2. What matters is predictive power, not minimum error on existing data.
    4) Crack a physics textbook, ferchrissake!
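A minimal sketch of point 3 on synthetic data (the series and numbers are illustrative): fit a line and a quintic to a noisy linear trend, then compare AIC rather than R².

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 31-point "annual anomaly" series: a true linear trend plus noise.
n = 31
x = np.linspace(0.0, 1.0, n)
y = 0.8 * x + rng.normal(scale=0.3, size=n)

def aic(degree):
    """AIC for a least-squares polynomial fit assuming Gaussian errors."""
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    rss = float(np.sum(resid ** 2))
    k = degree + 2            # fitted coefficients plus the noise variance
    return n * np.log(rss / n) + 2 * k

aic_linear, aic_quintic = aic(1), aic(5)
```

The quintic always has the smaller residual sum of squares, but AIC charges 2 per extra parameter, so the extra wiggles have to earn their keep; on data like these the linear model typically wins. (Caveat noted later in the thread: with strongly autocorrelated residuals, plain AIC needs further care.)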

  9. Vaughan is on the right side of this issue, but he and I don’t get along. He accused me of dishonesty because I wouldn’t alter my web page on Miskolczi to say that Earth’s surface was the right reference level for testing the potential energy of the atmosphere, when I was talking about the virial theorem, for which the atmosphere and planet would have to be in orbit around one another. After that I filtered his hostile butt. I don’t need computer scientists telling me I got my physics wrong, and that I’m dishonest if I don’t post the wrong information.

  10. I am really disappointed by the above response.

    First, because reductio ad absurdum is seldom a scientific response.

    Secondly, Excel offers only #6 polynomials, not #381.

    Why, if #5 offers better fits (i.e. higher R2s) than linear is that “not even wrong”? I really would like to know.

    Surely you would agree that annual temperatures are highly variable, non-linear, and clearly synchronous with ENSO (hence the big La Nina freeze at present across both Northern Europe and much of northern America, not to mention SE Australia, where in high midsummer we are now having unprecedented snowfalls of 30 cm in our so-called “Alps”, not much above 2,000 metres)?

    What happened to your Open Mind?

    [Response: Keeping an open mind doesn’t mean letting your brain fall out.

    It’s IMPOSSIBLE for a linear fit to have a greater R^2 value than a higher-order polynomial, even if the data are a noise-free linear trend. But in most cases (including this one) the increase in R^2 comes from fitting the noise, not the signal; for understanding the signal it’s a meaningless exercise in curve-fitting.

    Clearly you’re ignorant of this (and other) facts, yet you insisted that you’re right and I’m wrong. How disappointing is that?]

  11. Ray Ladbury: I merely used the database kindly provided by our host, consisting of 3 surface-based series and 2 (not 5 as you claim) satellites, for the period since 1978, quite long enough for most climatologists.

    You need to use some shoe leather. Go to tamino’s data sets and prove me wrong. ALL the results I cite use ONLY those data.

  12. Tamino
    There’s no point in debating anything with Tim Curtin… See Deltoid for some background: Tim Lambert just gives Curtin his own thread and lets people have fun with his ‘theories’.

  13. Tim Curtin, it is not shoe leather that will help you learn climate science. Use your brain instead. You utterly ignored the criticisms I cited. Short-term trends are not climate; 30 years is the minimum needed to establish a climate trend. You can also find on Tamino’s site lots of data on melting ice. Try reconciling that with your contention that temperature is not rising.

    Satellites are tricky beasts. Datasets that bridge multiple platforms and instruments are always hard to interpret. All datasets show warming, and trends are consistent within errors. Go learn some science and then come back.

  14. Tim, You are now talking model comparison. READ. LEARN.

  15. And speaking of AIC, I posted this before. I’m wondering if anyone else has ever used Kullback-Liebler related metrics as a “goodness of fit” when fitting distributions. They seem to do a much better job of fitting the tails of a distribution than do metrics like least-squares. It stands to reason that they’d work as the K-L distance gives a measure of the degree to which two distributions disagree. To date, I’ve obtained best results using the sum of the squared differential K-L distance:

    Sigma: p(x)^2*[ln(p(x))-ln(q(x))],
    where p(x) is the empirical distribution and q(x) is the form you are fitting to p(x). Anybody else ever look at this? It seems it could be very useful when fitting distributions where the tails have large consequences.

    [Response: Fascinating.

    But: why is the initial p(x) squared? The K-L divergence doesn’t have this factor. (And just to pick some real nits, it’s “Leibler,” not “Liebler.”)]

    • Tamino,
      Oops! The logarithmic difference is also squared.

      p(x)^2* [ln(p(x))-ln(q(x))]^2

      So really, what I’ve been using is the square of the differential of the K-L distance. I played around with various related metrics. If you use the straight K-L, of course, positive and negative differences can cancel, so it doesn’t quite work. Squaring the differential inside the integral/sum gives a better overall result. (BTW, thanks for the correction.)
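For concreteness, here is one way the two quantities under discussion might be coded for discrete (binned) distributions; the toy numbers are hypothetical:

```python
import math

def kl_divergence(p, q):
    """Standard discrete Kullback-Leibler divergence D(p || q)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def squared_differential_kl(p, q):
    """The metric described above: sum of p(x)^2 * [ln p(x) - ln q(x)]^2.
    Squaring the log-difference keeps terms from cancelling in sign."""
    return sum((pi ** 2) * (math.log(pi) - math.log(qi)) ** 2
               for pi, qi in zip(p, q) if pi > 0 and qi > 0)

# Toy distributions on 4 bins, just to exercise the code.
p = [0.1, 0.4, 0.4, 0.1]
q = [0.25, 0.25, 0.25, 0.25]

d_kl = kl_divergence(p, q)
d_sq = squared_differential_kl(p, q)
```

Both quantities are zero when p and q coincide; unlike plain K-L, the squared form penalizes deviations in either direction, which is the point about cancellation made above.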

  16. TRC Curtin: if Excel is the only tool you have, then you may want to just quit before you embarrass yourself further.

    It’s not that Excel doesn’t do an excellent job of doing what it’s designed for. It just wasn’t designed for this.

    Still, you can get more out of Excel if you use it correctly.

    Take the very first numbers you calculated, the trends for GISS and CRU. You got them wrong. I don’t know how, since it’s so easy, but you did. I imagine you made an error trying to work out annual averages. I used the data as provided and got 0.17 degrees per decade for GISS and 0.16 for CRU. A tiny difference.

    If you messed with the data and managed to get a huge difference, it is because you made a mistake. When I performed this little exercise, I got EXACTLY THE SAME TRENDS as for the monthly data. Big surprise there. Not.

    Tamino is correct that you’re not even wrong. But some of what you say is built from smaller parts, which by themselves are just wrong. If you are interested, I’m sure people will help you identify the errors and understand the problem.

  17. TRC Curtin: Why, if #5 offers better fits (i.e. higher R2s) than linear is that “not even wrong”? I really would like to know.

    BPL: Check the t-statistics on your coefficients. Do partial-F tests.
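A sketch of the partial-F idea on synthetic data (the series and numbers are illustrative, not any of the datasets above): test whether the four extra terms of a quintic significantly reduce the residual sum of squares relative to a straight line.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 31
x = np.linspace(0.0, 1.0, n)
y = 0.8 * x + rng.normal(scale=0.3, size=n)   # noisy linear trend

def rss(degree):
    """Residual sum of squares for a polynomial fit of the given degree."""
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    return float(np.sum(resid ** 2))

rss_line, rss_quintic = rss(1), rss(5)
extra_params = 4                 # quintic has 6 coefficients, line has 2
dof_full = n - 6                 # residual degrees of freedom, full model
f_stat = ((rss_line - rss_quintic) / extra_params) / (rss_quintic / dof_full)
```

Compare f_stat with the F(4, 25) critical value at the 5% level (roughly 2.76); if it falls below, the quintic’s extra wiggles are statistically indistinguishable from noise despite its higher R².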

  18. Ray: I don’t think a textbook will help TRC Curtin. He can’t manage reading comprehension. Tim, Ray said 5 or more satellites. Not data series. Satellites. You know what those are, right?

  19. Why have the posts on this blog prior to March this year disappeared? I think it’s a pity as useful stuff has been lost – I’ve searched for previous posts I remembered and been surprised not to find anything. If this is to continue then it makes it a lot less comfortable to link here.

    • Ed, sorry. It happened in the wake of an altercation, focused on me, involving my little community’s commenting contrarian (who’s yet another guy with airport ties… odd, that).
      I’d say more, but I have applied duct tape to mouth and fingers in order not to.

  20. Ah, sorry to hear that. I’m glad it doesn’t seem like it’ll be an ongoing thing. If there’s any chance that posts not involved in the altercation can pop back up then that would be most welcome.

  21. Request – could anyone(s) view this climate collage (link) I put together, and critique it?
    (the goal was to communicate the essentials w/o driving viewers away or assaulting their vision with too many things at once; I want to laminate & post it where it will be seen.)

    • Re: Anna Haynes

      While it’s probably not necessary to label the Yulsman graphic with the word “Anomaly”, it probably does need to have its baseline (1950-1980) associated with it.

      If you’re looking for an impactful graphic to add, this one is tough to top.

      The Yooper

    • Hi, Anna. I like your collage, and think it’s a great idea. I’m thinking that the “real world” needs more of our input, not just the Intertubes, and you’re doing something that accomplishes that.

      That said, I do have some thoughts/reactions. Would you like me to contact you off the forum? If so, I think that you can just click on my name to go to my website, which in turn has an email button.

      If you’d rather I just comment directly here, let me know.

    • Kevin’s comments helped a lot, and I’ve incorporated the “CO2 over 650k years” graph, in the latest version (link).
      (the added narrative-such-as-it-is takes away from the visual impact of the intermediate version, though… though hopefully it will reduce “what am i looking at” brainstrain in the newbie viewer.)

      Feedback is still welcome, but most welcome if it’s “go work on something more important now.”

  22. Thanks to all for some rather amazing comments on my posts here. Ignoring the ad homs from people like Nathan, and after correcting serious errors in my annual averages, for which I apologise, here are my preliminary responses to some of my critics:
    Barton Paul Levenson said “Check the t-statistics on your coefficients. Do partial-F tests”. Excel does the least squares fit for the linear trend temperatures against time (months or years). OK, instead, regressing Tamino’s zGISS monthly anomalies on months we get (R2= 0.52; F=417.72):
    Coefficients Standard Error t Stat P-value
    Intercept -33.0601 1.6340 -20.2330 0.0000
    Months 0.0167 0.0008 20.4383 0.0000
    0.20088169 °C per annum
    2.00881685 °C per decade
    20.0881685 °C per century

    That seems, to quote Tamino, “not even to be wrong”, but insane!
    However regressing the annual average anomaly data for zGISS, we get (R2=0.711, F=71.48):

    Coefficients Standard Error t Stat P-value
    Intercept -32.915 3.903 -8.433 0.000
    Years 0.017 0.002 8.455 0.000
    0.1655 °C per decade
    1.6548 °C per century
    That seems more reasonable, pace the IPCC’s AR4 and Didactylos.
    And for annual zRSS (R2=0.503, F = 290.37)
    Coefficients Standard Error t Stat P-value
    Intercept -28.9723754 5.36104564 -5.40424 8.25E-06
    Years 0.01457157 0.002688562 5.41984 7.9E-06
    0.14571573 °C per decade
    1.45715726 °C per century
    The percentage difference between the forecasts for 2110 of GISS and RSS is actually quite large, at 13.6%.

    Turning to the polynomial (#5) best fits for the annual zGISS, CRU, RSS, UAH trends from 1979 to 2009, we find that all four data sets agree that there has been a downturn since 2006. I apologise again for my errors in computing annual averages of Tamino’s anomalies, but will he ever admit that there does appear to be a downtrend in those anomalies once you get away from the simplistic linear trends that he and all at RC and IPCC love to death?

    As I said at my beginning here, my polynomials capture the ENSO effects to end 2009 that the IPCC’s and Tamino’s linear trends totally ignore, and that the severe NH winter currently observable for 2010-2011 confirms in spades.

    [Response: When will you admit to yourself that you really don’t know what you’re doing?

    The polynomial fits are NOT statistically significant. The “downturn” is in the NOISE not the signal. The noise is not just measurement error, it’s genuine noise in the physical phenomenon itself — global average temperature is a partly stochastic process. All you’ve shown is that the noise affects a meaningless polynomial fit. Big F’ing deal, that’s something that not only can happen, it must happen. For you to interpret it as a downturn in the warming signal which shows how wrong the IPCC is, is foolish. For you to insist on it even after you’ve been told the error of your ways, is arrogant as hell — especially since your ignorance of how to do the analysis is rather glaring.

    There are other issues too, especially the fact that the noise is not “white noise.” Maybe we can discuss that if you first admit that you are wrong — I’m not interested in educating the stubbornly arrogant ignorant.

    It’s great for you to play around with numbers and explore ideas, but stop believing you’re right and the IPCC is wrong, because you’re not. Admit that you fell victim to “Dunning-Kruger” and you might get some respect. Otherwise, stop bothering us.]

    • Tim, I take it that you didn’t take my advice and look at the article on Akaike Information Criterion. Pity. You would have learned an important lesson. The thing you are trying to maximize is predictive power, not goodness of fit. There is not enough information in the data to justify a 5th order polynomial. It just ain’t there. So your basic contention is BS.

    • Tim, the reason the monthly regression seems ‘insane’ to you is simply that you’re misinterpreting it. The coefficient in the regression represents the warming produced by an increase in the ‘time’ value of one unit. The data may be monthly, but the time data are measured in years, so that January 1979 is represented as 1979.04, February 1979 is 1979.13, and so on. So the annual warming is 0.0167 K, almost identical to that derived using the annual mean, as it should be.

      I would also point out that the difference between the GISS and RSS trends is within one standard error for either coefficient. And those standard errors are likely too small, since this simple regression doesn’t account for autocorrelation.
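The point about time units is easy to verify with a quick simulation (synthetic data; the 0.017 K/yr trend and 0.1 K noise level are illustrative): regress monthly values against time-in-years, then regress the annual means, and the slopes agree.

```python
import numpy as np

rng = np.random.default_rng(4)

# 31 years of synthetic monthly anomalies with a 0.017 K/yr trend;
# time stamps are measured in YEARS, as in the regression under discussion.
t_monthly = 1979 + (np.arange(31 * 12) + 0.5) / 12.0
anom = 0.017 * (t_monthly - 1979) + rng.normal(scale=0.1, size=31 * 12)

slope_monthly = np.polyfit(t_monthly, anom, 1)[0]

# Annual means of the same series, regressed against year midpoints.
t_annual = 1979.5 + np.arange(31)
slope_annual = np.polyfit(t_annual, anom.reshape(31, 12).mean(axis=1), 1)[0]
```

Both slopes come out near 0.017 K per year: the coefficient is warming per unit of the time variable, and the unit is a year either way. (As noted above, the standard errors from such a naive regression are too small because of autocorrelation.)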

  23. @Tamino, re Tim Curtin:

    Don’t bother. This is the same Tim Curtin who, over at Deltoid blog, spent dozens of posts defending his argument that ocean acidification is a good thing, because a neutral pH ocean would become a great freshwater lake, and could then be used for irrigation and drinking.

    I’m not making that up.

    See, he read a paper that derived a high-order (5th?) polynomial interpolation fit that included both salinity and total alkalinity (along with temp and rainfall, if I remember correctly). Actually, several interpolating functions – they were different for each ocean basin. I’d cite it, but I can’t be bothered to go back and find that thread on Lambert’s blog.

    Curtin decided that ‘total alkalinity’ meant pH (!), and that therefore these interpolating functions derived a predictive relationship between salinity and pH (!). He then extrapolated that high-order polynomial interpolating function (!), and deduced that at a total alkalinity that he somehow decided was neutral pH (!), that salinity would fall to near zero (!).

    Voila. World’s water problems, solved by massively high [CO2] and massively acidified oceans. And all it took to get there was a stunning and immovable ignorance of elementary chemistry, mathematics, and statistics.

    Yes, he was serious. No, I’m still not making this up. Curtin’s ‘contributions’ aren’t worth the effort it takes to approve his comments.

  24. The NY Times has a nice piece (only mildly polluted by Lindzen squawking about clouds) about the revolutionary work of Charles David Keeling on measuring atmospheric CO2.

    It made me wonder if anyone here knows of a trend graph of the size of the annual increments (not the total) of CO2 ppmv going back to the beginning of Keeling’s observations in the early ’50s. Anyone have a link?

  25. @Adam R.

    Jim Hansen updates different graphs each month on his website:

    This may be what you’re looking for:
    “Annual change of CO2 for each month, i.e. change from previous January to this January etc.”

    Click to access dCO2_MaunaLoa.pdf

  26. David B. Benson

    In the 2010 Oct issue of the Notices of the American Mathematical Society (v. 57 #9), Professor of Mathematical Statistics O. Häggström (Chalmers) reviews

    Ziliak & McCloskey
    The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives
    U. Michigan Press, 2008.

    Really? It seems that economists Ziliak & McCloskey (Z&M) are not actually complaining about statistical significance, but rather that all too many papers in several fields (they claim, though they deeply consider only economics), after establishing statistical significance, fail to go on to establish subject-matter significance, resulting in what Z&M call sizeless science. From the review: To sum up, if statistical practice is as bad as the authors say, what should be done? No easy fix is offered, but they do advocate a larger degree of pluralism among statistical methods. … Many of the authors’ comments seem to imply a commitment to the Bayesian paradigm, but it is not clear whether they are really aware of this. …

    OK, a worthy challenge: how does one establish the subject-matter significance of otherwise statistically significant findings?
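One way to make the Z&M complaint concrete (simulated data; all numbers hypothetical): with a large enough sample, an effect far too small to matter for any subject-matter purpose is still overwhelmingly “statistically significant”. Effect sizes, confidence intervals, and domain judgment have to supply the “oomph” that a p-value cannot.

```python
import math
import random

random.seed(5)

def mean(x):
    return sum(x) / len(x)

def var(x):
    m = mean(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

# Two groups whose true means differ by a substantively trivial 0.01 sd.
n = 1_000_000
a = [random.gauss(0.00, 1.0) for _ in range(n)]
b = [random.gauss(0.01, 1.0) for _ in range(n)]

# Large-sample z statistic for the difference in means.
z = (mean(b) - mean(a)) / math.sqrt(var(a) / n + var(b) / n)
effect = mean(b) - mean(a)   # effect size, in units of one standard deviation
```

The z statistic lands far beyond any conventional significance threshold, yet the effect is about a hundredth of a standard deviation: statistically significant, substantively sizeless.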

  27. Ray Ladbury: Thanks, I did check Akaike. Goodness of fit is very important, even if ignored by IPCC WG1 of AR4. Reliance of linear GCMs based on poorly fit linear trends for projecting future trends is mistaken. Hopefully AR5 will follow your advice; I see no mention of Akaike in AR4.

    My last was intended only to correct my error in calculating average annual anomalies from tamino’s monthly data. I will next apply Akaike. I think Google provides better guides than Wiki, but thanks very much for making your point.


    [Response: First: there are issues involved with applying AIC to time series with strong autocorrelation. I doubt you have the knowledge base, or the proper appreciation of your astounding ignorance, to comprehend them.

    Second: you are a hopeless victim of Dunning-Kruger. Goodbye.]

    • Reliance of linear GCMs…

      Linear GCMs?

    • Tim,
      I’m sorry, but that’s just sad. GCMs are dynamical models, not “linear models”. Please, please, please, take some time to educate yourself about the science so that you will at least understand what you are arguing against.

  28. @Michael T

    The Columbia graph is exactly it, thanks.

  29. I’m not a math person, so I need some help here.

    Looking at this graph:

    It takes around 155 years for the anthropogenic portion of atmospheric CO2 to reach approximately 27 ppm (growing from 0 ppm to around 27 ppm). It then took 60 years to add 81 additional ppm (27 + 81 = 108).

    If we continue the current growth in CO2 emissions, we will add a further 172 ppm by 2050 (27 + 81 + 172 = 280), which doubles the pre-industrial level of 280 ppm.

    This is not like a financial calculation, as the pre-industrial 280 ppm is not producing additional CO2 (like interest). It is like coins in an inherited piggy bank: the only way to get more coins into the bank is for the heir to earn additional coins to put in it (human activity).

    For BAU, what is the correct way to state the rate of growth of the anthropogenic portion of atmospheric CO2?

    • JCH, It’s a bit more complicated than that. A bit over half of the CO2 humans have produced has gone into the oceans, so there’s the question of whether the oceans continue to absorb at the same rate or begin to saturate. Then there is the question of how much CO2 is released as permafrost melts and begins to outgas, or as clathrates melt. There is also the question of the mix of fuels in your “BAU” scenario: assuming the current mix is almost certainly wrong due to Peak Oil and Peak Natural Gas, and coal adds more CO2 per Joule of energy. However, for the sake of estimation, you could assume fossil fuel consumption scales roughly as global energy demand, which grows about a percent slower than the global economy. Then assume half the CO2 stays in the atmosphere.
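That recipe can be put into a back-of-envelope sketch (every number below is an illustrative assumption, not a measurement): let emissions compound at roughly the energy-demand growth rate, and keep half of each year’s emissions airborne.

```python
# Illustrative assumptions only:
EMITTED_NOW = 4.0     # ppm-equivalent of CO2 emitted per year at the start
AIRBORNE = 0.5        # fraction of each year's emissions staying in the air
GROWTH = 0.02         # assumed annual growth rate of fossil-fuel emissions

def added_ppm(years, growth=GROWTH):
    """Cumulative ppm added to the atmosphere over `years`, with the
    annual airborne increment compounding at `growth` per year."""
    total = 0.0
    rate = EMITTED_NOW * AIRBORNE   # ~2 ppm/yr accumulating at the start
    for _ in range(years):
        total += rate
        rate *= 1.0 + growth
    return total

ppm_added_by_2050 = added_ppm(40)   # roughly 2010 -> 2050
```

With these numbers about 120 ppm is added over 40 years. The “rate of growth” that matters is thus the growth rate of the annual increments (the coins being added), not interest on the pre-industrial 280 ppm stock, which matches the piggy-bank analogy.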

  30. Interesting data: Canadian maximum temperatures for Dec. 23rd. Windsor, Ontario (across the river from Detroit): 0.0 degrees Celsius. Alert Airport (82 degrees north, the most northerly point in Canada): -2 degrees Celsius. Warming climate in the north!

    • Yeah; my brother lives in Windsor. I’m in the Atlanta area, and last week we were colder than Narsarsuaq, Greenland. (If I’ve remembered how to spell that.) Very weird patterns, to be sure.

      Of course, one of the frustrating things about this is that the low population density of these Arctic areas, compared with the areas in Europe and Eastern North America that are getting thoroughly chilled, means a lot more PR bang from the latter than the former.

      I think some folks also have a tendency to equate the “mental space” taken up by remote Arctic areas with the physical space that they occupy. Of course, that is not accurate; for instance, Hudson Bay and Foxe Basin together occupy about a million km², more than 4 times the surface area of the UK.

  31. This is most assuredly off-topic, but hey what’re open threads for?

    In my travels of the interwebs I often come across links to posts by Tamino which evidently added greatly to the level of understanding in the discussion (or in some of the more WTF ones, just went some way to reduce the stupid to tolerable levels). But bugger it, the links are dead. Is there not an archive of older posts from Open Mind?

  32. Stu N,
    The Yooper posted a link to archived versions.
    It would be really helpful if someone had time to index these.

    • Thanks Ray, Merry Christmas to you.

      It may be impertinent to suggest this, but given the hours Tamino must put in to his usually rather technical blog posts, perhaps he has an organised archive of his own that he could upload?

      Is it thanks to wordpress that these old posts disappear? I will admit that I find it very frustrating. The wayback machine certainly can’t be relied upon to preserve every post!

  33. If you think that WUWT is bad when Watts is around, just check the first thread that went up now that he is absent for the holidays: it purports to show Trenberth is wrong by linking to a paper on solar activity and temperatures that in fact fully supports what Trenberth said, and even adds additional evidence in support of his statement! Now the author, Alec Rawls, is desperately trying to claim, on no evidence whatsoever, that temperature data since that paper was published contradict the claim they made! It’s fun to watch!

    • Even Bob Tisdale is hammering him. I like the point that Usoskin et al. used the MBH99 reconstruction to demonstrate their point.

      It’s funny to watch the crew swallow this, hook, line and sinker. Waiting for smokey to chime in, though.

      • OK, I couldn’t resist, took a peek, and was amply rewarded by heaps of stupid.

        Deech56, you’ll be glad to hear that Smokey has chimed in, and the level of discourse is already so low his posts have done nothing to lower it!

    • Apparently, that thread wasn’t stupid enough because now they started another thread denying the modern CO2 rise is anthropogenic!

      The Christmas holidays would be so boring without WUWT to provide the entertainment!

      • That post’s so stupid that even David Scott Springer (of “Uncommon Descent” fame) points out that it’s stupid …

        That takes uncommon skill on the part of the poster.

        On the other hand, it’s all dressed up sciencey-style with references and diagrams and all that, which has impressed regulars over there.

      • So one cannot trust proxy data and computer models over direct observation unless they give the right answer. Of course saying that nothing’s certain is just as good as a “right” answer. Nothing like selective skepticism.

      • Yeah…Just when things there can’t get any stupider than they already are, they get even stupider again. It is simply mind-boggling.

      • Watts will come back and limit the stupid to high school diploma level, never fear!

      • BTW, Joel, thanks for wading in as often as you do – when people link to a WUWT article I look for your comments to save me the time of debunking the claim being made.

      • Horatio Algeranon

        Some of us actually look forward to it:

        I’m dreaming of a Watt Christmas…

      • Thanks, Deech56! It sometimes seems like a fool’s errand spending so much time over there. So, it is good to hear that my posts there do come in useful.

      • Now it’s gotten even sillier yet again… In a followup post, Dave Middleton is trying to demonstrate a correlation between sampling rate and the CO2 concentration you get from the ice core data. The problem is that he does this by comparing two cores, Law Dome and Taylor Dome; but the Law Dome record extends to such recent levels that it contains post-1900 air in the ice, so what he attributes to an increase in CO2 due to sampling rate is just an increase in CO2 due to making measurements after anthropogenic CO2 began significantly raising the levels!

        I explained this in a comment there last night, but alas my posts seem to be delayed there. (I think they are automatically put into the SPAM folder and then have to be recovered by the moderators by hand.) So it still hasn’t appeared.

        In the meantime, one of the commenters has completely misread a paper on diffusion of CO2 in the ice cores to conclude that it is a very significant problem when the paper actually basically concludes that it is pretty much insignificant.

      • Rattus Norvegicus


        It appears as though you are persona non grata there now. Not to worry though, Ferdinand Englebeen is taking Middleton to task quite nicely.

      • I probably should have kept quiet… ;-)

      • Oh wait – the posts are there, but buried upthread.

      • Well, I guess it was a slow weekend there on WUWT after the holidays, because they (in particular, Joe D’Aleo), have now switched from AGW-denial to stratospheric ozone depletion denial…and nearly all the commenters are supportive of it:

    • I did like Mike D’s comment, which I am sure was sarcastic.

      • Here’s hoping YOU were being sarcastic! If not, right-click Mike D’s name and go to ‘his’ website. Browse that site. Do wear some protective clothing and remove anything from your immediate vicinity that can break and/or spill. Dubrasich is a true Wattsian. He most assuredly was NOT being sarcastic.

      • Western Institute for Study of the Environment … W.I.S.E.

        As in the so-called “Wise Use” movement, i.e. pro-exploitation, anti-conservation management of public lands.

        There’s some truly fantastic, conspiracy-theory-level crap on that site, like this:

        3. Weyerhaeuser dreamed up and engineered the Northwest Forest Plan. The roots go back to Arkansas when Bill Clinton was governor and a Weyerhaeuser puppet. Big W is the largest landowner in AK, in case you didn’t know. With Slick Willy as Pres, Big W seized the opportunity to shut down 25 million acres (much more than that eventually) of Fed land (esp. high site Douglas-fir land).

        As someone who was involved in the events that led to the NW Forest Plan, and who knew (and has worked with) some of the biologists who helped cobble it together, let me just say that the claim is false and I’ll leave it at that.

      • That was a serious comment?! I think I’ve been Poe’d. Thanks for the correction, all.

  34. Tamino created a post about determining the minimum length of the temperature record necessary to give a definitive trend. I think it was about 21 years. Anybody have the address? I can’t find it.

    • Re: “How Long?”

      It is indeed gone. Here’s a summary of Tamino’s conclusion from that post:

      “That does not mean that there’s been no warming trend in those 15 years — or in the last 10, or 9, or 8, or 7, or 6 years, or three and a half days. It only means that the trend cannot be established with statistical significance. Of course, it’s another common denialist theme that “there’s been no warming.” This too is a fool’s argument; any such claims are only statements about the noise, not about the trend. It’s the trend that matters, and is cause for great concern, and there’s no evidence at all that the trend has reversed, or even slowed.”

      The Yooper

  35. Richard, I recall the post you’re talking about. It’s called ‘How long’.

    Unfortunately it’s disappeared into the black hole I was talking about. I don’t know if there’s any way to retrieve it. This is the URL:

    …but it leads nowhere. This page can’t be found on the internet archive. It was an excellent post which I wish I could re-read.

    • Thanks Stu, Daniel, that’s the one.

      Does anybody remember the process? I’m guessing it was: fit a trend and show that the residuals are normally distributed. Is that the right approach? And how do I check the normality of the residuals over different time periods? E.g. what looks good at 15 years might go Pete Tong at 17.
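One way to build intuition for the “how long?” question, short of recovering the original post, is a quick Monte Carlo: simulate trend-plus-noise series of increasing length and see when an OLS trend becomes reliably significant. This is only a sketch with invented numbers (trend and noise level are assumptions), and it uses white noise, whereas real temperature data are autocorrelated, which lengthens the required record:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def years_to_significance(trend=0.017, sigma=0.1, max_years=40,
                          alpha=0.05, n_sim=200):
    """Shortest record length (in years) at which an OLS trend fit to
    synthetic trend-plus-white-noise data is significant at `alpha`
    in at least 95% of simulations. All parameter values are invented."""
    for n in range(5, max_years + 1):
        t = np.arange(n)
        hits = sum(stats.linregress(t, trend * t + rng.normal(0, sigma, n)).pvalue < alpha
                   for _ in range(n_sim))
        if hits / n_sim >= 0.95:
            return n
    return None

n_min = years_to_significance()   # minimum length under these toy assumptions
```

Accounting for autocorrelation (which the original post presumably did) pushes the answer further out, which is how one lands at figures like the ~21 years mentioned above.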

  36. What is it with so many “skeptics” that they start off with “I’m a retired engineer with 43 years’ experience at blah-de-blah, and I’ve looked at this global warming nonsense, and let me tell you, back in my day, if we didn’t file triplicate copies of form ACH-9173/R on our work, we’d be out on our asses. How do these charlatans get away with it?”

    I imagine these fellows’ wives, in order to keep their hang-around-the-house hubbies occupied and out of the way, told ’em to get on the computer, leave the email from the grandkids alone, but please find something to keep themselves busy.

    • Hmm, the “Get off my lawn” theory of denialism.

    • One problem with that statement of being a retired engineer with 43 years’ experience is that most of the time there’s no proof that he is what he claims to be.
      He might be what he claims, but it’s just as likely that he’s a kid in his mommy’s basement or a hired gun working at a right-wing think tank.

      Another problem with an engineer spouting off about AGW is the simple fact that he is out of his league.
      That PhD in engineering is worth diddly squat when “trespassing” in other fields of expertise. That’s when the term “piled higher and deeper” comes to mind.


    If anyone knows a way to generate an index, that would help!

  38. Part 2 of Index, 2008:

    Jan 9, 2008
    Dead Heat

    Jan 11, 2008
    Hit You Where You Live

    Jan 17, 2008
    Down Under

    Jan 24, 2008
    Global Temperature from GISS, NCDC, HadCRU

    Jan 28, 2008
    Data Links

    Jan 31, 2008
    You Bet!

    Feb 3, 2008
    Exclamation Points !!!

    Feb 5, 2008
    Open Thread

    Feb 8, 2008
    Outstanding Video

    Feb 16, 2008
    PCA, part 1

    Feb 20, 2008
    PCA, part 2

    Feb 21, 2008
    Practical PCA

    Feb 25, 2008
    One of these things is not like the others

    Feb 28, 2008
    Hansen’s Bulldog

    Mar 2, 2008
    What’s Up With That?

    Mar 5, 2008
    Open Thread

    Mar 6, 2008
    PCA part 4: non-centered hockey sticks

    Mar 14, 2008
    Water World

    Mar 19, 2008
    PCA part 5: Non-Centered PCA, and Multiple Regressions

    Mar 20, 2008
    A More Perfect Union

    Mar 22, 2008
    Open Thread

    Mar 22, 2008

    Mar 26, 2008
    Recent Climate Observations Compared to (IPCC) Projections

    Mar 27, 2008
    How Not to Analyze Data, part 1

    Mar 29, 2008
    Get Real!

    Mar 30, 2008
    How Not to Analyze Data, part Deux

    April 1, 2008
    How Not to Analyze Data, part 3

    April 2, 2008
    Open Thread

    April 4, 2008

    April 5, 2008
    Stalking the Elusive Solar-cycle/Temperature Connection

    April 7, 2008
    How Not to Analyze Data, part 4

    April 8, 2008
    Shocking … uh … Surprising … um … Notable … well … Rather Ordinary News from Mauna Loa

    April 9, 2008
    Summer Snow

    April 11, 2008
    Open Thread

    April 15, 2008

    April 16, 2008

    April 19, 2008
    City of Musicians

    April 20, 2008
    World Wide Web of Science

    April 23, 2008
    Note to Readers

    May 4, 2008
    Highs and Lows

    May 7, 2008

    May 10, 2008
    Open Thread (#2)

    May 13, 2008
    Attack of the 50-foot Tornado

    May 18, 2008
    Decadal Trends

    May 27, 2008
    PDO: the Pacific Decadal Oscillation

    May 30, 2008
    Drought in Australia

    June 8, 2008
    Victoria Rainfall Fall Rain

    June 10, 2008

    June 12, 2008
    The Big Thaw

    June 27, 2008
    Open Thread #3

    July 2, 2008
    Cycling Carbon

    July 8, 2008
    Happy Happy Joy Joy!

    July 8, 2008
    Heat Wave

    July 8, 2008
    Cold Comfort

    July 10, 2008
    Reverend Bayes

    July 13, 2008

    July 13, 2008
    Dalton Gang

    July 17, 2008
    Arctic Ice Update

    July 20, 2008
    Cold War

    July 21, 2008
    Jury Duty

    July 25, 2008
    Open Thread #4

    July 28, 2008
    Spencer’s Folly

    July 30, 2008
    Spencer’s Folly 2

    Aug 1, 2008
    Spencer’s Folly 3

    Aug 3, 2008

    Aug 4, 2008
    New Kid in Town

    Aug 4, 2008
    To AR1 or not to AR1?

    Aug 5, 2008
    Revising Mauna Loa CO2 Monthly Data

    Aug 7, 2008
    A (brief) Tale of Three Sites

    Aug 8, 2008
    Yet More CO2

    Aug 10, 2008
    Open Thread #5

    Aug 13, 2008
    Sea Ice Hyperbole

    As I said previously, I’ll try & get these up over at Skeptical Science as hyperlinks.

    Merry Christmas to all, and to all a good night!

    The Yooper

  39. Daniel,
    Many thanks for the thoughtful Xmas/Newtonmas present. It was just what I wanted!

  40. FYI:

    Over on Neven’s Arctic Sea Ice blog, a reader has done an analysis of the Arctic sea ice loss, with a log-fit (R2=0.933253) showing 2011 as the last full-ice year:

    So much for the “recovery”.

    The Yooper

    • For me, this has problems. Over a short interval of any polynomial series, you can get an excellent exponential fit, and it’s completely meaningless.

      If there was a physical basis for an exponential fit, then I would be much more convinced. I think we can agree that the linear fit is poor, but arguments can be made for a step function or the quadratic fit.

      The geometry of the problem (ice is 3-dimensional, and melts on all faces due to various processes) suggests that a cubic fit might be justified. But I’m the first to point out that I’m really reaching with that one!

      I think the most important thing here is that a statistical model is unlikely to do much good, and dynamic model simulations show that the ice loss curve (extent) is going to be roughly ogive in shape (S-shaped, usually called ogee in architecture).

      If I had to make a prediction, I would say that the Arctic will be fully navigable by 2015, but extent and volume measures will remain above zero.

      • I quite agree that random curve fitting is not a profitable exercise (the above trainwreck from Tim Curtin is not the first time I’ve seen his high-order-polynomial-fit-to-the-noise-fail). But by way of background, the discussion at Neven’s wasn’t really intended as an attempt to predict an ice-free date exactly. A lot of air has been expended there and at Patrick Lockerby’s Chatterbox about trends in ice extent / area but it’s seeming more likely that volume is a more relevant trend to be looking at. While extent projections deliver “ice free” forecasts in the 50 year range, the PIOMAS graph, with a linear trend, points to a 10-20 year timeframe.

        But to me (and some others), the PIOMAS linear trend doesn’t look right, and a curve would be a better fit (to the trend, not just the noise). So we took a punt at that to see if it made a difference, and found that it brought it into the very near future – numbers that agree with Maslowski’s bleak projections. What constitutes “ice-free” and the likelihood that the real curve is an ogive have been discussed there as well. I have neither the chops nor the tools to forecast that accurately, but was fishing for someone who might. In any case, it seems that the tail on that s-curve would likely be a low volume for a fairly short period.

        It was not so much about producing a date to bet on, but really just to illustrate what a difference you get by analysing volumes rather than extents, and to show that even using volume, the PIOMAS linear trend is probably too optimistic. If someone feels like giving it a more rigorous treatment than I have / can, I’d be very interested to see what comes out.

        For now, I know that I don’t know enough to base any analysis on a physical model. With so many uncertain feedbacks, I doubt even a snowman could do so confidently.

      • I’m not pretending to be an ice expert.

        But a naive analysis would indicate that future winters will produce a significant amount of first year ice every year, for a significant time to come. Winter extent has been changing more slowly than summer extent. Also, first year ice thickness has reduced, but not all that much (it’s hard to say exactly without CryoSat).

        Therefore, the rapid volume loss associated with loss of multi-year ice (with a corresponding reduction in average thickness) is going to end. Future thickness reduction will be from melting alone, starting from a similar “nearly all first year ice” starting point every year.

        Hence the prediction of a tail. But the tail will arrive sooner than the AR4 models forecast, and may not last as long. But if there is no tail at all, I will be surprised.

        Taking this any further would require splitting the volume loss into loss from vanishing multi-year ice and loss from melting alone (and don’t ask me about increased ice transport). This is further than I want to go right now.

      • Frank D.
        I agree that volume may well be the more relevant parameter, and that it is not unlikely that melting would accelerate as volumes decrease (and surface to volume ratios increase). However, when it comes to curve fitting, adding additional parameters requires that they improve the fit to the data exponentially (as measured by likelihood, anyway).

        I think your best bet would be to fit an “expected volume” curve with some Gaussian noise around it and see where you hit the sweet spot.
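The “extra parameters must pay for themselves in likelihood” point can be made concrete with an information criterion like AIC. A toy sketch on synthetic, invented volume-like data (not real PIOMAS numbers), comparing a linear and a quadratic fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "volume-like" series: a quadratic decline plus noise
# (illustrative only -- the coefficients and noise level are invented)
t = np.arange(30)
y = 20.0 - 0.1 * t - 0.01 * t**2 + rng.normal(0, 0.5, t.size)

def aic_polyfit(t, y, deg):
    """AIC of a least-squares polynomial fit, assuming Gaussian residuals."""
    resid = y - np.polyval(np.polyfit(t, y, deg), t)
    n, k = y.size, deg + 2               # polynomial coeffs + noise variance
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * loglik

aic_lin = aic_polyfit(t, y, 1)
aic_quad = aic_polyfit(t, y, 2)
```

Here the quadratic wins because the underlying curve really is quadratic; on pure noise, the penalty of 2 per extra parameter would typically favor the simpler model, which is the guard against fitting curves to the noise.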

  41. What about this post in WUWT:

    CO2: Ice Cores vs. Plant Stomata

    It states that:
    -Plant stomata suggest that the pre-industrial CO2 levels were commonly in the 360 to 390ppmv range.
    (source: Kouwenberg, 2004. APPLICATION OF CONIFER NEEDLES IN THE RECONSTRUCTION OF HOLOCENE CO2 LEVELS. PhD Thesis. Laboratory of Palaeobotany and Palynology, University of Utrecht.)

    -Plant stomata data show much greater variability of atmospheric CO2 over the last 1,000 years than the ice cores and that CO2 levels have often been between 300 and 340ppmv over the last millennium, including a 120ppmv rise from the late 12th Century through the mid 14th Century.
    (source: Kouwenberg et al., 2005.”Atmospheric CO2 fluctuations during the last millennium reconstructed by stomatal frequency analysis of Tsuga heterophylla needles”. GEOLOGY, January 2005.)

    -A recent study (Van Hoof et al., 2005) demonstrated that the ice core CO2 data essentially represent a low-frequency, century to multi-century moving average of past atmospheric CO2 levels.The stomata data routinely show that atmospheric CO2 levels were higher than the ice cores do.
    (source:Van Hoof et al., 2005. “Atmospheric CO2 during the 13th century AD: reconciliation of data from ice core measurements and stomatal frequency analysis”. Tellus (2005), 57B, 351–355.)

    This is not the typical nonsensical stuff found on blogs like WattsUpWithThat. Tamino, what do you think about these studies showing high levels of CO2 in the recent past?

    (By the way it seems like an interesting material for a post)

    • Do you have any idea how uncertain and sketchy the stomata results are? Very.

      Yes, this is exactly the typical nonsense we expect from WTFUWT. We have excellent, highly reliable CO2 data for the recent past, that fits extremely well with the excellent, highly reliable direct ice core data. The different ice cores also show excellent agreement with each other.

      We also know from our present readings that CO2 doesn’t fluctuate wildly on century or sub-century timescales. It is true that our CO2 readings are lower frequency the further back we go. But that’s a red herring. We have very high frequency data for the last millennium.

      Anybody taking highly unreliable proxy results with huge uncertainties, and pretending that they trump well established science and direct measurement is an idiot. There is no contradiction here. Stomata are simply unreliable.

      And Watts is an idiot. But hopefully you know that already.

      The nice thing about actually checking facts and knowing things is I could write this entire comment without having to read whatever nonsense Watts wrote. Yay!

  42. On the subject of missing posts, there was once a thread about humorous graphics and diagrams about global warming.

    One of them was a Venn diagram along the lines of ‘percentage of glaciers retreating’ (large circle) and ‘percentage of glaciers advancing’ (small circle), with a smaller circle within that representing the glaciers ‘sceptics’ pay attention to.

    I can’t find it in the archives or image search – anyone remember the source of that or have a copy?

  43. VanHoof, the main author on many stomata/CO2 studies, writes:
    “The magnitude of the observed CO2 variability implies that inferred changes in CO2 radiative forcing are of a similar magnitude as variations ascribed to other forcing mechanisms (e.g. solar forcing and volcanism), therefore challenging the IPCC concept of CO2 as an insignificant preindustrial climate forcing factor.”
    Plant Ecology, Volume 183, Number 2, 237-243
    DOI: 10.1007/s11258-005-9021-3
    Stomatal index response of Quercus robur and Quercus petraea to the anthropogenic atmospheric CO2 increase
    Thomas B. van Hoof, Wolfram M. Kürschner, Friederike Wagner and Henk Visscher

    The category is “climate sensitivity”
    Watts finds this sort of information reassuring because, why?

    • The number of sources of error for stomatal proxies is quite ridiculous.

      Stomatal densities vary from leaf to leaf just on a single tree. Stomatal response varies significantly between species. So even if we had a time machine, it would be hard to reconstruct CO2 from stomata.

      But we don’t have a time machine. Instead, we have imperfect fossil remains, and often there is difficulty even identifying what species the leaf fragment is from. There is even more difficulty in precisely dating the leaf.

      Then there’s the CO2 ceiling.

      It’s amazing they get results at all. The massive variance and uncertainty, though – it’s hard to call any of the results useful in any way. Yet.

      • David B. Benson

        Here are useful results about the Miocene based on leaf stomata counts.

        The impact of Miocene atmospheric carbon dioxide fluctuations on climate and the evolution of terrestrial ecosystems

        Wolfram M. Kürschner , Zlatko Kvaček, and David L. Dilcher

      • It’s good to see the 2-dimensional error bars. They somehow don’t make the transfer to multi-proxy studies I have seen.

        The data does show some self-consistency, which is encouraging. It means this method is beginning to overcome some of the merely statistical sources of error (but they repeatedly note that the number of observations is low). However, there is still significant disagreement with other proxy records, and no simple way to reconcile them. The CO2 ceiling, and the fact that we simply can’t know exactly how prehistoric plants responded to CO2 are both unquantifiable sources of error.

        Is this all useful? It’s good science, and it may provide the basis for more useful work. But it’s not exactly a clear answer.

  44. Links to the available lost Open Mind posts are now available at:

    I’d advise saving the ones you cherish offline to keep them for posterity.

    The Yooper

  45. “Like I’ve said upthread, I hope we get a couple of weak solar cycles so that a fairly direct comparison of CO2 vs solar forcings can be made.”
    -comment by denier
    windansea in one of the 2007 threads

    Now that we’ve reached the end of the warmest decade in the record, coincident with a weak solar cycle, I wonder if he is happy his hope was fulfilled?

  46. Here’s an interesting post at RealClimate that uses a recent bacteria study to show that scientists are not afraid to go after the “consensus” when they think the science is wrong:

  47. Possibly a silly question…

    Fourier analysis of a discrete, regularly-spaced sample of size N is usually conducted on, and only on, N integer multiples of the fundamental frequency (1/N). E.g. the periodogram might be calculated for frequencies f = j/N {j: 0, 1, 2, …, N-1} or (if N is odd) {j: -(N-1)/2, …, 0, …, +(N-1)/2}. What’s so special about those frequencies compared to e.g. f = j/2N {j: 0, 1, …, 2N-1} or f = j/39758 {j: 0, 1, …, 39757}? Do they capture all the information in the sample? Are they the only frequencies useful for inverting the DFT?

    [Response: The “standard” frequencies as usually defined all satisfy the *periodic boundary condition*. They do indeed capture all the information in the sample (which can be reconstructed, given those discrete Fourier components). The trig functions for those frequencies also form a complete orthogonal basis for functions defined on the set of observation times. And, all the Fourier components for those frequencies are independent.

    All told, there are clear reasons the “standard” set of test frequencies is useful and important. But you can still compute the Fourier transform for other frequencies and there may be good reason to do so.]

  48. Brill… there’s a lot of meat there… thank you. I hadn’t realized there were implications for independence; since I’m going to create confidence intervals for the periodogram using kernel smoothing, that’s useful information. I thought the choice of frequencies was odd, given that e.g. the frequency responses of window functions or FIR filters are calculated at higher resolution, so that f approaches a continuous variable. And I’ve read many sources which introduced the “standard” frequencies but none that explained why! Thank you again.
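For anyone wanting to see Tamino's points concretely, here is a small numpy sketch: the N components at the standard frequencies f = j/N match np.fft.fft, reconstruct the sample exactly (completeness), and come from mutually orthogonal basis functions (which is what underlies the independence):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 17                                # odd sample size, as in the question
t = np.arange(N)
y = rng.normal(size=N)                # an arbitrary real sample

# Fourier components at the "standard" frequencies f_j = j/N
freqs = np.arange(N) / N
F = np.array([np.sum(y * np.exp(-2j * np.pi * f * t)) for f in freqs])
assert np.allclose(F, np.fft.fft(y))  # these are exactly the DFT values

# Inverting with only these N components recovers the sample exactly:
# they capture all the information.
y_rec = np.array([np.sum(F * np.exp(2j * np.pi * freqs * m)) / N
                  for m in t]).real
assert np.allclose(y, y_rec)

# The corresponding complex exponentials are mutually orthogonal
E = np.exp(2j * np.pi * np.outer(freqs, t))   # N x N basis matrix
assert np.allclose(E @ E.conj().T, N * np.eye(N))
```

Components at non-standard frequencies can still be computed, as Tamino notes, but they are linear combinations of these N values, so they add no new information.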

  49. I think it’s really positive that old Tamino posts have been unearthed and widely linked (par exemple here and here):

    Personally, I have been very encouraged by the Open Mind bitesize approach, and found many hooks thereat to hang my learning on.

    I have been attempting to crunch my way through sufficient research to be able to make a coherent argument regarding “smoothing” of global temperature data, justifying the inherent usefulness of “subtracting” indices (indexes) of major internal climate variability from the raw data to be able to get at the trends underneath.

    Several key people keep cropping up (zum beispiel Professor P. D. Jones), and when trying to ascertain who had the idea first, guess whose name dropped out of the Internet Archives? Tamino, of course.

    I would like to be able to credit this approach to you, but since I don’t know your real name, I can’t :-

    But we will all know, you and I and everybody who follows your work, that it was you who thunk it ab initio.

    [Response: Thanks, but I’d be surprised if that were actually true.]

  50. Thanks to those who resurrected the 2008 01 “You Bet!” post. I’ve always liked that setup and now I’m playing with a modified version of it:

  51. And now for something completely different:
    In my day job, one of the biggest problems we have is trying to do statistical inference with very limited data. I’m in the process of developing a multi-stage Bayesian method that uses slightly less representative data to develop priors, which you can then use to strengthen conclusions about reliability. Nothing terribly new there.

    The question I had was about how to choose a distribution to do inference with, based on a prior over the parameters of the distribution. Specifically, one could pick the most probable distribution, or one could in effect average over the parameter space, treating the parameters as nuisance parameters, to get an “expected” distribution for the quantity you are interested in. I’m finding that this averaging approach tends to give significantly better results, especially when you have limited data. Anyone else ever look at this?

    [Response: Yes — it’s a very popular approach. In some circumstances, using the most probable distribution is so restrictive it doesn’t even get you in the ballpark, but averaging over parameter space gives very robust results. Enjoy.]
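A minimal illustration of the contrast (a toy Beta-Binomial reliability example with invented numbers, not the multi-stage method described above): the plug-in approach evaluates predictions at the single most probable parameter value, while the averaging approach integrates the likelihood over the posterior, yielding the posterior predictive distribution:

```python
import numpy as np
from scipy import stats

# Tiny dataset: 2 failures in 10 trials, uniform Beta(1,1) prior on the
# failure probability p (all numbers invented for illustration)
k, n = 2, 10
a, b = 1 + k, 1 + (n - k)                  # posterior is Beta(3, 9)

# Plug-in: predict with the single most probable p (posterior mode)
p_map = (a - 1) / (a + b - 2)

# Predictive probability of at least one failure in 5 future trials
p_plugin = 1 - (1 - p_map) ** 5

# Averaging: integrate the likelihood over the posterior, treating p as
# a nuisance parameter; for a Beta posterior this is the Beta-Binomial
p_avg = 1 - stats.betabinom(5, a, b).pmf(0)
```

With only 10 trials the two answers differ noticeably; the averaged answer accounts for parameter uncertainty, which is one reason it tends to behave better with limited data.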

  52. I’ve been playing a bit with Wolfram Alpha, but they don’t yet provide our favorite data series on CO2, temperatures, ice concentration… Do you know how “we all” could push in that direction, so we’d have a new tool for manipulating the data (and for me, probably for making new errors!)?

    (and hurray for The Yooper for resurrecting all these unique posts)

  53. Just wondering who might be keeping track of all those predictions of imminent cooling – who made them, when they were made and how they compare so far to real data?

    I also notice the “no warming since 1998” meme continues – the focus of this article being UAH satellite data, specifically for the month of October…

    “The smoothed running average for October was level with the 1998 figure – showing that for the past 12 years, there’s been no global warming. ”
    The whole article is so full of sciency-sounding stuff that it’s hard to know where to start in tracking what it gets wrong. Even to someone with limited expertise like myself, outright nonsense just jumps out.

    My ability to interpret data directly is minimal, but it’s clear that 1998 is a cherry-picked starting point (a strong el Nino), while the end of 2010 is experiencing a strong la Nina. And the MSU-AMSU data shows ENSO strongly. Would it be of any value to see temperature data adjusted for ENSO, or does that pose problems?
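On the ENSO-adjustment question: one simple approach is to regress temperature on a lagged ENSO index alongside a linear trend, then subtract the fitted ENSO term. A sketch on purely synthetic data (the lag, coefficients and noise levels are all invented; published adjustments choose the lag and additional covariates much more carefully):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly series (all numbers invented): a warming trend plus
# an ENSO-like signal that shows up in temperature `lag` months later
n, lag = 360, 4
t = np.arange(n)
enso = np.sin(2 * np.pi * t / 50) + 0.3 * rng.normal(size=n)
temp = 0.0015 * t + 0.1 * rng.normal(size=n)
temp[lag:] += 0.12 * enso[:-lag]              # lagged ENSO influence

# Regress temperature on [intercept, time, lagged ENSO] ...
X = np.column_stack([np.ones(n - lag), t[lag:], enso[:-lag]])
beta, *_ = np.linalg.lstsq(X, temp[lag:], rcond=None)

# ... and subtract the fitted ENSO term, leaving the trend easier to see
temp_adj = temp[lag:] - beta[2] * enso[:-lag]
```

The main problems are the ones you'd guess: the lag has to be estimated, ENSO indices are themselves noisy, and other factors (volcanoes, solar) also need removing before the residual trend is clean.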

  54. Hi Tamino.

    I was wondering whether or not you have seen this paper. Maybe it is something that you would like to speak about at some point?

    In contrast, and perhaps more essential to address, is the following paper by Paulo Cesar Soares, now being touted by the contrarians (the same journal that just published a dodgy paper by Knox and Douglass on OHC):

    • Maple Leaf, Gavin Schmidt has called the Soares paper perhaps the worst he’s seen–and I agree. It’s crap.

      • Hi Ray,

        Thanks. Figured as much. Apparently Soares was looking at monthly CO2 and T data.

        A Google search shows the paper being trumpeted widely from DenialDepot to every other contrarian blog out there.

  55. Sorry Tamino,

    The first paper I was referring to was this one:

  56. Søren Rosdahl Jensen

    David B. Benson

    Yes, I have considered including JMA’s data. The trouble is that I can only find gridded data. I have made a quick attempt at the area weighting to produce a global average, but it doesn’t work. The Scilab code is here:

    Comments are welcome. It is written in Scilab so the code is easy to read for those who are familiar with Matlab or Octave.
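Since the Scilab code isn't reproduced here, here is the usual area-weighting recipe sketched in Python instead: weight each grid cell by the cosine of its latitude, which is proportional to cell area on a regular lat-lon grid. The grid spacing and data below are assumptions for illustration; one frequent gotcha is missing cells, whose weights must be dropped along with the data:

```python
import numpy as np

# Assumed 5-degree grid with random stand-in data, for illustration only
nlat, nlon = 36, 72
lats = -87.5 + 5.0 * np.arange(nlat)           # cell-centre latitudes
data = np.random.default_rng(5).normal(size=(nlat, nlon))

# Each cell's area is proportional to cos(latitude): weight rows by that
w = np.cos(np.radians(lats))
global_mean = np.average(data.mean(axis=1), weights=w)

# With missing values (NaN), each surviving cell keeps its own weight;
# averaging rows first would silently distort the result
data[0, :10] = np.nan
W = np.broadcast_to(w[:, None], data.shape)
mask = ~np.isnan(data)
global_mean_nan = np.sum(data[mask] * W[mask]) / np.sum(W[mask])
```

The same two lines translate almost directly into Scilab (elementwise multiply by a cos-latitude weight matrix, sum, and divide by the sum of weights over non-missing cells).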

  57. Useful resource for info on studying tree-rings…

    “The Ultimate Tree-Ring Web Pages”

    … very slow even on a fast connection, but works well in a text browser.

  58. ~~The new phone book~~ New 2010 temp data is in:

    This just in: 2010 ties 2005 for ~~coldest~~ hottest year on record!

    ( NASA agrees )

    Yet more hot stuff (the “good” news keeps on coming, doesn’t it?):

    I hear they’ll be growing palm trees by Hudson Bay any time now…

    And here’s why they call it “up” north:

    Now that’s Polar Amplification in action!

    (H/T to Joe Romm at Climate Progress)

    The Yooper

  59. Sorry, used tags instead of

    My bad.

  60. I’m looking for a good resource with statistics related to extreme weather event frequency and how current events (Russian heat wave, flooding, etc) fit into a historical context. Much appreciated.

  61. Anyone…

    When a regression model contains a lagged dependent variable;

    y[t] = B[0] + B[1] y[t-1] + B[2] x[t] + e[t]

    and I want to do (many-steps-ahead) out-of-sample prediction, obviously I don’t have y[t-1] to predict y[t], so I use yhat[t-1] instead, iteratively, starting from the last observed value of y. Simulation shows this works well, but are there any caveats?
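A sketch of the iterated scheme on synthetic data (all coefficients invented). The main caveats: forecast errors compound through the lag, so prediction intervals widen with horizon much faster than the one-step residual variance suggests; OLS estimates of the lag coefficient are biased in small samples; and the future x[t] must themselves be known or forecast:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data from y[t] = b0 + b1*y[t-1] + b2*x[t] + e[t]
# (coefficients invented for illustration)
b0, b1, b2, sigma = 0.5, 0.8, 0.3, 0.2
n, horizon = 200, 20
x = rng.normal(size=n + horizon)        # exogenous regressor, known ahead

y = np.zeros(n)
for t in range(1, n):
    y[t] = b0 + b1 * y[t - 1] + b2 * x[t] + rng.normal(0, sigma)

# OLS fit on the observed sample
X = np.column_stack([np.ones(n - 1), y[:-1], x[1:n]])
bhat, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

# Iterated out-of-sample forecast: feed each prediction back in as the
# lagged value; errors compound through this recursion
yhat = np.empty(horizon)
prev = y[-1]
for h in range(horizon):
    prev = bhat[0] + bhat[1] * prev + bhat[2] * x[n + h]
    yhat[h] = prev
```

Note that the point forecast decays geometrically (at rate B[1]) toward the model's unconditional mean, so long-horizon iterated forecasts say less about the data and more about the fitted model.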

  62. Hello Tamino,
    If you haven’t seen it, Paul Krugman has a blog post up regarding changes in temperature extremes with warming trend. It seems like he’s got it all right, but he mentions that he’s been focusing on the issue. Still he might appreciate your insights.

    I’ll drop a note over there to check this site out too.

  63. I understand global warming only on the most basic level being a non-science person. I’ve asked this in another blog but just in case no-one answers me I’m asking here as well. In light of many denialist posts which point out the abnormally cold winters recently experienced in many parts of the US and Europe as an argument against global warming, would someone here care to explain to me in layman’s terms the probable causes of those extreme winters and whatever links there might be, if any, to global warming. Thanks.

    • The majority view is that normally you will have a strong Arctic Oscillation, where the Polar Vortex (Arctic cyclone) is strengthened by the extreme cold of the polar region and serves to largely isolate it from the rest of the Northern Hemisphere. However, with so much Arctic sea ice having melted and taking so long to reform during the winter, more warm water is exposed to the Arctic atmosphere, warming it. This results in a weakened Polar Vortex and a consequent strongly negative Arctic Oscillation. The Arctic is no longer as isolated from the rest of the Northern Hemisphere, nor the rest of the Northern Hemisphere from it, and cold Arctic air spills over into North America and Northern Europe.

      But places further North see extreme warm anomalies:

      Canada sees staggering mildness as planet’s high-pressure record is “obliterated”
      Ostro explains how global warming is changing the weather
      January 23, 2011
      With regard to Europe, at least, there is a minority view that with more open water in the Arctic, more moisture enters the atmosphere, so that places which may be warmer but are still below freezing (e.g., parts of Siberia) see more snow. Since snow has a high albedo, this has a cooling effect over parts of Europe.

      Frankly I find the former more credible and as having greater explanatory power, but I am a philosophy major turned computer programmer in unrelated fields.

    • Re: Ig Noramus

      There is an excellent post on that topic over at Skeptical Science, here.

      Hope that helps,

      The Yooper

    • Nothing wrong with Timothy’s explanation but if you want it dead simple – it’s been really warm in the arctic, and the cold air that’s normally there has been pushed south.

      For example, Hudson Bay took forever to freeze this winter (if indeed it’s actually frozen over entirely at all; I haven’t looked in a while), and when it was snowing hard in Europe and the eastern US, it was *raining* in Greenland.

  64. Ig–Since you are asking a sincere question, I’d hardly call you an ignoramus. I’m sure you’ve heard this before, but you need to keep in mind the distinction between climate and weather. Weather is local and short-term; climate is long-term and regional to global. There are lots of influences that can change the weather–quasi-cyclic patterns in ocean circulation like El Nino and others, volcanic eruptions, the 11-year solar cycle, etc. Whatever effect these influences have on the weather, we know one thing: they will eventually change. There are far fewer influences that manifest in the climate, which is determined by the global, long-term energy balance of the planet. Right now we are having a rather deep La Nina, which is leading to cool temperatures over much of the US and Europe. However, Canada and the Arctic are up to 20 degrees warmer than average, and last month Arctic sea ice was at its lowest January level on record.

    The difference between climate and weather is like the difference between investing and day-trading. With climate, as with investing, you rely on long-term trends. With weather, as with day-trading, you depend on short-term fluctuations. Which one would you trust more?
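    The investing-vs-day-trading point can be made quantitative with a simulation much like the Oklahoma City exercise above. Here is a minimal sketch in Python with NumPy; the 0.015 °C/yr trend and 0.1 °C noise standard deviation are illustrative assumptions, not real data. Fitting a linear trend to short windows of trend-plus-noise series gives slope estimates that scatter far more than fits to long windows:

```python
import numpy as np

rng = np.random.default_rng(0)

def trend_estimate_spread(window, n_sims=2000, slope=0.015, noise_sd=0.1):
    """Simulate annual anomalies (linear trend + white noise), fit an OLS
    trend to each realization, and return the spread of recovered slopes."""
    t = np.arange(window)
    slopes = np.empty(n_sims)
    for i in range(n_sims):
        y = slope * t + rng.normal(0, noise_sd, window)
        slopes[i] = np.polyfit(t, y, 1)[0]
    return slopes.std()

sd10 = trend_estimate_spread(10)   # "day-trading" window: 10 years
sd40 = trend_estimate_spread(40)   # "investing" window: 40 years
print(sd10, sd40)  # the 10-year spread is several times the 40-year spread
```

    With these (made-up) numbers, the scatter of 10-year trend estimates is comparable to the trend itself, so short windows can easily show a spurious cooling; the 40-year estimates cluster tightly around the true slope.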

    • La Nina brings them out. It’s like the full moon. During the 2007–2008 La Nina there were actual physicists commenting on RealClimate that multi-year Arctic ice would be in full recovery by 2010.

      Now we have this whole legion of people who think Tsonis & Swanson have proven that natural variability caused 20th-century warming.

      In terms of climate crazies, La Nina = full moon.

  65. Hi Tamino,

    Pielke and Watts are already highlighting the following paper, featured at Pielke’s place. I thought the random walk issue had been addressed in the lengthy “debate” at Bart’s place? Also, according to some analysis undertaken by BPL, the standard deviation of the global SAT record plateaus at around 40 years, which points to 40 years perhaps being a more desirable window.

    Anyhow, it seems the contrarians and denialists are giddy over this paper, so perhaps something needs to be said. Also, it won’t be the first time that GRL published something by contrarians that was wrong ;)

    [Response: No, it won’t be.

    Every few months the denialists get giddy over a new paper that they claim “blows the lid off global warming.” A few months later, they have to find another one.]
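    On the random walk issue: one way to see why a random walk is a poor model for global temperature is that the detrended variability of a random walk keeps growing with record length, whereas for a trend-plus-stationary-noise series it plateaus (BPL’s point). A minimal sketch in Python with NumPy, using a purely illustrative 0.1-unit step/noise size:

```python
import numpy as np

rng = np.random.default_rng(42)

def detrended_sd(y):
    """Standard deviation of residuals about an OLS linear fit."""
    t = np.arange(len(y))
    return (y - np.polyval(np.polyfit(t, y, 1), t)).std()

def mean_sd(kind, length, n_sims=500, step_sd=0.1):
    """Average detrended SD over many simulated series of a given length."""
    out = []
    for _ in range(n_sims):
        steps = rng.normal(0, step_sd, length)
        y = np.cumsum(steps) if kind == "walk" else steps
        out.append(detrended_sd(y))
    return np.mean(out)

for length in (20, 40, 80, 130):
    print(length, round(mean_sd("noise", length), 3),
          round(mean_sd("walk", length), 3))
# the "noise" column stays near 0.1; the "walk" column keeps growing
```

    Since the observed detrended variability of the global record levels off rather than growing without bound, the stationary-noise-about-a-trend picture fits, and the random walk does not.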

  66. Thanks to everyone who replied to my post. Considering the amount of info out there on this topic, I feel bad for not making a greater effort to find it on my own. Thanks for being helpful anyway. With all this learning, it won’t be long before I can give myself a more exalted name (Notso Ignorant?). Btw, reading through some of the past discussions, I love how Tamino deals with deniers who think they understand the maths and science better than professional scientists. “They’re not even wrong…” hahaha…

    • Ig,
      The quote originally comes from the physicist Wolfgang Pauli, in response to some particularly inept work. It illustrates a very important fact: there are far worse things than being wrong. Wrong is correctable. It may even educate us if we understand why we went wrong. Bullshit is worse than wrong. Vague is worse than wrong. Worst of all is the fear of being wrong–which causes people to express themselves incoherently so they can never be shown wrong.

  67. Here someone on YouTube is responding to a greenman3610 video and questioning the reliability of ice measurements by the GRACE satellites, referring to some papers. Any thoughts on that? Is it OK if I cut and paste the response here to rebut him?

    “And here is a paper on the difficult process of converting the data to make maps of it:
    – Converting Release-04 Gravity Coefficients into Maps of Equivalent Water Thickness by Chambers.
    And one can also go to Wikipedia Gravity Recovery and Climate Experiment and watch the Gravity Anomaly Animations on how gravity changes constantly.
    Now stop calling GRACE the holy grail for icesheet measuring!
    Well there’s more on GRACE:
    Evaluation of New GRACE Time-Variable Gravity Data over the Ocean by Don P. Chambers 2006
    “The statistics of the residuals represent an upper bound on the uncertainty of the GRACE data, as it ignores errors in both the Jason-1 and steric model and any non-seasonal steric variations that are in the altimetry but not in the steric-correction model.”
    Just read the conclusion. I think it says enough about the uncertainties of GRACE so far.
    you should also read the paper:
    Interannual variations of the mass balance of the Antarctica and Greenland ice sheets from GRACE by Ramillien et al 2006.
    It’s beautiful and you won’t love it.
    Sorry, but GRACE does not weigh ice sheets. It measures gravity not ice. Changes in gravity can be due to a lot of different things beneath the surface of the ice. Plate tectonics and isostasy also cause gravity changes.
    – Uncertainty in ocean mass trends from GRACE by Quinn et al 2010.
    “but full use of the data requires a detailed understanding of its errors and biases”. “