Fake skeptic draws fake picture of Global Temperature

Clearly, David Whitehouse has enough rope. To hang himself.

The WUWT blog has a post by David Whitehouse (of the “Global Warming Policy Foundation”) discussing global temperature data. It features this graph from the leaked copy of the not-yet-completed 5th assessment report (AR5) of the IPCC (Intergovernmental Panel on Climate Change):


What’s surprising is that Whitehouse spends so little effort discussing the graph itself. Instead, he chooses to paint a perversely false picture of global temperature change.

For one thing, Whitehouse is indignant about a comparison of the global temperature trend over the last 50 years to that over the last 100 years. As Whitehouse says:

In Chapter 2 the report says that the AR4 report in 2007 said that the rate of change global temperature in the most recent 50 years is double that of the past 100 years. This is not true and is an example of blatant cherry-picking. Why choose the past 100 and the past 50 years? If you go back to the start of the instrumental era of global temperature measurements, about 1880 (the accuracy of the data is not as good as later years but there is no reason to dismiss it as AR5 does) then of the 0.8 – 0.9 deg C warming seen since then 0.5 deg C of it, i.e. most, occurred prior to 1940 when anthropogenic effects were minimal (according to the IPCC AR4).

Why choose 50 and 100 years? Because they’re nice round numbers, that’s why. They also make the point — the temperature trend (estimated by linear regression) really is twice as fast over the last 50 years as it was over the last 100 years.
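The comparison itself is easy to reproduce with a least-squares fit. Here's a minimal sketch, using an invented accelerating series rather than the actual temperature data, just to show the mechanics of estimating the 50-year and 100-year trends:

```python
import numpy as np

# Invented illustration (not real temperature data): a series whose warming
# accelerates, so the recent trend is steeper than the long-term one.
rng = np.random.default_rng(0)
years = np.arange(1913, 2013)                      # 100 years ending in 2012
temps = 0.00008 * (years - 1913) ** 2 + rng.normal(0, 0.1, years.size)

def lstsq_slope(x, y):
    """Least-squares trend in deg C per year."""
    return np.polyfit(x, y, 1)[0]

slope_100 = lstsq_slope(years, temps)              # trend over the last 100 years
slope_50 = lstsq_slope(years[-50:], temps[-50:])   # trend over the last 50 years

print(f"100-yr trend: {slope_100:.4f} C/yr, 50-yr trend: {slope_50:.4f} C/yr")
```

With an accelerating series the 50-year slope comes out distinctly steeper than the 100-year slope, which is all the "double" comparison in AR4 amounts to.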

What’s actually not true is Whitehouse’s claim that “of the 0.8 – 0.9 deg C warming seen since then 0.5 deg C of it, i.e. most, occurred prior to 1940.” It takes some real cherry-picking to do that.

The claim that “most of the warming occurred prior to 1940” actually originated, as far as I know, in the mendacious “documentary” The Great Global Warming Swindle by Martin Durkin. To support that claim, Durkin simply faked the data.

Whitehouse just makes the claim, as if asserting it somehow makes it true. Let’s look at some actual data, from NASA. The only way to get 0.5 deg.C warming prior to 1940 is to take the difference between the lowest annual average and the highest annual average during that time period:


Taking the difference between the lowest and highest annual averages includes the noise of year-to-year fluctuations, which makes it a false representation of the actual global warming which took place.

You wanna play that game? OK. Let’s take the difference between the lowest and highest annual averages after 1940. That gives a post-1940 warming of 0.85 deg.C.
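The reason the min-to-max method misleads is easy to demonstrate. A sketch with purely synthetic numbers (a steady invented trend plus yearly noise, not real data):

```python
import numpy as np

# Synthetic series: a steady 0.01 C/yr trend plus year-to-year noise.
rng = np.random.default_rng(42)
years = np.arange(1880, 1941)
temps = 0.01 * (years - years[0]) + rng.normal(0, 0.1, years.size)

true_change = 0.01 * (years[-1] - years[0])   # warming actually in the trend
range_estimate = temps.max() - temps.min()    # the min-to-max "warming"

# The range systematically exceeds the trend change, because it adds the
# noise of the coldest and hottest individual years to the real warming.
print(f"trend change: {true_change:.2f} C, min-to-max: {range_estimate:.2f} C")
```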


That puts the lie to Whitehouse’s claim that “most occurred prior to 1940.” But then, using the range-of-annual-averages method gives a total warming of 1.06 deg.C — Whitehouse’s fake 0.5 isn’t even half of that, let alone “most.”

A better characterization of the net warming is to use the extreme values of a smoothed estimate of global temperature:


That puts the pre-1940 warming at 0.34 deg.C and the post-1940 warming at 0.64 deg.C, with a total range of 0.93 deg.C. Again, the pre-1940 warming isn’t even half, certainly not “most.”
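Here's a sketch of why smoothing first gives a fairer number; the centered moving average below is just a simple stand-in for a proper smooth, and the series is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1880, 2013)
temps = 0.006 * (years - years[0]) + rng.normal(0, 0.1, years.size)

def moving_average(y, width=11):
    """Centered moving average: a crude stand-in for a lowess-style smooth."""
    return np.convolve(y, np.ones(width) / width, mode="valid")

smooth = moving_average(temps)
raw_range = temps.max() - temps.min()         # inflated by yearly noise
smooth_range = smooth.max() - smooth.min()    # close to the true trend change

print(f"raw range: {raw_range:.2f} C, smoothed range: {smooth_range:.2f} C")
```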

If you insist on partitioning the warming between pre- and post-1940 so the two values add up to the total, a fair estimate gives 0.32 deg.C before and 0.61 deg.C after:


Any way you slice it honestly, the pre-1940 warming is not “most” of the total. It’s not even half. It’s more like a third. David Whitehouse’s characterization is a fake.

As for more recent temperature change, Whitehouse’s viewpoint is perhaps best summed up by this statement of his:

So since 1979 we have has [sic] about 16 years of warming and 16 years of temperature standstill.


First, Whitehouse is pulling an old favorite trick of fake skeptics: equating the lack of statistically significant warming with the lack of warming. If we look at global surface temperature according to the three main data sets (GISS, HadCRUT4, and NCDC), all three show warming over the last 16 years (2012 still has one month to go, but the difference won’t amount to a hill of beans):


Although all three trend lines slope upward, their slopes aren’t statistically significant. But that doesn’t mean they’re not upward. It just means that there’s not enough data in 16 years to tell for sure, statistically speaking, which way they’re going. That always happens — always — when the time span is short.
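The point about short spans can be illustrated with synthetic numbers. The series below is built to warm at a steady (invented) rate, yet over only 16 years the 2-sigma uncertainty on the fitted slope can easily be comparable to the slope itself:

```python
import numpy as np

# Synthetic 16-year series with a built-in warming trend plus yearly noise.
rng = np.random.default_rng(7)
years = np.arange(1997, 2013)
temps = 0.008 * (years - years[0]) + rng.normal(0, 0.09, years.size)

slope, intercept = np.polyfit(years, temps, 1)
resid = temps - (slope * years + intercept)

# Standard error of the OLS slope (assuming white residuals)
se = np.sqrt(resid.var(ddof=2) / ((years - years.mean()) ** 2).sum())
print(f"slope = {slope:+.4f} C/yr, 2-sigma uncertainty = {2 * se:.4f} C/yr")
```

When the uncertainty swamps the slope, "not significant" just means "too short a span to tell", not "no warming".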

That’s why fake skeptics like David Whitehouse like to focus on short time spans.

Whitehouse declares that we’ve had “about 16 years of warming and 16 years of temperature standstill.” According to Whitehouse, all the global warming earth has experienced recently was complete by 16 years ago (the end of 1996). Is that really so? Let’s take a closer look at the last 16 years, compared to what happened before that, starting in 1979 (Whitehouse’s choice).

I’ll do a similar exercise to one I’ve done before. I’ll take the data from 1979 (Whitehouse’s starting point) up to 16 years ago (less one month, the end of 1996). Then I’ll fit a trend line with least-squares regression. Then I’ll extend that trend line to the present, to see what would have happened if that trend continued.

If David Whitehouse is giving an honest portrayal of global temperature, then most of the data points from 1997 onward should be below the extended trend line, since the trend will keep rising but according to him, global warming stopped back then.
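The exercise is straightforward to sketch; the numbers below are invented stand-ins for the real data sets:

```python
import numpy as np

# Invented series warming steadily over 1979-2012 (not actual observations).
rng = np.random.default_rng(3)
years = np.arange(1979, 2013)
temps = 0.017 * (years - 1979) + rng.normal(0, 0.08, years.size)

fit_mask = years <= 1996                       # the 1979-1996 fitting period
slope, intercept = np.polyfit(years[fit_mask], temps[fit_mask], 1)
extended = slope * years + intercept           # that trend, extended to present

above = int(np.sum(temps[~fit_mask] > extended[~fit_mask]))
print(f"{above} of {int(np.sum(~fit_mask))} post-1996 years above the line")
```

If warming really had stopped at the end of 1996, most post-1996 points would fall below the extended line; if it continued at the same rate, they scatter roughly evenly about it.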

If we do this exercise using data from NASA GISS, we get this:


Let’s try using the HadCRUT4 data set instead:


Once more into the fray: let’s try using the NCDC data set instead:


Well well … Plainly, it is not the case that most of the data points from 1997 onward are below the extended trend line. As a matter of fact, for all three data sets, every year from 1997 onward has been hotter than expected according to that trend from 1979. There’s only one valid conclusion: David Whitehouse gave a fake portrayal of global temperature.

That’s what fake skeptics do.

What about the plot from the draft of the AR5 report? It compares projections based on multi-model averages from FAR (first assessment report), SAR (second assessment report), TAR (third assessment report) and AR4 (fourth assessment report) to observations (annual averages) from NASA GISS, HadCRUT4, and NCDC. In my opinion, there is a flaw in how the comparison is done. I don’t suspect it’s an intentional mistake, but I do believe it’s a mistake.

The flaw is this: all the series (both projections and observations) are aligned at 1990. But observations include random year-to-year fluctuations, whereas the projections do not because the average of multiple models averages those out. Using a single-year baseline (1990) offsets all subsequent years by the fluctuation of that baseline year. Instead, the projections should be aligned to the value due to the existing trend in observations at 1990.

Aligning the projections with a single extra-hot year makes the projections seem too hot, so observations are too cool by comparison. This is indeed a mistake — it would be just as much a mistake to align the projections with a single extra-cool year (like 1992), which would make the projections too cool and observations too hot by comparison.

We can estimate the observational fluctuation by fitting a smoothed curve to the observed data to estimate a nonlinear trend, and noting the difference between the 1990 value and the smoothed value. Doing so indicates that in 1990, GISS and HadCRUT4 are both about 0.12 deg.C hotter than the existing trend, while NCDC is about 0.10 deg.C hotter.
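The difference between the two alignment choices can be shown in a few lines; everything below is synthetic, with a smooth trend standing in for the projections:

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1980, 2013)
trend = 0.017 * (years - 1990)                 # smooth "projection"
obs = trend + rng.normal(0, 0.1, years.size)   # observations with yearly noise

i1990 = int(np.where(years == 1990)[0][0])

# Single-year baseline: subtract the (noisy) observed 1990 value.
single_year = obs - obs[i1990]
# Trend baseline: subtract the trend's 1990 value instead.
trend_based = obs - trend[i1990]

# The single-year choice carries 1990's fluctuation into every other year.
offset = obs[i1990] - trend[i1990]
print(f"1990 fluctuation carried into every year: {offset:+.3f} C")
```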

Fortunately, the draft version of the AR5 report gives the actual data used to plot the projections for FAR, SAR, and TAR (but not for AR4). So, we can make our own version of the comparison. Here’s the comparison as presently done in the report, with all observed series aligned to projections in 1990:


What should be done is to offset the observations so that the hotter-than-average 1990 really is hotter than average. When I offset the observations by 0.1 deg.C, we get a more realistic comparison of observations to projections:


It turns out that observed global temperature has gone “right down the middle” of the IPCC projections. But, fake skeptics want you to believe otherwise. That’s what fake skeptics do.

There’s another way we can compare projections to observations from 1990 to the present; we can compute the trend estimated by linear regression. When we do that, we get this:


Once again, observed global temperature has gone “right down the middle” of the IPCC projections.

Yet fake skeptics like David Whitehouse are trying their best to spin the AR5 report (which isn’t even published yet) in order to discredit global warming science. David Whitehouse also chose to paint a picture of global temperature which is as fake as can be.

That’s what fake skeptics do.


81 responses to “Fake skeptic draws fake picture of Global Temperature”

  1. David B. Benson

    The term ‘fake skeptic’ is quite kind for such behavior.

  2. Well Tamino, this in my humble opinion has to be one of your best posts yet, and that takes some doing.

    Thank you for this. I’m sure you have seen that the serial data deleter (aka Pat Michaels) has similarly been lying about the leaked AR5 SOD.

  3. The GCMs don’t actually produce a temperature? They only produce a relative temperature, compared to the starting date?
    I had no idea that was the case. I find it alarming that they can’t produce an absolute temperature prediction, but are limited in accuracy by where we manually place their starting value.
    Can someone help me to understand this better?

    • Timothy (likes zebras)

      You can understand this better by considering a perfect model.

      There are reasons why a perfect model will not exactly match the observational record. One of these is the phenomenon of chaos – impossible-to-observe differences in the starting state can lead to very different weather states some way down the line.

      This chaotic weather creates year-to-year changes in the global temperature that are hard to predict, but over a long enough period (say 30 years or so), will average out to nothing.

      The model simulations are all started from the mid-19th century, so the chances that they will time the El-Nino events of the late 20th century correctly are low. They won’t match the noise in the year-to-year changes in the global temperature.

      Therefore, to compare the trend in the models with the trend in reality, it helps a lot if you don’t allow this noise in the starting year to affect your comparison.

      Looking again at the graph, I think what has happened is that the person who created it wanted to show how the models correctly captured the dip in temperatures due to Pinatubo, and that was why they aligned the models with observations just before then. It’s a bit of a messy comparison, because some of the IPCC projections were done when Pinatubo was in the future, so was not known about, and some when it was in the past.

    • Relax, the GCMs do produce an absolute temperature. The issue is around the correct way to compare that temperature with observations, given that the observations (and the models) include natural year-to-year variation. That’s why it doesn’t make sense to compare the absolute temperature from a model with the observed absolute temperature for that particular year. Since it is the change in temperature that we are interested in, using temperature anomalies (which are just the absolute temperatures relative to some baseline) is the correct thing to do.

      • I don’t follow, so please bear with me:
        Shouldn’t the absolute temperature matter a whole lot, since the earth behaves very differently at different temperatures? (At higher temperatures, it will radiate more heat and have a more pronounced GHE from water vapor, to name just two.)
        Wouldn’t manually moving the predicted range of temperatures up or down be a pretty big deviation from what the models predict? Did Whitehouse pull a fast one by giving a very narrow early margin of error, or was that part of the actual model’s results?
        The GCMs seem to be programmed with all the current observations at the time of the model runs, so a fairly specific starting point would seem to be automatic. I get that the temperature is chaotic: no matter how well you try to capture the current readings, you will never be able to accurately describe the present conditions, and substantial year-to-year variations are expected. But isn’t that included in what the uncertainty range of the model forecast is showing? Isn’t that why we should also compare the model results to a smoothed temperature graph? Wouldn’t that cut down on the noise and make for an accurate comparison?
        If Foster and Rahmstorf are correct (I just read some of their paper), then wouldn’t the prior climate models be wrong, but only because we needed to know more to program them better, and once we do that we can get a more accurate result? (I want the models to get more accurate, so if Foster and Rahmstorf are correct, the GCMs just need to include their better modeling data and feedbacks.)

        [Response: I’m no expert on the computer models, but as far as I know …

        The models are not initialized with observed data up to the present (meaning when the model is run). Instead, they’re programmed with the physics, then go through a “spin-up” phase until climate stabilizes (with stable forcings, say pre-industrial values). *Then* the forcings are changed according to observations (of the forcings, not the climate) and the response is recorded. That’s why comparing model-estimated data (including temperature) to known observations (for the entire 20th century, say) is a valid test of their usefulness. It’s not as good as testing predictions against future observations of course, but it does indicate whether or not they give the correct response to changes in climate forcing.

        And that’s the real function of climate models: to estimate the response of the climate system to changes in climate forcing.

        The only model I know of which uses “up-to-date” observations is the “DePreSys” model, which uses ocean temperature data to improve decadal predictions.

        But as I say, I’m no expert on the computer models of climate.]

      • Jack, my previous answer was a bit vague, so let me try to clarify.

        As Tamino says, the models do not include observations up to the present – they are started with average conditions at some time long past (1880 is commonly used). They then run using the observed forcings (e.g. solar output, GHG concentrations, volcanic activity) and produce a view of how the climate evolves over time.

        If you compare with the observations, e.g. surface temperatures, you’ll find that no one model reproduces the temps exactly, because of the short-term randomness of the climate and limitations of the models, but the envelope of the models’ combined output should (and does) contain the observations. Also, any single model should track observations fairly well, or it would be considered a poor model.

        What this post addresses is the single question of how well the models reproduce observed temperature trends post-1990. Since no individual model will have produced the correct temperature at 1990, and 1990 itself will not lie exactly on the trend, you have to align the models with the expected (i.e. on-the-trend) value to start the comparison. This is the right thing to do in order to answer the (very specific) question about post-1990 trends. If we were trying to answer a different question, we would look at the data in a different way.

        If you want to look at how well the models are doing in general, head over to realclimate.org and look for their annual update of model-data comparisons. (Sorry Tamino, didn’t know how to make a link for this: http://www.realclimate.org/index.php/archives/2012/02/2011-updates-to-model-data-comparisons/) There you’ll see a plot of the model outcomes vs. the observations. Hopefully that’ll set your mind at ease.

      • 1) I think I understand the GCMs much better, thank you for clarifying and helping me.
        2) I’m not sure I have explained my discomfort very well, but I think I can better codify it as I am better informed:
        While we generally look at global temperatures in terms of the anomaly, they are all measured in actual degrees (no temperature gauge has a scale of ‘degrees anomaly’). To make the temperature records more meaningful, we normalize them and report them in terms of the anomaly from a defined point. It wouldn’t help anyone to see consistently divergent temperature readings; it just clutters the graph.
        Similarly, the GCMs have to have absolute temperatures programmed in them (long-wave radiation is a function of degrees Kelvin, not degrees anomaly), so they must deal in specific and absolute temperatures (using absolute in contrast to relative).
        While normalizing a trend is common practice in climate science, it still needs to be done with trepidation. If Tamino changed the temperature anomaly to have it rise 0.1 C (same effect as lowering the model temperature 0.1 C), I’m sure many people would have screamed bloody hell (“You are adjusting the temperature to make the earth match the model”). Moving the model temperature has fewer automatic responses, so is safer.
        However, when the models were first published, they were normalized to a specific point and time on the anomaly. To me, this seems to be an important distinction, as it sets the ground rules for future comparison and it is also a point where the absolute temperatures in the model correlate to a real-world reference. If Whitehouse moved the model to normalize it to a new point, it would seem to me that he’s breaking the model. If Tamino normalized it to a different point, it would seem he/she (sorry, I am not sure of the correct pronoun for Tamino) is breaking the model. Adjusting it in 1990 will also move every prediction of the model prior to 1990 as well. Given that it is partly gauged on how well it represents data prior to 1990, this could be disastrous to the model’s accuracy. At the very least, showing the effect would shut me up. :)
        So, at the end of all that, I guess my main point is that it seems to me that it’s very important to show the model in its original frame of reference and judge it from there. Any normalizations after that have really wide ripples. However, I don’t know enough to graph that myself; I had enough trouble taking the GISS/HadCRUT3/RSS/UAH temperatures and graphing them. I tried to find it in the IPCC AR4, but didn’t (gave up after 20 minutes; my kids would be ashamed).
        Sorry for the long post and I hope I explained myself better.

        [Response: I think you greatly overestimate the necessity of getting the absolute temperature right — in both models and real-world observations.

        A hypothetical case: imagine building a model of sea level rise. There are a lot of factors that can change sea level, including thermal expansion of seawater, melting of landfast ice, the hydrological cycle transferring water from ocean to land, the biosphere, water storage by human technology, etc. So we build computer models of the total volume of water in the oceans. Then we find that they all agree on how sea level changes, giving outstanding predictions over long periods of time.

        But we also find out that *we don’t know* the actual average absolute depth of the ocean that precisely. Our models don’t agree with each other, and the observations we have aren’t that good either. While we’re very good at measuring *changes* in the depth of the ocean with an accuracy of a millimeter or so, we really don’t know how deep it is on average, not even within 100 meters. But — that doesn’t negate our observational knowledge of how it’s *changing* nor does it invalidate the ability of our computer models to explain its changes.]

  4. Thanks again for another top article, Tamino.

    One of the most common themes of deniers over the past couple of months has been this ‘it hasn’t warmed in 16 years’ nonsense. Maybe to try to distract from the fact that 2012 is the hottest year ever recorded in the USA. Or that the last time there was a month below the twentieth-century average global temperature was more than 27 years ago. Don’t know what they’ll write about when the next La Nina hits.

    [Response: I think you mean, when the next el Nino hits.]

  5. The leaked draft of IPCC Chapter 9 (Evaluation of Climate Models) shows climate model runs that show the loss of Arctic sea ice. The draft says

    There is very high confidence that CMIP5 models realistically simulate the annual cycle of Arctic sea-ice extent, and there is high confidence that they realistically simulate the trend in Arctic sea-ice extent over the past decades.

    But, looking at the real world data they show from the NSIDC, it seems as if they got it wrong.

    A sea-ice-free Arctic (in September), as predicted by their models, seems to be mid-century. The NSIDC data suggest it will happen in the next few summers – especially when the sea-ice figures from September 2012 are added (see here).

    Tamino can you do some statistical magic on this to see if my eyeballs deceive me?

    [Response: I looked at that question here.]

    • Tamino. Thanks for your reply.

      I had seen your excellent post. The question I meant to ask was “Are the results of the CMIP5 models shown in the draft of Chapter 9 to be regarded as credible?”. They seem to lag the real world considerably.

      Since so many governments and international bodies take the IPCC reports as the best science possible, this gives the world a big problem.

      [Response: Just my opinion — the models don’t yet capture the rapidity of Arctic sea ice loss. As for the IPCC AR5 report, let’s all try to remember that what has been “leaked” is a draft, the report isn’t finished yet and certainly isn’t published.]

  6. Wow … devastating exposure of the tricks climate deniers use, they bend themselves in a pretzel to come to these fake conclusions. I have to tell you, sometimes I’m in awe at the lucidity and ease with which you dismantle these fake sceptics. While I’m in awe, David Whitehouse is probably in shock; I mean, I’d be embarrassed. I’m embarrassed for him!

    Keep up the good work :)

  7. two comments:

    first: you arbitrarily offset by 0.1 C since you don’t have the simulation data to just use, e.g., 1989 as the start date?

    second: I guess I can anticipate most replies to the following. Nevertheless, since I think the information in this post (and also in the horse-poo post) is valuable, I would like to propose disseminating it in less alienating language. That is, a more widespread audience may get the information if it’s formulated in a way that encourages more people to read it, especially those “doubting” but interested in engaging (they exist, really; not sure how many, but they do exist). That is, while I may (or may not) agree with the “fake” terminology, it certainly puts a lot of people off, in the same way a WUWT post puts me off if it includes lots of “alarmism” or lots of innuendo about “scientific corruption”.

  8. reasonablemadness


    I agree that taking a single-year baseline is not a good idea when starting to draw the projected temperature rise. In the long term it doesn’t matter. The drawing should obviously start on a trend line over a longer time period, e.g. 15 years (or the average over such a time period centered on the start year).

    But your graphic has, IMO, also a flaw: namely that the projections start below the start year, and a reader who is not aware that 1990 was an above-average year can be tricked into thinking that the projections were offset just to align better with observations, without any other good reason.

    To make that visually clearer, I would suggest that you include, e.g., the preceding 10 years in the graphic. There is no need for the graphic to start at the same year the projected temperatures start.

    By doing this, the graphic gets much clearer and it is visually comprehensible why the projections start at that point, and not at some arbitrary point.

    I would also urge someone who has registered with the IPCC as a reviewer to make a comment explaining why this graphic can be misleading and how to change it to be clearer, especially for readers among the general public.

  9. I do hope you bring your findings to the attention of the IPCC so that their graph (#1 in this post), as shown in the draft AR5, can be corrected in the actual AR5.

    Of course, then the fake sceptics will suggest that it’s an attempt to ‘hide the decline’, or some such, and tout the change as proof of a conspiracy.

  10. Another, similar, argument to counter the “16 years of warming and 16 years of temperature standstill” nonsense, can be constructed using the Woodfortrees temperature index (an average of two surface and two satellite lower troposphere temperature series, HADCRUT3, GISTEMP, UAH, RSS) for the last 32 years:

    The “no/little warming for the last 16 years” boils down to this:
    in which warming for the last 16 years appears less than that for the whole 32 years.
    But then, if you compare the trend for the first 16 years, up to 1996, with that for the whole 32 years, you get this:
    in which warming for the first 16 years also appears less than that for the whole 32 years.
    To the unwary, the first graph suggests warming has decreased in the last 16 years compared with the whole 32 years, and the second that it has increased. Both cannot be true.

    • Wouldn’t that be why we all joke that “figures don’t lie, but liars figure”?
      The casual statistician in me interprets from your description that most of the warming took place between years 11 and 21 in that time period, so a partial run loses much of it to the smoothing process that puts less weight on the extremes.
      It wouldn’t make either one ‘false’, but it helps to describe why we should look for the ways the statistics can be described to mislead us. In this example, healthy skepticism is exactly that, healthy.

      • Philippe Chantreau

        Sure Jack. What’s not healthy is to start looking at temps in 1998 to try to deny the existence of a trend that the time period considered can’t reveal because it’s too short, and because the starting year is an outlier. Which is what you tried to do at SkS. I asked you there what happens if you start your computing in 1996, or 1999, or pretty much any year other than that of the giant El-Nino. Looking at the comments there again today, I still don’t see an answer. Are you going to do it? That would also be a skeptical thing to do.

  11. Why is it that deniers are allowed to pollute public debate with lie after lie, while those who recognise the gravity of the situation shown by the science are constrained by the need to talk only of the facts? It is a very unlevel playing field the deniers are forcing us to play on.

  12. Thank you Tamino. I agree with everything said here in your post. I’m not really a big fan of “scenario graphs”. They tend to be “empty”.

    One point though: the second “fixed” graph (well done) needs a nonlinear trend for the noisy observations. Nonlinear because its purpose is not extrapolation, but clarity of the warming signal.

    Sometimes, being a non- fake skeptic simply doesn’t cut it.

    • The second-to-last figure, to be precise. It wouldn’t be very difficult to fit a nonlinear curve (Tamino is an expert at this) to the average of the three observational data sets. The problem I suppose is this: if you did fit the curve, it may give the impression that the global warming signal is “slowing down”/moving off target. Whether that happens, I don’t know; I think the figure is worthless and empty for this reason. But if you’re going to bother with it at all, it should be done.

      Like most people, I imagine, I look at the data and draw a black curve through it and… it’s moving off course. I agree that this is probably a false interpretation; if you don’t like it then get rid of it or change it.

      1) start a decade earlier (this also would help fix the problem Tamino found).

      2) Add a 30 + year linear trend (start at 1980) to show warming is continuing.

      3) Add a curve to the observations to show the rapid warming that occurred as well as the “slow down”. This black curve would be going “right down the middle” of the linear trend.

  13. The fake skeptics/deniers are running out of straws to grasp. Pathetic doesn’t nearly cover it. The silliness of their arguments is increasing exponentially with their desperation as they lose their influence over the public and the MSM.

  14. The draft IPCC figure is a bit odd – all previous such graphs I have seen in IPCC reports have been anomalies with respect to a common reference period – eg 1961-1990 – to avoid such an inaccuracy.

    One expects that this will be corrected during the review process.

  15. I hope the IPCC will fix it before AR5 is published

  16. I feel we are in some kind of bizarre court case with very tricky lawyers on the defence. Imagine being beaten up: the medics treat the serious injuries and log them, and a case is made; then the defence lawyers – the ‘sceptics’ – make their case. Instead of looking at the final result, they look at the actual attack over time: ‘there are whole periods of time when boots and fists were not landing on the victim’. ‘If the jury look at the CCTV it is evident that towards the end of the attack fewer punches were landed on the victim’. ‘Quite clearly the assault became less life-threatening and became a little roughing up’.

  17. I would never have thought to use the extremes in the annual data as a measure of the actual change in temperature during a time segment. This is certainly a creative and deceptive approach used by the fake skeptics, especially if the same method is not used to characterize the recent warming. Thanks for pointing out this deception!

  18. toto@club-med.so

    Slightly OT, but… Did anybody find out what the grey area is supposed to represent in the IPCC graph?

    It seems to be related to corrections for natural fluctuations and observational uncertainty, but applied to models… But I still don’t get exactly how it is computed.

  19. Horatio Algeranon

    People who live in white houses should not throw paint.

  20. In general I agree with your analysis of this chart, which looks misleading to me too. However, flawed though it is, there is no denying that it is not as warm today as the IPCC predicted in AR4 (if the IPCC models are getting “better” over time then the only one that matters is the latest, that is AR4).

    Looking at the trend of the three observed data sets and the trend implied by TAR (as shown in your last chart) it looks like the observed trend is significantly shallower than that in TAR. All three centre points are firmly below the centre of the TAR trend and the upper bounds of the CRUT and NCDC error bars are barely above the TAR centre point. So the observed trend is plainly not “down the middle” of the most recent IPCC prediction that you are able to chart.

    [Response: This is absurd. Your “spin” is about as misleading as one could imagine.]

    Looking at the original chart it appears (by eyeball) that the trend in AR4 implies a higher central point and higher limit to the bottom error bar than in TAR. So it is likely that when you are able to compare the observed trend vs the AR4 trend that the AR4 trend will appear even higher relative to the observed trend than does TAR.

    [Response: Your “by eyeball” analysis doesn’t impress. And the location of the “central point” has nothing to do with the *trend* — move the whole thing up or down as much as you want, the trend is unchanged.]

    On any measure you care to adopt, the observed trend in the last 22 years or so is clearly less steep than the IPCC predicted in TAR or AR4 and you would do better to promote your interesting analysis of why that might be (or to concentrate on longer term trends) rather than spend so much time trying to deny the bleeding obvious.

    [Response: You would do better to remain silent and be thought a fool, than to open your mouth and remove all doubt.]

    I have not read the draft AR5 report. Is there any mention of their revised predicted trend?

  21. so, if I read your last figure correctly and agree that ‘actual’ observations are ‘right down the middle’ of projections , we can expect between 1 & 1.5 deg C warming by 2112 ?

    [Response: Some people object to the fact that I moderate comments so severely. I’d say that your comment is a perfect illustration of why I do so, and why it’s a good idea. When nonsense like this is excluded, the quality of discussion can rise above idiocy.]

  22. Jack wrote: “devastating exposure of the tricks climate deniers use, they bend themselves in a pretzel to come to these fake conclusions”

    I’m sorry, but that is not correct. What fake skeptics do is bend reality into a pretzel to match the fake conclusion that they started out with.

    • no I’m sorry but I don’t think that’s right either. What fake skeptics do is bend reality into a pretzel then use a hammer to mangle it into the square hole conclusion they started out with… “see! it fits perfectly”

  23. Horatio Algeranon

    “My favorite things”
    — by Horatio Algeranon

    Curry and Roses and jumping white horses
    Painting Whitehouses, divining with dowsers
    Skeptics with foolishness hung with their strings
    These are a few of my favorite things

    “Blog Science” phonies and short trend balonies
    Ding dongs and ding bats and blogging with Tony’s
    Theories that fly with the moon on their wings
    These are a few of my favorite things

    Graphs from fake skeptics with BS statistics
    Skep-fakes that stray like erratic ballistics
    Sea-ice recoveries that melt into springs
    These are a few of my favorite things

    When the blog bites
    When “tee hee” stings
    When I’m feeling sad
    I simply remember my favorite things
    And then I don’t feel so bad

  24. Schrodinger's Cat

    It is still obvious that the models are wrong.

  25. and so where has the warming gone
    a statistical never-ending song
    hiding the cold
    aaaahrghg I grow old.

  26. SC: It is still obvious that the models are wrong.

    BPL: Wrong about what, and to what extent?

  27. I wonder if Whitehouse had this piece in mind when he wrote this:

    “You can be a professor of anything these days but there will be someone out there in cyberspace who is smarter, better at statistics and computing, and has more time to focus on key problems. Someone who will ask for the raw data and mercilessly pick away at it, pointing out mistakes that before would have gone unnoticed. This might be uncomfortable for some, but it is undoubtedly good for science that cares nothing for personal feelings. The baloney detection kit is in ten thousand parts and is on the internet.”


    The irony is priceless.

  28. Here is a decadal look at GISS temperature anomalies (http://imageshack.us/photo/my-images/845/gisslotiseptember2012.jpg/ ) – different approach, same conclusion

  29. I’ve done my own work on the apparent disparity between the expected polar-amplification warming of the Arctic and some strong cooling over the continents. The more frequent presence of cyclones hovering over an increasingly ice-free Arctic Ocean automatically means a greater dominance of anticyclones hanging about the sub-Arctic continents to the south during the cold season. The effect is greatest during winter, when significant cooling over relatively large areas slows the linear upward progression of global temperature a bit.

    To further contradict yet another WUWT fart: any person claiming that there has been a lull in warming is right away an amateur in error, a fake skeptic indeed. Polar sea ice is largely a better metric than the temperature record, and the parallel “it’s cooling” universe guys don’t have anything to back up their crazy ideas.

  30. From the WUWT link…”The 16-year flatness since mankind has been the prime climatic influence has been the cause of much discussion in the peer-reviewed literature”

    Say what?? Did he just say mankind is the prime climatic influence?…. on the WUWT blog??? Has the apocalypse actually started?

  31. Completely off topic, but I have long thought the recent climate in the Southwest of Australia was one of the clearest examples of ‘warming’.
    We’ve had a spike in the number of shark attacks in the last couple of years, and the fisheries (which were reasonably well managed) are now in decline… Both seem to be due to a marine ‘heatwave’.


    Our State Govt typically wants nothing to do with talking about AGW; they’d prefer to ignore it, but it seems this is so clear they can’t ignore it.

  32. Could you graph the air temperature data from the 20 years of the 14 tidal stations around Australia and ask: are they an accurate assessment of the rise in world temperatures? The data only started 20 years ago, when the Australian government set up these stations scattered around the continent.

    • ….what

    • Trevor the BoM site has data that extend way back to around Federation (in some places earlier).

      You seriously think Paul Keating funded the BoM in 1992?? No.

    • I think Trevor is talking about data from the meteorological instruments associated with the ABSLMP network of tide gauges that was installed around the Australian coast in the early 1990s.
      You could graph this data – it is all freely available. But you would not expect it to give an accurate assessment of global temperature changes over that time because of the regional variability.
      As another commenter noted there are many longer temperature records available from the Bureau of Meteorology.

  33. Trevor, could you at least try to remain relevant.

  34. OK, I just know that some of you will enjoy the pure wrong-headedness of the comments left by “Gordon Robertson.” Needless to say, he’s been corrected about a billion times–well, lots more than a dozen, anyway–but shows a learning curve that strongly resembles a brick wall.


    Merry Christmas–sardonic laughter is better than none at all, right?

    • “Gordon Robertson”?

      If I remember correctly that fellow was posting at Marohasy’s swamp a few years ago, claiming (amongst other fatuously incorrect things) that AIDS was not caused by HIV, which according to him was a harmless virus, if it existed at all.

      I grew a bit testy with him given that during my 15 years in immunology I spent three working with HIV and HIV patients. He still thought that he knew better.

      Learning curve? With this fellow it has a learning upper asymptote, located at y = 0. There is no lower bound.

      • Yep, that’s him. He also believes that he knows better than a couple of generations of quantum physicists and biologists. And I think I’m forgetting a couple more of his pet denialisms, if that’s a word.

        I really think that he’s done a lot to convince the undecided that AGW is real–inadvertently, of course, unless he’s an all-time champion Poe.

  35. And the response to this devastating critique at WUWT? Why none, of course, save an ad hominem sneer from Watts about Tamino using a ‘fake’ identity.

    This being the same Anthony Watts who allows his moderator ‘dbs’ to log on as a commenter under two separate false names to peddle the site party line.

    Watts and his fanboys never did have much credibility. Now he has none.

    • Is that Smokey / D Boehm that you are talking about? I think it is even worse than that: I have a really hard time getting posts at all critical of what D Boehm says through the moderation. I don’t know if he himself is killing them or other mods are (or they are just vanishing into the spam filters), but it is pretty hypocritical given how Smokey / D Boehm talks about how there is no censoring like over at realclimate. Apparently, these folks are simply not cursed with any self-awareness whatsoever.

    • Whaddaya mean, ‘now’?

  36. That is a priceless rebuttal of fake science.

  37. Yes – we know that Smokey and Dave B Stealey aka moderator ‘dbs’ are one and the same, and apparently, once this came to light we were not meant to notice that ‘D Boehm’ just continued with the exact same asinine Smokey arguments and the same database of un-sourced and unlabelled ‘killer’ charts.

    Smokey/Boehm will always get more than a fair crack of the moderator whip; your posts will be edited, snipped for no good reason, or held up while they formulate their answer. All while the site policy states that ‘Internet phantoms who have cryptic handles, no name, and no real email address get no respect here. If you think your opinion or idea is important, elevate your status by being open and honest. People that use their real name get more respect than phantoms with handles. I encourage open discussion.’ and Guest authors and moderators are expected to adhere to this policy

    You may have noticed that the moderators have now assumed anonymity, signing themselves ‘mod’ instead of using their initials. …

    None of the above has of course, prevented Watts from continuing with his usual attacks on ‘anonymous cowards’.

    I admire your perseverance over there, Joel. As I am now persona non grata at the ‘site that doesn’t censor’ for encouraging them to come clean about this pathetic little subterfuge and regain some respect, I wish you and our host a happy Christmas. Keep on fighting the good fight in 2013…

  38. A nit to pick here. The term “statistically significant” is misleading outside the ambit of those in on the joke, because results below 2 sigma are not necessarily to be ignored, nor are they ignored outside of regulatory panels and journals. Something else is needed, perhaps “statistically almost certain” or “virtually certain”, which better reflect natural language. Everyone dumps on the IPCC language, but it is by far a better and more refined gauge than “statistically significant”.

    • Eli picks a good nit, and one that has always irked me as I have observed elsewhere.

      My response is to take the numeric option in frequentist contexts. In Phil Jones’ example, as a f’rinstance, I reckon he should have said there was a 94% chance that the warming in that 15-year interval was not a result of random fluctuation. “Statistically significant” warming requires a 95% or greater chance that it is not random, but as that is an arbitrary threshold, the actual p-value should be permitted to stand on its own and provide the context.

      It would have been nigh on impossible for denialists to make a serious case to differentiate between a 94% and a 95% chance that the warming was not random.

      My own inclination is to report p-values wherever practicable, and reserve the traditional “significant”, “highly significant” et cetera appellations (and their associated asterisks/significance lettering) for graphs, with suitable explanation of the represented p-values in the caption immediately below.

    • Why not use the well-understood language of gambling?

      As I understand it “statistically significant” means 1/19 odds (“nineteen to one on”).

      Phil Jones could have said “OK. It’s not 1/19 only 1/18. Policy makers place your bet.”

      P.S. I had a significant win on the Arctic sea ice minimum this year. (Well significant for me!).

      P.P.S. I am also making a modest yearly bet on a magnitude 9 earthquake – more likely because of the re-weighting of the ice/water load on the Earth. Winnings to go to the survivors!
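    • The point about reporting the p-value itself can be made concrete. A minimal sketch (synthetic data with an assumed small trend over a deliberately short window, not Phil Jones’s actual series), again with scipy’s linregress:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
years = np.arange(1995, 2010)  # a short 15-year window
# synthetic anomalies: a modest warming trend buried in noise
temps = 0.01 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

fit = linregress(years, temps)
print(f"slope   = {fit.slope:+.4f} deg C/yr")
print(f"p-value = {fit.pvalue:.3f}")
# Reporting the p-value itself conveys the strength of evidence;
# the 0.05 cutoff for "significant" is a convention, not physics.
```

Whether the printed p-value lands just below or just above 0.05 flips the “statistically significant” label while barely changing the evidence, which is exactly the nit being picked.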

  39. A) this is saying that the IPCC, for all their brilliance, failed to get the simplest of things right, the start-point for the comparison. Would Hansen or Schmidt agree?

    B) this analysis that the startpoint should be a lower temperature at 1990, could well be correct (you’d need to work out a prior history to know where to put the “average” for 1990, which is not discussed by either party). But, if so, then this means that 2012 is now in the “low” portion of natural variability.

    By 2015 THE HIGHER PORTIONS WILL HAVE TO HAVE COME INTO PLAY, so that the 2015 average is around 0.5C. Before 2015, some high must have occurred at about 0.6C (for the average to be at 0.5C).

    C) if the skeptics are correct, then the current 2012 temps are about the average value of natural variability. The 2015 average would be about 0.25C, with a high of perhaps 0.35C.

    D) having considered points A through C, this blog analysis has set up a falsifiable argument: within 24 months there will be a minimum 0.25C difference between model and observation depending on warmist or skeptic “science”.

    Watts or Tamino? Someone will go down.

    Two years is a very short period of time.

    [Response: Check the multivariate el Nino index and the solar irradiance data. 2012 *is* a year of low natural variability (and don’t forget the lag between el Nino and temperature).

    Eventually those factors will recover to more “average” values, and even foray into the “high” region. But your arbitrary time limit of 2 years is nothing but a straw man. Get off it. You should pay attention to your own statement that “Two years is a very short period of time.” As has been emphasized again and again, global warming is a decades-to-century effect, a 24-month timeline is just your lame excuse to justify uncertainty and doubt.]

    • Doug,
      Thank you for continuing to demonstrate the lack of insight we’ve come to expect from the denialist idjits. I am just curious–why is it so difficult for you folks to actually try to understand the system you are commenting on? It is a system with a small roughly linear trend and a larger source or sources of noise. Given that, this is simply the sort of behavior you expect.

      You speak as if the temperature trend were the only evidence for the consensus model. You utterly ignore the melting ice, the changing seasons, the extreme weather, etc., all of which are either in line with or more severe than the predictions of climate models.

      You speak as if a failure to smash all previous records will “falsify” the consensus model and the role of CO2 in it. Nope. It will show something is missing in the model. It will provide an opportunity to improve the model. If you want to overturn the model, it is very simple. Propose a better model. “Anything but CO2” is not a model. It’s a cop out.

      So, no matter what happens in the next 24 months, Tamino will still be a good man, an excellent statistician and a great stats teacher. Anthony Watts will still be an idiot.

      Happy New Year, Schmuck.

  40. Are there any real skeptics as opposed to fake denialist ones? As a relative newcomer to the field it would be nice to get both sides of the argument. Who can be recommended as fairly arguing for the other side in a well informed way, even if we don’t agree with them?

    [Response: Maybe. I don’t know who they are.

    Question for you: are there any real skeptics of the notion that smoking cigarettes causes lung cancer? Who can be recommended as fairly arguing for the other side in a well informed way?]

    • Maria: “Who can be recommended as fairly arguing for the other side in a well informed way, even if we don’t agree with them?”

      Maria, that is a much tougher question than it sounds like at first glance. The problem is that there is a vast gap between what “skeptic” scientists say in their scientific publications and in their public pronouncements (e.g. op eds in the Wall Street Urinal, testimony before Congress, etc.). The former differ little from the mainstream–they admit we are warming the planet, but perhaps dispute the degree. The latter attack the mainstream scientists bitterly and regurgitate standard denialist talking points. That right there ought to tell you something. The mainstream scientists are quite consistent in what they say–their talking points are supported by their research.

      If I had to recommend one “skeptic”, it would probably be Roy Spencer. Dick Lindzen has demonstrated that he is utterly disingenuous. John Christy seems to be out in lala land. Aunt Judy…well, she’s just pathetic.

      I’m afraid, Maria, that if you look at this in an honest fashion, you will have to admit very soon that there simply aren’t two sides to this argument in any scientific sense.

      Now as to what to do about the problem–the debate there is a whole helluva lot more active. That’s politics. The science is beyond dispute.

  41. What is happening here with all this repetition of the phrase “fake skeptic”?

    If someone disagrees with your viewpoint they are ‘skeptical’ of it, and that does not change, whichever party is right or wrong.

    IMHO its repeated use in no way enhances the debate.

    [Response: IMHO the term “fake skeptic” is so correct, and hits the mark so precisely, that fake skeptics cry “foul” hoping they can draw attention away from the truth.]

    • They’re called ‘fake skeptics/sceptics’, xmarkwe, because theirs is a one-sided scepticism that rejects out of hand any evidence that doesn’t support their pre-conceived point of view. Conversely, when ‘evidence’—real or manufactured—appears to support their point of view, they swallow it hook, line and sinker with no scepticism. Genuine sceptics—which should include everyone who calls themselves a scientist—question the validity of every piece of evidence, whether it supports their stance, or not.

      The consensus in support of climate change is based on the accumulation of numerous pieces of evidence that make up an unfinished jigsaw. Although there are still many pieces missing, blurred or incomplete (the ‘uncertainty’), enough of a picture has already been revealed to create a convincingly coherent ‘big picture’. Finding a piece of the jigsaw that doesn’t seem to fit the ‘big picture’ means the piece is most likely faulty in some way and is put to one side pending future developments. Theoretically, if enough faulty pieces are found to change the overall picture into a different picture, then so be it — the scientific consensus will change to accommodate the accumulating new evidence. Fake sceptics however will jump on a ‘faulty’ piece of evidence and crow about it being proof the ‘big picture’ is wrong. Even if in the very unlikely event that time proves them right, one apparently contradictory piece of evidence is a long way from adding up to the alternative ‘big picture’ that the fake sceptics are so desperate to find.

      I hope that helps.

  42. Philippe Chantreau

    xmarkwe you are profoundly mistaken. Not all is relative. When physical reality is involved, a viewpoint does not have merit just because it exists. Some are skeptical that the Earth is round and their viewpoint disagrees with mine on that matter. They are not skeptic in the scientific, or just even logical, sense of the word. They are not even fake skeptics. They are fruitcakes. Their viewpoint is of no interest whatsoever. It does not withstand the slightest scrutiny. It’s not even worth bothering.

    Some people argue that it hasn’t been warming in the past 16 years (an example among a myriad other arguments), which is a meaningless, deceitful argument, for well known reasons that can be discovered in a few minutes of research. They are fake skeptics of global warming. They do not apply true intellectual enquiry methods that yield conclusions that are independent of their preferences. They go at immense length to reach conclusions that suit their preferences, sacrificing any true skepticism in the process. Yet they call themselves skeptics. Fake skeptics happen to be the most appropriate, most accurate way to describe them, and it takes only two words. Their viewpoint is of no interest either.

    The fact that their viewpoint exists does not lend it any validity. It really is that simple.

  43. Horatio Algeranon

    Climate Change “Skeptic” Sighting
    – by Horatio Algeranon

    I sighted a Climate-Change “Skeptic,”
    A breed as rare as Sasquatch,
    I saw him at the edge of the woods,
    In a nearby pumpkin patch.

    I knew he was a “Skeptic,”
    The moment I glimpsed his shirt,
    Emblazoned with Al Gore’s likeness,
    And two words: “Red Alert!”

    I heard him yell across the patch,
    “Al Gore is a big fat liar”,
    “The nanny government should leave us alone”,
    “All Hail to McIntyre”.

    This startled the wife and kids.
    As you can well appreciate,
    It even unsettled me a bit,
    An’ I’m not easy to intimidate.

    When I noticed the broken hockey stick,
    The man had been chewing on,
    I decided maybe we ought to leave,
    Make hay, high tail, be gone.

    I added him to my “Life List”,
    When I got home at night,
    Obsessa hockeysticka,
    My friends’ll be jealous, all right.