# Tisdale Fumbles, Pielke Cheers

Bob Tisdale has done it again. The guy who thinks that “eyeballing” the correct lag and scale factor for fitting time series is better than multiple regression now comments on variability in climate models, based not on individual model runs but on the multi-model mean.

Of course Anthony Watts regurgitates. Worse yet, Roger Pielke Sr. not only endorses Tisdale’s “analysis,” he actually suggests “I also urge Bob to submit this analysis to a peer-reviewed research journal so it can be assessed by the entire climate community.”

The heart of Tisdale’s “work” is a claim he emblazons in red on his graphs (six times no less):

The Models Do Not Produce Multidecadal Variations In Sea Surface Temperature Anomalies Comparable To Those Observed

Apparently Tisdale just doesn’t understand that when you average across a large number of models, you wipe out most of their natural variability. If you then look for variability in that multi-model mean you won’t find it. Tisdale really fumbled the ball — again.

Allow me to illustrate.

First let’s look at some artificial data. We’ll start with the global average temperature from GISS and smooth it:

Let’s pretend — just for illustrative purposes — that the smooth curve is the actual (forced) climate and the deviations from it are natural variation. Let’s also pretend that we have a computer model which simulates the climate with near perfection. It gets the forced variability right, and simulates the natural variation correctly as well. We run it 10 times, and it gives us these 10 simulated temperature histories:

Pretty damn good model, right? It should be — we designed it to be correct.

Now let’s take the 10 model runs and average them to compute a multi-model mean:

That is not a good simulation of temperature history. It gets the forced variability right (because all the individual models do) but it gets the natural variability wrong — because even though the individual models simulate this well, when we average many of them the averaging process wipes out most of the natural variability.
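The arithmetic behind this is simple: averaging N independent realizations of noise shrinks the noise variance by a factor of N, while leaving the common forced signal untouched. Here is a minimal Python sketch of the toy experiment; the logistic “forced” curve and the noise level are stand-ins I’ve assumed for illustration, not the actual GISS smooth:

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_years = 10, 130
t = np.arange(n_years)

# Hypothetical smooth "forced" signal (a stand-in, not the real GISS smooth).
forced = 0.8 / (1.0 + np.exp(-(t - 80) / 20.0))

# Each simulated "model run" is the forced signal plus its own
# realization of internal (natural) variability.
runs = forced + 0.1 * rng.standard_normal((n_runs, n_years))

# The multi-model mean keeps the forced signal but averages away the noise.
ensemble_mean = runs.mean(axis=0)

var_single = np.var(runs[0] - forced)        # variability of one run
var_mean = np.var(ensemble_mean - forced)    # variability of the mean

print(var_single / var_mean)  # roughly n_runs, i.e. about 10
```

With 10 runs, the residual variability of the ensemble mean is about one-tenth that of any single run, which is exactly the effect that makes the multi-model mean look unrealistically smooth.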

Tisdale actually looks at changes in 30-year trends and 17-year trends for sea surface temperature data. He compares the result from actual data (HadISST1) to the multi-model mean from IPCC AR4, and concludes that the models fail to reproduce sufficient multidecadal variability. We can do the same with our toy model of temperature history. First let’s compare the 30-year trends from the individual model runs to those of the actual data (from GISS):

Note that not only do the models give good results (within the error limits), they also show about the same amount of natural variability as the actual data. Now let’s compare the actual data 30-year trends to those from the multi-model mean:

Note that the overall evolution is good — since these models are designed to get the forced variability exactly correct — but the natural variability shown by the multi-model mean is much less than that in the actual data. That’s because the process of averaging many models wipes out the natural variability which is shown — correctly — by the individual model runs.
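For readers who want to reproduce this kind of comparison, the “30-year trend” at each point is just a least-squares slope over a sliding 30-year window. Here is a sketch of that calculation on a hypothetical toy series (the window length matches Tisdale’s; the series itself is invented for illustration):

```python
import numpy as np

def moving_trend(y, window=30):
    """Least-squares slope (units per year) over each `window`-year span."""
    t = np.arange(window)
    return np.array([np.polyfit(t, y[i:i + window], 1)[0]
                     for i in range(len(y) - window + 1)])

# Toy series: steady 0.01 deg/yr warming plus year-to-year noise.
rng = np.random.default_rng(1)
y = 0.01 * np.arange(100) + 0.1 * rng.standard_normal(100)

trends = moving_trend(y)
print(trends.mean())  # should recover roughly 0.01 deg/yr
```

Applied to each individual model run, this yields a spread of wiggly trend curves; applied to the multi-model mean, it yields a much smoother curve, for the reason explained above.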

Clearly, Bob Tisdale didn’t get it. Clearly, neither did Roger Pielke Sr.

Tisdale used HadISST1 for estimated sea surface temperature and the AR4 multi-model mean to characterize “the models.” He got this for 30-year trends over time:

Using the same data, I get essentially the same thing (but I won’t bother showing the projection into the next century, just the observed time span):

Indeed it’s true that the multi-model mean shows less variability — on both interannual and multi-decadal time scales — than the observed data. That’s because the process of averaging wipes out the natural variability. Tisdale didn’t get it. Neither did Roger Pielke Sr.

Let’s compare the observed HadISST1 data to some actual model runs (rather than a multi-model average). Here’s the result of 30-year trends for 9 runs of the GISS-ER model, together with the AR4 multi-model mean:

Note that the individual model runs show much more variability than the multi-model mean. In fact they show variability comparable to that shown by the observed data.

There are definitely problems with the models. For one thing, they don’t reproduce the rapid warming of sea surface temperature from 1915 to 1945 as strongly as the observed data indicate. But overall they’re not bad, and the amount of natural variability they show is realistic.

Certainly the models need more work. Certainly they should be examined with a critical eye. Just as clearly, Bob Tisdale is not the right person to do this. Neither is Roger Pielke Sr.

### 45 responses to “Tisdale Fumbles, Pielke Cheers”

1. There’s something about those graphs on the internet that look like the product of Excel and have lots of red writing all over them… they are always nonsense.

2. Martin Smith

When I read your explanation, the mistake is obvious, so I immediately want to ask: did Tisdale and Pielke Sr. not see it, or is there another definition of variability they are using?

• LazyTeenager

Bob seems to be using 2 definitions of variability.
1. This article relates to the derivative of the long term trend. And contrary to what Bob says, my eyes say the match between model and observed is OK except during the early years.
2. I have also seen a Bob article which shows the short term climate variations for both model and observed. Again my eyeball says there is a good match, contrary to Bob’s exposition. Basically I have no idea what he was talking about.

3. Even the variation within three SST datasets (HADSST2, HADISST and HADSST3) is quite significant compared to the deviation of HADISST from the model mean.

I wrote another post here showing the overall pattern of trends for the observed and model average SST datasets.

4. TomG

I’m of the opinion that Tisdale knows exactly what he is doing.
The object of this mishmash is to create doubt.

5. What was the basis for selecting those particular 9 model runs?

The mean of the “Tamino” ensemble is a very good match for the observed trends while the mean of all model runs is quite different.

• Charlie, if I had to guess I would say he used the archived CMIP3 runs.

6. Mark S

Two comments: 1) Watts’ site hasn’t been concerned about the truth or even being skeptical for some time now. They are just concerned with publishing as much bs as possible that supports their memes (in this case the ‘models suck’ meme). 2) The suggestion by Pielke Sr. that Tisdale submit a paper on this topic for peer review is a fantastic idea. There is no way it would be accepted, and it would take Tisdale away from writing more ridiculosities like this, at least for a while.

7. DrTskoul

Well, let him send it to a peer-reviewed journal. If he gets shot down, as he should be, then they will cry “conspiracy”! How do you refute massive hallucinations? These people only see what they believe and nothing else. Tamino, is there a mathematical proof of the reduced variability other than the model runs? Please post it as widely as you can if there is… Thank you for your time spent on such good work.

8. MMM

“There is no way it would be accepted”

Absent being submitted to E&E, Energy & Fuels, Modern Physics A, Journal of American Physicians and Surgeons, or any of those other non-climate journals that publish contrarian droppings.

Of course, if it gets rejected from, say, Climatic Change, the Watts-machine would cry “referee bias!”

9. MapleLeaf

Pielke is in denial. He also abandoned rigorous scientific and statistical analysis a long time ago. No wonder Skeptical Science had a field day with Pielke.

Tamino, I mean… Ooh, Pielke outed you, what a clever man he is. And he thinks this is a big revelation? ;)

Tamino, thank you so much for doing this. Pielke has no idea how to conduct time series analysis properly, nor does he understand statistical significance. This only reinforces that pathetic point.

Pielke is now in the same shameful, disgraced league as Fred Singer. I bet his son is mighty proud.

• Pielke has no idea how to conduct time series analysis properly, nor does he understand statistical significance.

While the easy answer would be to agree with you, MapleLeaf, the perception I have formed (via long observation of his writings and his, um, “correspondence” with Skeptical Science) is that the man knows exactly what he is doing…

• Pielke reminds me of Poe’s Law. I honestly can’t tell if he doesn’t know how to do proper analyses, or if he does know but chooses to do them incorrectly.

It’s especially difficult because he’ll do something ridiculous, like cherrypicking 1998 as the start point for a TLT analysis, then when we point out his error, he’ll eventually admit that it was a poor choice, but then he’ll continue to do the same sort of cherrypicking and similar erroneous analyses. So it’s difficult to tell whether he’s just being dishonest, or whether he doesn’t really understand the errors he’s constantly making.

• Pielke praises a tautology as scientifically significant. So, yes, he appears to know exactly what he’s doing.

• What you have to understand is that WUWT is Pielke’s site, just that he hides behind Watts to maintain some semblance of scientific face.

• MapleLeaf

Can the good rabbit elaborate please, because if true that would be alarming. Is that perhaps why Pielke believes that WUWT and Watts are “devoted to the highest level of scientific robustness”?

Has anyone ever directly asked Pielke about his exact level of involvement with WUWT? Some evidence supporting your claim would help. I suspect the same as you, but I have no hard evidence.

10. MapleLeaf

How sensitive are the results to the SST data used? Nick’s work suggests that even the various Hadley Center analyses differ amongst themselves. How about ERSSTv3?

11. MapleLeaf

Others suggested E&E, might I suggest JSE to Bob Tisdale and Pielke, they might just publish it ;) Just kidding.

If Pielke thinks Bob’s work is so “great” I hope he insists on being co-author and having his name directly associated with this “great” work….Pielke should put his money where his mouth is and submit this to the Journal of Climate, a real journal.

• cynicus

The Dog Astrology journal? Fitting…

• MapleLeaf

“The Dog Astrology journal? Fitting…”
Yes cynicus :)

12. It is really amazing to what lengths people will go. I think this is borderline schizophrenic! There is no easy way to counter these people. SkS’s “Debunking Handbook” has some useful pointers…

13. MMM

I love this logic:

“Why doesn’t Tamino use the real models instead of artificial ones? Because then Tamino would have to show you that the majority of the models do not have multidecadal variations in trend that are similar in timing, frequency, and magnitude of the observation-based SST data. ”

Um. Yeah. Only _some_ of the models match timing AND frequency AND magnitude. Which is actually pretty impressive… I wonder: if you took a given model and ran it once, and then reran that identical model with a set of new initializations, how many model runs would it take to match the original timing, frequency, and magnitude of variability? I’m guessing about as many as the ensemble that Tisdale shows…

14. CM

Tisdale and Pielke agree that it’s Tamino who’s missed the point, and that the point really is obvious, they just don’t quite seem to agree what the obvious point is. According to Pielke Sr, the point is that the models suck at providing “realistic climate projections for impact assessments decades into the future”. Meanwhile, Tisdale says the point is multi-decadal trends. Silly Tamino thought his point was that “The Models Do Not Produce Multidecadal Variations In Sea Surface Temperature Anomalies Comparable To Those Observed…”. But it ought to be blatantly obvious to anybody reading Tisdale’s post that this was not the point this time. Why? Because Tisdale had printed it. In big red letters. On each and every graph. To direct readers’ attention away from it. That’s how much not the point it was. That’s the point. I think. Except that Tisdale is stating this explanation very forcefully, so maybe the point is the opposite one. I think I need to sit down now.

• Arrogance, for promoting such a ruse! Arrogance, since they think such a blatant attempt will survive inquiry. Trying to say that the models cannot capture the multi-decadal variations or the trend because they have not been initialized to do so. Strawman and ruse.

15. cynicus

Tamino, it’s just so sad that you actually had to continue explaining Tisdale’s ~~stupidity~~ error beyond the third paragraph.

I do wonder, Roger Pielke Sr. is supposed to be a climate scientist, right? And he is unable to recognise the really obvious flaw in Tisdale’s argument? No wonder ‘skeptics’ hold climate scientists in such low regard…

16. Tamino says in the main post:

“There are definitely problems with the models. For one thing, they don’t reproduce the rapid warming of sea surface temperature from 1915 to 1945 as strongly as the observed data indicate. But overall they’re not bad, and the amount of natural variability they show is realistic.”

There are always problems with the models and improvements can always be made. But it’s possible that the weaker warming from 1915 to 1945 may not be one of them.

With regards to that Arctic warming in the 1930s/40s: Knight 2008 finds that the 1930s/40s AMO peak does not appear in multi-model means – suggesting it’s not a forced response. Meanwhile Johannessen et al 2004 find that the high latitude warming pattern of the 1930s doesn’t appear in any individual model ensemble members, although similar warmings occur at other times. So 1930s/40s warming in the Arctic, which appears to have been a response to the AMO (e.g. Chylek et al 2009), seems to have been a result of internal climate variability, not a forced response, and so shouldn’t be expected to appear even in individual model runs. It could be that something similar was going on with the global average of HadISST1.

Regrettably the GISS maps using Had/Reynolds SST only don’t have enough resolution to confirm or refute my suggestion (much of this graph is white): GISS Maps. But they do show hot spots in the North Atlantic and Pacific, suggesting AMO and ENSO respectively, neither of which are forced modes, so they couldn’t be expected to appear in the multi-model mean, nor even in individual ensemble members.

Chylek et al, 2009, “Arctic air temperature change amplification and the Atlantic Multidecadal Oscillation.” PDF.

Knight, 2008, “The Atlantic Multidecadal Oscillation Inferred from the Forced Climate Response in Coupled General Circulation Models.” PDF

Johannessen et al, 2004, “Arctic climate change: observed and modelled temperature and sea-ice variability.” PDF

• WheelsOC

It’s interesting that this pops up because I was trying to discuss the North Atlantic/Arctic warming over that time period in another forum today. The paper I turned to was Polyak et al. 2009’s study of arctic sea ice extent (PDF). The explanation for a warmer early 20th century in the region that I took away from this was a strong Atlantic inflow to the Nordic seas to explain the divergence between ice extent in that area and extent for the arctic as a whole (fig. 12).

• Paul S

The warming over 1920-1945 is dominated by SST changes, which is also suggestive of a non-forced response. Otherwise you would expect land warming to be greater.

I find you can get better spatial patterns looking at trends instead of anomalies on the GISS site. I think there are actually two semi-discrete warming patterns. The first, a fairly steady change from 1920-1935, is primarily a North Atlantic phenomenon.

The second is a sudden change appearing as a dome on temperature graphs (particularly for Southern Hemisphere SSTs) and extending to 1944. Warming is far more heterogeneous but isn’t apparent in the North Atlantic, which is fairly stable over this period.

Of course these patterns are partly subject to change with the release of the new HadSST3 dataset.

• Glenn Tamblyn

Paul

See my comments below to EFS_Junior. The paper by Thompson et al is interesting in this regard with respect to the pronounced hump in the ’40s: it is able to pin down the end of it to Aug 1945 and attributes it significantly to measurement issues rather than real warming.

It will be interesting to see whether HadSST3 has included a correction for this.

• Glenn Tamblyn

Also, Paul, take a look at this graph from HadSST3 showing the breakdown of what type of data sources were used at different points in time: http://www.metoffice.gov.uk/hadobs/hadsst3/figures/part_2_figure_2.png. If that flip-flop between 1940/45 isn’t a source of bias, what is? The same goes for the ’50s/’60s.

17. EFS_Junior

I’m a little curious about that time period between say 1945 and 1950.

I thought there had been some discussion (in the recent past) about SSTs during that time period and a bias offset that needed to be corrected based on calibration errors (or some such)?

Also, given that some GCM’s (or most) don’t explicitly account for ENSO variabilities, we would not expect those models to somehow capture ENSO type events, in the first place.

Given that all GCM’s are deterministic BC/IC problems, what period were these models calibrated against? What time period were these models verified against?

Standard practice has always been the two step calibration then verification approach.

Is the lack of agreement with the early time series primarily due to insufficient forcing data (e. g. less complete observational data in the past versus the more complete observational data in the present (or recent past))?

Finally, I don’t expect any GCM to perfectly match future (or even past) observational time series in the strict deterministic sense (given the calibration and verification steps mentioned above and the quality of past observational input data), but I would (or do) expect GCMs to match observational time series in the strict stochastic sense (e.g. spectral moments).

However, there is no getting around the fact that Tisdale used mean GCM data (an average of several separate GCM simulations) versus one observational realization (IMHO there can only be one observational realization, the one that actually happens, regardless of what our observational measurement systems were, are, or will be), a very major no-no for anyone with even a basic understanding of statistics. Something Tisdale quite obviously lacks: a very basic understanding of statistics.

• Glenn Tamblyn

EFS_Junior
One area where there is believed to have been a ‘bias problem’ is in early measurements of SSTs and how they were done. Back then it was all from samples taken by ships, and 2 methods were used – throw a bucket over the side and measure the water in it, or measure the water temp in the engine coolant intake. Obviously both methods will have their own bias. But a bias that doesn’t change doesn’t alter a trend. However, if the mix of ships using the two methods changes, that is a bias. What has been identified is a substantial increase in the percentage of US ships taking the samples rather than UK ships during the war years. Then this reverses sharply in Aug 1945. And this change is visible in the SST data.

Second point, made by WheelsOC, is the warming that occurred in the Arctic during the ’30s and early ’40s. It wasn’t a global pattern.

An additional factor that could also distort the period is the question of when land stations began reporting in the polar regions. Before the 30’s there was basically no data. Then more stations came on line in the arctic, firstly as the Soviet Union started putting stations into its northern coastline, and then during the war with readings from Northern Greenland. Antarctica didn’t start being measured until the 50’s.

Remember that the methods used by the HadCRU series, as compared to GISS, essentially exclude the high Arctic even now, and this means HadCRU is likely underestimating the warming slightly. In effect what we see here is this bias reaching further south in those early years and applying to GISS as well. Then as the station coverage grows over the following decades this is effectively a bias in the record. That this transition happened to occur at the same time as the changed current flow described by WheelsOC above may have been a significant coincidence. Could the fact that station coverage in the north was changing just as the climate up there was flipping about have made the change seem more significant than it would have if we had full coverage over the entire period?

An additional conjecture that has been expressed about the polar regions is that their climates go through an oscillation on a roughly 60 year cycle with North then South warming and cooling. Unfortunately if this did occur then, although we have reasonable data for the North from the 30’s, we don’t have the corresponding data for Antarctica till the 50’s so we can’t tell if Antarctica was cooling.

So an unanswerable question is just how much of the warming during the 30’s and 40’s was real rather than an artefact of changing instrumentation.

18. steven mosher

Does this put me in good company? WUWT commenters note:

“Tamino’s post at

http://tamino.wordpress.com/2011/11/20/tisdale-fumbles-pielke-cheers/

is really funny. He tries to debunk Bob with Mosher’s multi-model argument.”

I really wouldn’t call it “my” argument, but thanks for doing this post Tamino.
saved me the trouble

19. G Swarzie

So how do you square the fact that models indeed do not fit data particularly well, especially data that has come in since the model was frozen?

[Response: How do you know models don’t fit data particularly well? Have you looked at all the observed data sets for sea surface temperature? For global surface air temperature? Global, hemispheric, even regional behavior? Have you studied the individual model runs, as well as the multi-model mean? Do you know which models incorporate which climate components, and how? Do you even know what they fit well and what they don’t? I’m skeptical.

There are so many models of such variety, and so many observed data sets for so many climate variables with such differences between them, that it’s easy to find a discrepancy between models and observations. What takes a lot of work and great care is to compare all aspects of the models to observed data for many phenomena, then form a dispassionate appraisal of their successes and failures. That’s how science progresses, but it seems to be the one thing people like Bob Tisdale won’t do — he’d rather find anything he thinks is a discrepancy and conclude that all the models are useless.

If you really want to know the truth of the matter, study the IPCC reports. Carefully.]

20. Steve Bloom

Chris R., the point probably can’t be called settled, but recent research points to increasing Agulhas Current “leakage” into the Atlantic (a direct consequence of global warming) as the major factor in recent North Atlantic warming and the apparent recent AMO signal.

• CM

Steve, can you spare a reference? I’d like to read up.

• Steve Bloom

Sure, CM. See here and here, and here for a representative description of some early consequences.

While it’s clear that some is getting into the Arctic Ocean, what it’s doing there has yet to be characterized (AFAICT); could it be the cause of the East Siberian Shelf warming?

In case it’s not obvious, I should mention that what’s driving this change is the southward movement of the Antarctic Circumpolar Current (which reduces the amount of Agulhas water retroflected back into the southern Indian Ocean), which in turn has been driven by the southern shift of the westerlies, which is part of the poleward compression of the atmospheric circulation driven by the expansion of the tropics, itself a direct consequence of atmospheric warming. That the Agulhas (which originates in the equatorial zone off East Africa) is also getting warmer only makes things worse.

All of the foregoing is very recent science, unknown at the time the TAR was published, barely mentioned in the AR4 and apparently not included in the GCMs (yet another model failure, although I think we can expect that this is one the denialists won’t be mentioning). Also of interest is the probable role of the Agulhas in terminating the glaciations, although that will need to be confirmed by modeling.

The main lesson I take from this is that it demonstrates the unique confluence of factors, some of which we are in the process of disrupting, that were necessary for the planet to fall into Pleistocene glacial conditions. Hello neo-Pliocene, next stop the neo-Miocene!

• CM

Steve, thanks, this is very interesting.

21. @G Swarzie.

No model has been developed to simulate and quantify everything! Each one has its own strengths and weaknesses depending on the physics included, assumptions, simplifications, etc. Why else would we need so many different ones? You need such a variety to understand which particular physical phenomena are more important. To answer a certain class of questions (e.g. long term trends) you do not need to capture the very short scales with accuracy. In the same way, very short term weather models, which are geared towards accurate predictions within that short term window, completely fail if extrapolated further into the future. Why do you think that is? A bar with 1 meter length can be used to measure distances with 1 meter accuracy… no more, no less. Get used to it.

22. “Certainly the models need more work. Certainly they should be examined with a critical eye.”

Anyone seem to be making more progress than others in this regard?

23. No more, check. . . but less accuracy is always a possibility!

24. MapleLeaf

Glenn @ November 21, 2011 at 7:40 am,

Re Fyfe et al. (2011). The final sentence of their abstract is especially interesting:
“The ENSO signal, which is skillfully predicted out to a year or so, has little impact on our decadal trend predictions, and our modelling system possesses skill, independent of ENSO, in predicting decadal trends in global mean surface temperature.”

Why is Pielke senior not drawing attention to this paper on his blog? He seems very quick to present the findings from papers to give the impression that models have no value by making claims like,
“The Huge Waste Of Research Money In Providing Multi-Decadal Climate Projections For The New IPCC Report”

That headline on his blog is accompanied by an image of someone emptying money in a toilet. Oddly, Pielke is quick to praise modelling studies that support his point of view. So he does agree that models are useful and have skill, but only when they support his beliefs ;)

• Dikran Marsupial

I suspect the reason that ENSO doesn’t have a great effect on the decadal trends is that the decadal trends haven’t been deliberately cherry-picked to maximise the effect of ENSO. Most of the time the cyclic variation will cancel out moderately well even on a decadal basis, and only occasionally doesn’t. So if the decadal boundaries are “randomly” chosen, ENSO will have a much lower influence than when they are cherry-picked.

25. MapleLeaf

Undeterred, Tisdale has now moved the goal posts. Pielke continues to cheer without thinking. It is all very pathetic and transparent. Pielke is making quite the hobby of playing loose with the facts.

http://www.skepticalscience.com/pielke-sr-misinforms-high-school-students.html