Now that I’ve had the chance to study some of the other papers from the Berkeley team, I’d like to offer my thoughts on Decadal Variations in the Global Atmospheric Land Temperatures (Muller et al. 2011). But first, I’ll address a critique of that paper. It has already come under attack from the WUWT crowd, specifically from Doug Keenan. His criticisms are neither valid nor objective; in fact, in my opinion his comment amounts to ignorant sniping of the worst kind.
His objections are mainly twofold. First, he objects to the application of a 12-month moving-average filter. He even goes so far as to quote Matt Briggs:
Unless the data is measured with error, you never, ever, for no reason, under no threat, SMOOTH the series! And if for some bizarre reason you do smooth it, you absolutely on pain of death do NOT use the smoothed series as input for other analyses!
Keenan concludes “This problem seems to invalidate much of the statistical analysis in your paper.”
I myself have emphasized the problems with smoothing, since it introduces artificial autocorrelation into the result. And I have pointed out the folly of analyzing a smoothed series without compensating for the smoothing (see for example here). What seems to escape Keenan (or what he chose to ignore) is that Muller et al. do account for the smoothing in their analysis. Instead of applying statistical tests based on an assumed error model for the data (in particular, a white-noise error model, which we already know isn’t right), they do their statistical testing by Monte Carlo simulations, in which one generates a large number of artificial signals with the same basic properties in order to characterize their response to a given analysis. There is simply no merit in this criticism of Keenan’s.
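To make the logic concrete, here is a minimal sketch (not the BEST code) of Monte Carlo significance testing for a statistic computed from a smoothed series. The test statistic and the “observed” value are placeholders chosen purely for illustration; the point is that the same 12-month filter is applied to every simulated noise series, so the artificial autocorrelation the smoothing introduces is built into the null distribution rather than ignored.

```python
import numpy as np

rng = np.random.default_rng(42)

def smooth_12mo(x):
    """Apply a 12-month moving-average filter."""
    return np.convolve(x, np.ones(12) / 12.0, mode="valid")

def test_statistic(x):
    """Lag-1 autocorrelation (a placeholder statistic)."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

n_months = 1200   # 100 years of monthly data
n_sims = 10_000

# Null distribution: smooth each simulated noise series exactly as the
# real data were smoothed, then compute the statistic. The smoothing's
# effect is thereby accounted for, not ignored.
null_stats = np.array([
    test_statistic(smooth_12mo(rng.standard_normal(n_months)))
    for _ in range(n_sims)
])

observed = 0.95   # hypothetical value from the real smoothed series
p_value = np.mean(null_stats >= observed)
print(f"Monte Carlo p-value: {p_value:.4f}")
```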
His other main objection is the “statistical model” used. He first states that “most research on global warming relies on a statistical model that should not be used.” He’s referring to the AR(1) (1st-order autoregressive) model. I myself have emphasized the inadequacy of the AR(1) model for temperature time series, and Keenan even supports his assertion by referencing one of my papers (Foster et al. 2007)!
Alas, his reference is not really relevant. We did not show in that paper that an AR(1) model is inadequate for the noise in temperature time series; we showed that it is inadequate for the residuals from a century-scale linear fit. But as I said, I quite agree that an AR(1) model is inadequate for temperature time series: not only is it incorrect, it’s not even a close enough approximation to give valid answers.
But Keenan is mistaken when he claims
Although the AR(1)-based model is known to be inadequate, no one knows what statistical model should be used.
On the contrary, I have strong evidence that for temperature time series, an ARMA(1,1) model is adequate. It’s almost certainly not exactly correct (all models are wrong, but some are useful), but it’s a close enough approximation to give valid answers. As a matter of fact, I have recently submitted a paper for publication, based on this post, in which I show that the AR(1) model is inadequate and argue that the ARMA(1,1) model is the right approach (at least until I see a better one) for trend analysis of temperature time series.
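For readers who want to see the distinction in practice, here is a minimal sketch, assuming statsmodels is available. It fits both models to a synthetic ARMA(1,1) series (a stand-in for real temperature residuals; this is not the analysis from my submitted paper) and compares them by AIC and by a Ljung-Box test on each fit’s residuals. The AR(1) fit typically leaves autocorrelation structure unexplained.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)

# Simulate ARMA(1,1) noise as a stand-in for monthly temperature residuals.
n, phi, theta = 1000, 0.6, 0.3
eps = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t] + theta * eps[t - 1]

for order, label in [((1, 0, 0), "AR(1)"), ((1, 0, 1), "ARMA(1,1)")]:
    fit = ARIMA(x, order=order).fit()
    # Ljung-Box test on the fit's residuals: a small p-value means the
    # model has left autocorrelation structure unexplained.
    lb = acorr_ljungbox(fit.resid, lags=[24])
    print(f"{label}: AIC = {fit.aic:.1f}, "
          f"Ljung-Box p = {lb['lb_pvalue'].iloc[0]:.3g}")
```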
But the salient point is that Muller et al. (2011) does not depend on any statistical model at all. As I said before, they use Monte Carlo simulations for all their statistical testing, and don’t end up making any assumption about the appropriate model for the noise in their data. But Keenan says:
BEST did not adopt the AR(1)-based model; nor, however, did it adopt a model that deals with some of the complexity that AR(1) fails to capture. Instead, BEST chose a model that is much more simplistic than even AR(1), a model which allows essentially no structure in the time series.
The implication is that Muller et al. (2011) assumes a white-noise model, which, as I’ve already said, is not the case.
Keenan also describes an email exchange on this issue with Doug McNeall at the Hadley Centre, and says of McNeall:
He still believes that the world is warming, primarily due to computer simulations of the global climate system.
I very much doubt that’s true. I’m confident that McNeall believes the world is warming, not based on computer simulations but on observed temperature data, the disappearance of glaciers worldwide, the migration of species, changes in the timing of seasonal cycles in the biosphere, disappearance of Arctic sea ice, tremendous mass loss in the Greenland and Antarctic ice sheets, increase in atmospheric humidity, etc. etc. This seems to me to be nothing more than Keenan’s attempt to imply computer simulations are useless (which they aren’t), and that global warming science depends on them (which it doesn’t).
In a genuine example of irony, it seems that even Anthony Watts and his cohorts now say that they believe the world is warming. The ironic part is that the Watts crowd wants us to believe that they never doubted it! All of which is rather effectively belied by quotes from a “paper” on which Watts is first author — not to mention what has consistently oozed from Watts’ blog for years.
Keenan offers nothing but “spin” on Muller et al. (2011). What’s probably most offensive is the hubris that permeates his post: the insultingly condescending tone with which he implies the Berkeley team should go “back to school,” even suggesting introductory textbooks on time series analysis.
Hubris and condescension are what we’ve come to expect from posts at WUWT. The only real surprise is just how fervently WUWT is attempting to smear the Berkeley team, their results, and Richard Muller now that the Berkeley team has had the audacity to contradict what Watts & Co. have been asserting for years. The Berkeley group has become WUWT’s current favorite target of ridicule. What a strange reaction from a blogger who originally declared he would “accept whatever result they produce, even if it proves my premise wrong.”