Much of what’s wrong with the online discussion of global warming is revealed by a recent reader comment on RealClimate.
Greg Goodman thinks that he’s taking climate scientists to school — he actually “lectures” the RealClimate readership about their supposed need to “dig a bit deeper” into the data on Arctic sea ice (both extent and area). He shows a graph based on some analysis which — unbeknownst to him — actually reveals that he doesn’t know what the hell he’s doing. He thinks he has established the presence of “cyclic variations” of which the climate science community is ignorant, and concludes that climate scientists are missing “important clues” about “internal fluctuations” which, of course, those inadequate computer models just can’t handle.
One would be hard pressed to find a more clear-cut example of hubris.
Climate scientists who study sea ice have been all over the data, every piece of it, but instead of making the mistakes Goodman makes they’ve been as careful and rigorous as their expertise and experience allow. They have certainly dug a whole helluva lot deeper than Greg Goodman has, or probably is capable of. It’s Goodman who needs to go back to school.
Goodman links to this graph:
Then he says this:
We only hear about run-away melting in Artic ice but looking at all the daily data available for over 30 years now we also see there are strong “unforced” cyclic variations up there too.
This plot is rate of change so the zero line represents neither loss not gain.
The media tend to focus on one day per year in September. Scientists need to dig a bit deeper. What is happening up there is a lot more interesting than simple melting.
Thorough investigation of the daily satellite ice data could give important clues to a better understanding of the “internal” fluctuations in the climate system: the part that, so far, models have trouble reproducing.
According to his own graph, here’s what he did. He took sea ice anomaly values (both extent and area), differenced them to estimate the instantaneous rate of change, then applied a Gaussian filter to smooth the result. It seems quite similar to what he does in this blog post. Let’s take one of the data sets he uses, daily data for Arctic sea ice area from Cryosphere Today, and try to reproduce the procedure. Letting A(t) be the area anomaly, I first estimated weekly averages in order to make the calculations go faster without affecting the response at the time scales under consideration. Then I differenced the data to estimate the rate of change dA/dt, then applied a 360-day Gaussian filter and a 180-day Gaussian filter, giving this:
It’s not exactly the same as Goodman’s result — for one thing, I didn’t cut off the first and last half-intervals or so like Goodman did — but it’s pretty damn close, so I figure I’m on the right track so far. The 360-day smooth certainly does show, to the eye, the apparent “5.42y cyclic variation” which Goodman seems to regard as meaningful. Let’s focus our attention on the 360-day Gaussian filter result, and on that apparent “unforced cyclic variation.”
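For anyone who wants to play along, the procedure is easy to sketch. This is a minimal reconstruction under my own assumptions (a synthetic stand-in for the weekly-averaged anomaly, and a guess at how the quoted “360-day” and “180-day” widths translate into a Gaussian sigma), not Goodman’s actual code:

```python
import numpy as np

def gaussian_smooth(x, sigma):
    """Smooth x by convolving with a truncated, normalized Gaussian
    kernel; sigma is measured in samples."""
    half = int(4 * sigma)
    k = np.arange(-half, half + 1)
    w = np.exp(-0.5 * (k / sigma) ** 2)
    w /= w.sum()
    return np.convolve(x, w, mode="same")

# Synthetic stand-in for the weekly-averaged area anomaly: a slow
# decline plus noise (the real series comes from Cryosphere Today).
rng = np.random.default_rng(0)
weeks = np.arange(52 * 30)                     # ~30 years of weekly values
anomaly = -0.002 * weeks + rng.normal(0.0, 0.3, weeks.size)

rate = np.diff(anomaly)                        # first difference ~ rate of change
# "360-day" and "180-day" smooths; I treat the quoted width as roughly
# four sigma of the Gaussian -- an assumption, since the parameterization
# isn't stated.
smooth_360 = gaussian_smooth(rate, sigma=360 / 7 / 4)
smooth_180 = gaussian_smooth(rate, sigma=180 / 7 / 4)
```

Weekly averaging first is harmless here: it only affects time scales far shorter than the 180- and 360-day filters under consideration.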
I can even subject the result to Fourier analysis, which gives this:
That peak just to the left of frequency 0.2 cycles/year corresponds to a period of 5.4 years. If we assume the noise in these data is white noise, then it’s statistically significant. If we assume the noise is red noise, it’s still statistically significant. Here, by the way, is the amplitude spectrum (rather than power spectrum) for this data set (the power spectrum is proportional to the square of the amplitude spectrum):
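For concreteness, here’s how such an amplitude spectrum (and the power spectrum from it) can be computed with an FFT. The signal is invented for illustration — a 0.185 cycle/year sinusoid (period about 5.4 years) buried in noise, sampled weekly; the frequency, amplitude, and noise level are my choices, not the ice data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n) / 52.0                 # weekly samples, time in years
# invented test signal: sinusoid with period ~5.4 years, plus noise
x = 0.5 * np.sin(2 * np.pi * 0.185 * t) + rng.normal(0.0, 0.2, n)
x = x - x.mean()                        # de-mean before transforming

freqs = np.fft.rfftfreq(n, d=1 / 52.0)  # frequencies in cycles/year
amp = 2.0 * np.abs(np.fft.rfft(x)) / n  # amplitude spectrum
power = amp ** 2                        # power proportional to amplitude squared

f_peak = freqs[np.argmax(power)]        # lands near 0.185 cycles/year
```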
Isn’t the peak at period 5.4 years pretty convincing evidence of cyclic variation?
No. It’s not.
The noise in this data set isn’t white. And it isn’t red. For lack of a better term, I’ll call it “green” noise — because it doesn’t give a flat spectrum, it doesn’t emphasize low frequencies or high frequencies, it emphasizes medium frequencies.
Let’s compute the spectrum of the actual data — sea ice area anomaly. It looks like this (click the graph for a larger, clearer view):
It’s dominated by very low-frequency response. This isn’t due to periodic (or pseudoperiodic) behavior, it’s because of the long-term trend. The dashed lines show 90%, 95%, and 99% confidence limits in the presence of red noise. There is one significant pseudo-periodic frequency, very near 1 cycle/year. This is because the spectrum of the area data (not area anomaly) is dominated by a 1-cycle/year fluctuation (the seasonal cycle) but that cycle isn’t constant, it shows year-to-year fluctuations as well as amplitude modulation (the size of the annual cycle has increased). So the spectrum of the area data shows broadband response near 1 cycle/year, and that of the anomalies shows residual broadband response after the “average annual cycle” has been removed.
It’s easier to look for other influences in an amplitude spectrum:
The dashed red line marks frequency 1/5.4 cycle/year, and rather clearly, there’s nothing special about it.
This is what the data show.
How, then, does that 5.4-year period pop out in the Fourier spectrum of the filtered velocity data? There are actually two filters applied to the data. One is a derivative filter, which transforms anomaly to anomaly velocity. This has two effects. For one thing, it multiplies the Fourier amplitude by the frequency, which tends to exaggerate high frequencies while suppressing low frequencies. For another thing, most Fourier analysis programs (mine included, and Greg Goodman’s too I’ll bet) de-mean the data before analysis, i.e., they subtract the average value. Subtracting the average value from the velocity data is equivalent to subtracting the linear trend from the original data, i.e., de-trending (linearly) the data. So, the Fourier spectrum of the derivative-filtered data is the frequency multiplied by the Fourier transform of the de-trended data.
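Both facts are easy to verify numerically on a synthetic series (a random walk with drift, nothing to do with the ice data). The first identity is exact: de-meaning the first differences equals first-differencing the de-trended series, when the trend slope is the mean velocity. The second is the derivative filter’s frequency response: differencing scales a sinusoid’s amplitude by 2 sin(πf), which is approximately proportional to f at low frequencies:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(0.1, 1.0, 500))    # random walk with drift

# 1) De-meaning the velocity == differencing the de-trended data.
v = np.diff(x)
slope = v.mean()                            # = (x[-1] - x[0]) / (len(x) - 1)
detrended = x - slope * np.arange(x.size)
assert np.allclose(v - v.mean(), np.diff(detrended))

# 2) Differencing multiplies a sinusoid's amplitude by 2*sin(pi*f),
#    roughly proportional to f when f is small.
f = 0.02                                    # cycles per sample
n = np.arange(5000)
s = np.sin(2 * np.pi * f * n)
gain = np.abs(np.diff(s)).max()             # peak amplitude of the difference
assert np.isclose(gain, 2 * np.sin(np.pi * f), rtol=1e-2)
```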
The second filter is a Gaussian smooth. This is a mathematical operation known as convolution. If you know your Fourier analysis, you know that the Fourier amplitude of the convolution of two functions (in this case, the velocity data and the Gaussian filter) is equal to the product of their Fourier amplitudes.
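The theorem is easy to check directly: zero-pad both sequences to the length of their linear convolution, multiply their DFTs, and transform back. The sequences here are toy inputs (random “data” and a small Gaussian kernel), chosen just to exercise the identity:

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(size=64)                              # stand-in "data"
b = np.exp(-0.5 * (np.arange(-16, 17) / 4.0) ** 2)   # small Gaussian kernel
b /= b.sum()

direct = np.convolve(a, b)        # linear convolution, length 64 + 33 - 1
m = direct.size
fa = np.fft.fft(a, m)             # zero-padded DFTs of both inputs
fb = np.fft.fft(b, m)
via_fft = np.fft.ifft(fa * fb).real

# convolution in the time domain == multiplication in the frequency domain
assert np.allclose(direct, via_fft)
```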
What happens if we de-trend (linearly) the anomaly data (not velocity!), then compute its Fourier transform (amplitude spectrum), then multiply by the frequency, then multiply by the Fourier transform of a Gaussian filter (which happens also to be a Gaussian)? The multiplier itself — frequency times a Gaussian (which is the factor by which the Fourier amplitude is attenuated) — looks like this:
Note that it has little effect on frequencies near about 0.2 cycle/year, but practically eliminates both low and high frequencies. When we multiply that by the Fourier amplitude of the de-trended anomaly (not velocity!) data, then compare that to the amplitude spectrum we got for the velocity data itself, we see that they’re very similar:
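For reference, the multiplier can be written down in closed form. The Fourier transform of a Gaussian of time-width σ is again a Gaussian, so the combined response is proportional to f · exp(−2π²σ²f²), which peaks at f = 1/(2πσ). With σ = 0.8 years (my assumed value; the exact width depends on how the 360-day filter is parameterized), the peak sits almost exactly at 0.2 cycles/year:

```python
import numpy as np

sigma = 0.8                               # Gaussian time-width in years (assumed)
f = np.linspace(0.001, 2.0, 2000)         # frequency in cycles/year

# derivative filter contributes a factor f; the Gaussian smooth
# contributes exp(-2 * pi^2 * sigma^2 * f^2)
multiplier = f * np.exp(-2.0 * (np.pi * sigma * f) ** 2)

f_peak = f[np.argmax(multiplier)]
# analytic maximum at 1/(2*pi*sigma), about 0.199 cycles/year
assert abs(f_peak - 1.0 / (2.0 * np.pi * sigma)) < 0.01
```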
Here’s what happened. The Fourier transform of the data does not support the existence of a 5.4-year cyclic variation. In fact it shows nothing at all near frequency 0.2 cycles/year except the ubiquitous ups-and-downs that all Fourier spectra show. But when we kill everything not near about 0.2 cycles/year, we’re left with a spectrum with a single tall peak at frequency 1/5.4 cycles/year (period 5.4 years).
In case you’re wondering why the “convolved spectrum” is noticeably different from the spectrum of the velocity data, that’s because what I just plotted is the product of a continuous convolution with the spectrum of the original (not velocity!) data. But the data are discrete, not continuous, so we should really compute the discrete convolution and the discrete spectrum. If we compute the discrete spectrum of the velocity data (which means, only at the “Fourier frequencies” rather than oversampled), and compare that to the “convolved spectrum,” we see that they’re the same at the Fourier frequencies:
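The relationship between the oversampled spectrum and the spectrum at the Fourier frequencies is just zero-padding: padding the series before the DFT interpolates the spectrum, and sampling the oversampled version at every k-th bin recovers the coarse spectrum exactly. A toy check:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=128)

coarse = np.fft.rfft(x)                  # spectrum at the Fourier frequencies
fine = np.fft.rfft(x, 8 * x.size)        # oversampled via zero-padding

# every 8th bin of the oversampled spectrum IS a Fourier frequency
assert np.allclose(fine[::8], coarse)
```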
No, there’s no evidence of a 5.4-year periodicity in Arctic sea ice area anomaly. But if you kill everything in the Fourier spectrum except a narrow frequency range, whatever humps happen to exist in that frequency range will stick out like a sore thumb, and give you the false impression that you’ve identified strong “unforced” cyclic variations. You haven’t.
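The point is easy to demonstrate in a few lines: feed a perfectly flat spectrum — one with no cycle in it at all — through the derivative-plus-Gaussian response, and out comes a sharp peak near 0.2 cycles/year, i.e., a “period” of about 5 years. The filter width σ = 0.8 years is my assumed value:

```python
import numpy as np

sigma = 0.8                               # assumed Gaussian width in years
f = np.linspace(0.001, 2.0, 2000)         # frequency in cycles/year
response = f * np.exp(-2.0 * (np.pi * sigma * f) ** 2)

flat = np.ones_like(f)                    # a featureless, cycle-free spectrum
filtered = flat * response                # what survives the two filters

f_peak = f[np.argmax(filtered)]           # a spurious "cycle" near 5 years
assert 0.15 < f_peak < 0.25
# everything away from the passband is essentially annihilated
assert filtered[f > 1.0].max() < 0.01 * filtered.max()
```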
It’s one thing to play around with different analysis methods, compute derivatives, apply filters, compute Fourier spectra, and suggest what you consider interesting possibilities. Heck, that’s science. But when you operate under the misconception that you know a lot more about data analysis than you do — when you declare “strong” cycles where there’s no evidence of them — and when you then proceed to scold the scientists who have actually spent their lives studying the data you think you understand better than they do, then you’re guilty of hubris.
And that’s much of what’s wrong with the online discussion of global warming. People look at some data and try to understand it, but they’re in way over their heads and haven’t a clue about what they don’t know. Then, instead of inquiring whether or not they might have gone wrong and how — as they should — they declare that the real experts are missing something.
In fact, this seems to infest just about everybody in the fake “skeptic” camp. Even those with absolutely no clue about how to analyze or understand data (you know who you are) will proclaim that they understand it better than the experts who’ve spent a lifetime working on it. And those who do have some knowledge — I’ll bet Greg Goodman is a pretty smart guy and sure as hell knows a lot more than, say, Anthony Watts — might be even more dangerous. Often, they’re less inclined to doubt their own conclusions, more inclined to think they know better than those who really do know.
Maybe Greg Goodman is a smart, open-minded guy who will realize that he went astray. Maybe he’ll conclude (as he should) that he’s in over his head. He needs to do two things. First, he needs to accept that his analysis isn’t right. Second, he needs to admit (to himself) that he’s nowhere near as savvy about data analysis in general, and Fourier analysis specifically, as he thought he was. Who knows, he might even have an epiphany and impress the hell out of all of us. That would be an incredibly admirable thing to do.
As for Fourier analysis, he should buy — and read — my next book. It’ll be out in about a month, and it’s about Fourier analysis and its application to time series.
But whatever approach Greg Goodman takes to extend his knowledge, he needs to get back to school.