Ludeckerous

A recently published paper by Ludecke et al. in Climate of the Past claims, as its main result, that “the climate dynamics is governed at present by periodic oscillations.”


The authors took 6 long temperature records from central Europe, standardized them (dividing anomalies by their standard deviation), and averaged them. They then computed annual mean values, which were subjected to Fourier analysis in order to identify what they call “significant” frequencies. They selected 6 frequencies, all with periods at least 30 years long, with which to model the temperature data. Comparison of the model to a 15-year moving average (boxcar filter) of the data gives a correlation coefficient of 0.961. Presto! — “the climate dynamics is governed at present by periodic oscillations.”
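For concreteness, here’s a minimal sketch (in Python, with placeholder file names, since I’m not reproducing their exact data handling) of the kind of procedure they describe. Their actual selection rule takes the six strongest of the first eight DFT frequencies, which for a record of this length amounts to the same thing as “strongest among those with period of at least 30 years”:

```python
# A sketch of the procedure described above: standardize and average the records,
# keep only a handful of low-frequency DFT components, and compare the resulting
# "model" to a 15-year boxcar smooth. File names are hypothetical placeholders.
import numpy as np

def low_freq_reconstruction(y, n_freqs=6, min_period=30.0):
    """Keep the mean plus the n_freqs strongest DFT components whose period is
    at least min_period years, and synthesize the corresponding smooth curve."""
    n = len(y)
    coeffs = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(n, d=1.0)                  # cycles per year
    keep = np.zeros_like(coeffs)
    keep[0] = coeffs[0]                                # keep the mean
    eligible = np.where((freqs > 0) & (freqs <= 1.0 / min_period))[0]
    strongest = eligible[np.argsort(np.abs(coeffs[eligible]))[::-1][:n_freqs]]
    keep[strongest] = coeffs[strongest]
    return np.fft.irfft(keep, n)

def boxcar(y, width=15):
    """Centered moving average; the ends are simply trimmed."""
    return np.convolve(y, np.ones(width) / width, mode="valid")

# Six standardized station records (anomalies divided by their standard deviation)
records = [np.loadtxt(f"station{i}_annual.txt") for i in range(1, 7)]
m6 = np.mean([(r - r.mean()) / r.std() for r in records], axis=0)

model = low_freq_reconstruction(m6, n_freqs=6)
r = np.corrcoef(boxcar(model), boxcar(m6))[0, 1]
print(f"Pearson r between reconstruction and 15-yr smooth: {r:.3f}")
```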

If you want to know a little more about the data itself, the Rabett has some info on that.

First, here’s my short opinion of this paper: Rubbish.

Let me elaborate. All they’ve done is model the data as a low-frequency Fourier series, then compare that to the low-frequency (boxcar filtered) version. Of course it gives a good match, especially since the actual trend present in this data is dominated by low-frequency fluctuation. In essence, all they’ve shown is that an arbitrary function can be modelled by a Fourier series. Really. Truly. That’s all.

They base their confidence in the non-random nature of their fit on this:


The Pearson correlation of the smoothed record SM6 with the reconstruction RM6 (black and red curves in Fig. 6) has a value of r = 0.961. In order to ascertain the statistical confidence level of this accordance, we assumed a null hypothesis and evaluated it by Monte Carlo simulations based on random surrogate records of the same length and the same Hurst exponent (a = 0.58) as M6 generated by a standard method (Turcotte, 1997) (the surrogate records hereafter SU, and the boxcar-smoothed SU over 15 yr hereafter SSU). As the null hypothesis we assumed that the accordance of the reconstruction RM6 with SM6 is caused by chance. We applied 10 000 surrogate records SU. Each of the record was analyzed following the same procedure as for M6. Next, for each surrogate SU the reconstruction was generated that used — again following the procedure as for M6 — six frequencies with the strongest power densities among the first eight frequencies of the DFT without zero padding. Finally, the Pearson correlation of this reconstruction with SSU was evaluated. As a result, among 10 000 SU we found one surrogate record with the maximal r = 0.960, 9 records with r ≥ 0.95, and 53 records with r ≥ 0.94. Therefore, the null hypothesis could be rejected with a confidence level of > 99.9%.
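For reference, here is roughly what that Monte Carlo test amounts to, as a sketch. It re-uses the helper functions from the snippet above, and it generates Hurst-exponent surrogates by a simple spectral-synthesis recipe (scaling white-noise Fourier amplitudes by a power law); the “standard method” they cite from Turcotte (1997) may differ in detail.

```python
# A sketch of the surrogate test: approximate fractional Gaussian noise with a
# prescribed Hurst exponent via spectral synthesis (white noise whose Fourier
# amplitudes are scaled as f**(-(2H-1)/2), one common recipe). Assumes
# low_freq_reconstruction() and boxcar() from the earlier sketch are in scope.
import numpy as np

def fgn_surrogate(n, hurst, rng):
    """Approximate fractional Gaussian noise with the given Hurst exponent."""
    beta = 2.0 * hurst - 1.0
    coeffs = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1.0)
    scale = np.ones_like(freqs)
    scale[1:] = freqs[1:] ** (-beta / 2.0)
    y = np.fft.irfft(coeffs * scale, n)
    return (y - y.mean()) / y.std()

rng = np.random.default_rng(0)
n = 254                                   # record length in years
r_values = []
for _ in range(10_000):
    su = fgn_surrogate(n, hurst=0.58, rng=rng)
    rec = low_freq_reconstruction(su, n_freqs=6)
    r_values.append(np.corrcoef(boxcar(rec), boxcar(su))[0, 1])

print("fraction of surrogates reaching r >= 0.961:",
      np.mean(np.array(r_values) >= 0.961))
```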

But this totally ignores the fact that there is a trend in the data, and that the trend is low-frequency, so a low-frequency Fourier series will match it far better than the same kind of series will match a trendless random time series. Really. Truly. They are actually so far removed from reality that they don’t even know what they’re doing.

Allow me to illustrate.

We don’t need to average 6 European temperature records to reproduce their result. Let’s just use one: Hohenpeissenberg, Germany. Here’s the data, annual averages from 1781 through 2011, together with a very slow smooth just to show the very long-term pattern:

[Figure (Hohen1): Hohenpeissenberg annual mean temperature, 1781–2011, with a very slow smooth (red line)]

Let’s model the Hohenpeissenberg data using Fourier frequencies corresponding to periods at least 30 years long. I don’t need 6 frequencies, I can get by with just 5, and I too can compare that model to a 15-year moving average:

[Figure (Hohen2): 5-frequency Fourier model vs. 15-year moving average of the Hohenpeissenberg data]

As the graph indicates, I too can get a bitchin’ good correlation coefficient between the multi-frequency model and the moving averages. In fact, at 0.9617 it’s just a teensy-weensy bit better than the 0.961 they reported for their 6-station average.
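For anyone who wants to try this at home, the single-station version is only a few lines. This is again a sketch, re-using the helper functions from the first snippet, and the file name is a placeholder:

```python
# Same exercise on a single station: hypothetical file of Hohenpeissenberg annual
# means, 1781-2011. Re-uses low_freq_reconstruction() and boxcar() from above.
import numpy as np

hohen = np.loadtxt("hohenpeissenberg_annual.txt")      # placeholder file name
model = low_freq_reconstruction(hohen, n_freqs=5, min_period=30.0)
r = np.corrcoef(boxcar(model), boxcar(hohen))[0, 1]
print(f"5-frequency model vs 15-yr moving average: r = {r:.4f}")
```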

Let’s try something else. Let’s take the very long-term pattern in the Hohenpeissenberg data — the red line in the very first graph — and add to that plain old random noise with the same standard deviation as the residuals of the Hohenpeissenberg data from that very long-term pattern. This will give us some artificial data consisting of a very slow — and not periodic — signal, plus random noise. We’ll see what a low-frequency Fourier series does when the trend is left in.

We’ll model that artificial data using frequencies corresponding to periods at least 30 years long. I’ll only need 4 frequencies to do the job. Let’s compare that to the 15-year moving average, and compute the correlation coefficient between them. Here it is:

[Figure (artifice): artificial data (slow signal plus white noise), 4-frequency Fourier model vs. 15-year moving average]

Well well … this time the correlation coefficient is just a hair over a whopping 0.98. I didn’t have to generate 10,000 artificial data sets to surpass their correlation coefficient. Just 1.
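Here’s a sketch of that artificial-data exercise as well. A wide moving average stands in here for the very slow smooth (the red line in the first graph); it’s only a rough stand-in, and your random numbers will differ from mine:

```python
# Artificial data: a very slow, non-periodic signal plus white noise with the same
# standard deviation as the residuals, then the same low-frequency Fourier fit.
# Re-uses the helpers and the `hohen` series from the snippets above.
import numpy as np

rng = np.random.default_rng(42)
width = 75
slow = boxcar(hohen, width=width)                       # stand-in slow signal
aligned = hohen[(width - 1) // 2 : (width - 1) // 2 + len(slow)]
resid_sd = np.std(aligned - slow)

fake = slow + rng.normal(0.0, resid_sd, size=len(slow))
model = low_freq_reconstruction(fake, n_freqs=4, min_period=30.0)
r = np.corrcoef(boxcar(model), boxcar(fake))[0, 1]
print(f"slow signal + white noise, 4-frequency model: r = {r:.3f}")
```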

Applying the logic of Ludecke et al., “Presto! — the random noise is governed at present by periodic oscillations.”

65 responses to “Ludeckerous”

  1. Six frequencies? According to the old joke, that’s enough to make the elephant wiggle its trunk!

    [Response: It’s more than enough. You have three parameters per Fourier component (frequency, amplitude, phase) and one constant for a total of 19 parameters.]

  2. Pure Magic!!! I was waiting for your comment. There are not enough 2x4s to smack them back to reality.

  3. I’m repeatedly appalled by these folks who curve-fit climate data, mostly with Fouriers, ignoring all physics and any causal relationships for nothing more than descriptive patterns, and claim that they then understand what’s going on. Frequency analysis of a limited period has no predictive power outside actual cyclical behavior seen within that period – and to isolate anything like that you need to _first_ account for non-cyclic behavior, such as actual forcings. Which these silly exercises never seem to consider…

    I was not surprised, incidentally, to see several references to Scafetta in Ludecke’s paper – Ludecke’s work demonstrates similar “cyclo-drama”.

  4. Horatio Algeranon

    “Whirling Disease”
    — by Horatio Algeranon

    “Whirling disease”
    Afflicts the “skeptic”
    Makes him swim
    Around affected

    Without a bowl
    In open water
    Round and round
    As temp gets hotter

    Dippycycles
    Twists and twirls
    Four hoorays
    For climate whirls!

  5. The eternal lure of periodic functions for denialists is that when you use them to fit a rising function, you can get an arbitrarily good fit, but eventually they must decrease! The model has no predictive power, but that is not what they are after. Their purpose is self-delusion–and Fourier analysis serves admirably.

  6. Trivial typo: “Here’s the data, annual averages from 1981 through 2011,…”. Actual start is 1781, I guess.

    [Response: Right you are.]

  7. “You have three parameters per Fourier component (frequency, amplitude, phase)”

    Do they use some kind of nonstandard Fourier series where the frequencies are free parameters, because in the normal version the frequencies are just multiples of the base and thus not free to adjust?

    [Response: You can also allow frequencies which are not multiples of the base. What they did is to pick and choose *which* frequencies to include and which to exclude. That’s not quite a “parameter” but it’s certainly degrees of freedom. But, if you want to count only 13 parameters instead of 19 …]

  8. I assume you mean annual averages from 1781 to 2011 (and not 1981) with reference to the first graph?

  9. I keep an eye on new papers in CPD but somehow managed to miss this one.

  10. Scafetta: Lüdecke et al. should know it can all be ascribed to the planets, as one can see by watching Scafetta talk in 2009, or flipping through the 76-page slide deck (or an hour talk) to p.62.

    Immersion in this material could be useful in preparation for anyone before reading Lüdecke, et al.

    Cycles galore, Rhodes Fairbridge, too.

  11. In what circumstances is Pearson correlation the most appropriate way to compare two time series?

    • For example, take a flat line y = C. Add any sort of noise to get the time series y’. Calculate the Pearson correlation between the time series with and without noise and you get a very low number. Do the same thing for y = x + C and you get a very high Pearson correlation.

      So it’s maybe not so much that such a short time series that’s been low-pass filtered is inevitably going to be fit pretty well by a few Fourier components (although this is also true), but that the Pearson correlation for a trendless data series vs. a fit to that data is inevitably lower than the Pearson correlation for a trended data series vs. an equally good fit. Had the authors instead compared the DFT recompositions to the filtered data based upon residuals or cross correlation or anything better suited to comparing two time series they could not have come to the same conclusion.

      Given that the Pearson correlation is not a very common choice for this task (because while a high Pearson coefficient shows correlation, a low Pearson correlation shows nothing), it raises the question of whether any other methods were tried.
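A quick numerical check of the point in the comment above, with one tweak: against an exactly constant line the Pearson correlation is undefined (zero variance), so here each noisy series is instead compared to a smoothed version of itself, which is closer to what the paper actually does:

```python
# Trendless vs. trended series, same noise: correlate each noisy series with a
# 15-point smooth of itself (a stand-in for "a fit to that data").
import numpy as np

rng = np.random.default_rng(1)
n, w = 200, 15
off = (w - 1) // 2
noise = rng.normal(0.0, 1.0, size=n)
x = np.arange(n, dtype=float)

def r_vs_own_smooth(y):
    smooth = np.convolve(y, np.ones(w) / w, mode="valid")
    return np.corrcoef(y[off:off + len(smooth)], smooth)[0, 1]

print(f"flat + noise:    r = {r_vs_own_smooth(10.0 + noise):.3f}")
print(f"trended + noise: r = {r_vs_own_smooth(10.0 + 0.05 * x + noise):.3f}")
```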

  12. Most amazing is this:
    “nach Eduardo A) die Reviewer Experten zur DFT Analyse und hervorragende Klimaforscher waren”
    http://scienceblogs.de/primaklima/2013/02/22/artikel-von-eike-pressesprecher-ludecke-et-al-veroffentlicht-in-climate-of-the-past/#comment-47279
    That is, according to Eduardo Zorita (the handling Editor of this paper) the reviewers were experts in DFT analysis and excellent climate researchers.
    Note, it appears this refers to the review process *after* the open review.

    [Response: The comments from the reviewers don’t exactly endorse this nonsense. In fact if I’m not mistaken, Manfred Mudelsee asked to be taken off the list of potential reviewers for CPD because of this fiasco.]

    • Maybe I wasn’t clear enough, so I’ll try again: there were further reviews *after* the open review process.

      It’s step 8 in the review process at CPD:
      “In view of the access peer-review and Interactive Public Discussion, the Editor either directly accepts/rejects the revised manuscript for publication in CP or consults referees in the same way as during the completion of a traditional peer-review process. If necessary, additional revisions may be requested during peer-review completion until a final decision about acceptance/rejection for CP is reached.”
      If Georg Hoffmann correctly cites Zorita, the Lüdecke paper went through two additional rounds of review.

  13. Isn’t the whole reason the temp anomaly prior to 1900 on the first figure is comparable to modern anomalies due to the fact that their thermometers weren’t shielded from direct sunlight, and the data need to be adjusted accordingly? If so, that alone is enough to discredit the paper.

    It’s like trying to rely on atmospheric CO2 measurements prior to Keeling. Pretty much a complete waste of time.

    [Response: Follow the links to Rabett Run for much more of the story on the data.]

  14. Gavin's Pussycat

    So they can do Fourier analysis followed by Fourier synthesis without stepping on their shoelaces. Whoop-dee-do.

  15. The most infuriating thing about this mathturbation is that the clowns from EIKE, where Luedecke is spokesperson (they claim to be an “Institute”, consist of a letterbox, and are just a spin-off from CFACT), can now claim that they have a peer-reviewed paper in a renowned journal!

    Manfred Mudelsee resigned as reviewer at Climate of the past in consequence:
    “I am less pleased that this piece has been published in CP since I believe that (even in its revised version) it has serious technical flaws. I had appreciated if the handling editor had considered more seriously my technical comments on CPD. Finally, I had appreciated if I had been informed/shown the revised version sent to CP. Unrelated to the technical flaws, one may speculate about (I exaggerate for clarity) the hijacking of CP for promoting ‘skeptical’ climate views.
    I would appreciate if you took me out of your database of CP(D) reviewers.”

  16. There seems to be a push on the ‘it’s all cycles’ meme. This is eerily resonant with Tung & Zhou (or was it Zhou & Tung referencing T&Z – I can never remember) regressing the AMO against GAT and finding that… it’s all cycles and AGW is feeble.

  17. Since I’m not a regular here perhaps I should point out that this post on the AMO is very helpful in demonstrating why T&Z is wrong in method and conclusion.

  18. Doesn’t this journal have any peer review? If yes, have the “peers” ever taken a math beginner course?

    [Response: It seems that the editors didn’t exactly leave it up to the reviewers.]

    • As far as I know, a second round of reviews was solicited. But for those who submitted only short comments (I was one of them), no further information whatsoever was given after the final editor comment appeared. Judging by that comment’s rather obvious implication that the paper wasn’t exactly publishable in its current form, it came as quite a surprise to receive an email update in which we were informed that this piece got published after all. Minor revisions have been made, but all the major (ridiculous) flaws are still contained. I am puzzled as to what went on here. Whoever is responsible, the reputation of CP has been damaged quite significantly … not to mention that it thwarts the process of open review to some extent.

  19. I needed to think about their null hypothesis a little, to consider whether it is a good test or not, and why. My understanding is that the Hurst exponent will only deal with the auto-correlation, and will be relatively useless at picking out underlying trends, so their control test was not really a 1-1 comparison. Autocorrelation matters for a fair test, but so does underlying trend.

    IOW, they showed that a trend with random noise can be modeled quite well as a low-frequency periodic oscillation with noise. Particularly over half a cycle. This is really not news, and you’re drawing conclusions from a sample size of n=1/2.

    If you could repeat this experiment with temperature records that spanned 1,000s of years, and it showed the same result – a periodic oscillation of ~X years (X < 100, such that n is high enough to be significant) – then it would mean something, no? It'd hint at some mechanism that was driving the climate on those timescales. But over a 30-year timespan, it's statistically meaningless and indistinguishable from a noisy trend.

    Notably, this is also the most common complaint I’ve heard about Foster and Rahmstorf (2011) – that the underlying trend is not statistically distinguishable from a partial cycle of an oscillation; that the trend attributed to GHG in that paper could in fact be caused by the AMO (or any other low-frequency cycle).

  20. Horatio Algeranon

    “Hooray for Fourier”
    — by Horatio Algeranon

    Elementary, my dear Wattson
    The math of Fourier
    A math decomposition
    Of temperature in a way

    That makes it look like cycles
    Are governing the day
    Instead of greenhouse gases
    Hooray for Fourier!

  21. I assume that to get a rising trend approximated by sine curves, you need the lowest frequency curve to have a period more than maybe three times the full interval and set the phase to match the flattest part of the rise?

    Doesn’t that mean there’s an argument to be made that if you claim something’s periodic rather than having a rising trend, your Fourier transform shouldn’t contain anything with a period longer than the data you’re trying to fit?

    • My own understanding of time series analysis is very limited (a problem I hope to ameliorate in the coming year) but that’s my understanding as well. You could theoretically fit a pair of sinusoids to any dataset where those sinusoids have an arbitrary (shared) frequency, but frequencies above the Nyquist limit (half the sampling rate) are of course indefensible, and frequencies lower than the time window allows are, I would argue, just as indefensible.

      If I did this, I myself might want to find at least two full cycles before I claimed that the data exhibited cyclical behavior, but I am not sure what the standard is (or, rule of thumb, etc.).
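A rough sketch illustrating this sub-thread: a single sinusoid whose period is three times the data window, phased so the window sits on the nearly linear part of its rising limb, is essentially indistinguishable from a straight-line trend:

```python
# One long-period sinusoid vs. a pure linear trend: with the period about three
# times the data window and the phase centred on the rising limb, the "cycle" is
# nearly indistinguishable from a straight line over the window.
import numpy as np

t = np.arange(100, dtype=float)                 # 100 "years" of data
trend = 0.02 * t                                # a plain rising line
period = 300.0                                  # three times the window
cycle = np.sin(2 * np.pi * (t - t.mean()) / period)

# least-squares amplitude and offset for the sinusoid
A = np.vstack([cycle, np.ones_like(t)]).T
amp, offset = np.linalg.lstsq(A, trend, rcond=None)[0]
fit = amp * cycle + offset

print(f"correlation of long-period 'cycle' with the linear trend: "
      f"{np.corrcoef(fit, trend)[0, 1]:.4f}")
```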

  22. Well, on the less dark side (I was going to say “bright,” but I’m not that optimistic), at least this is another “skeptic prediction” for future temperatures that will go down in flames as subsequent data come in and it proves unskillful compared to physics-based projections.

  23. Lüdecke is basically Germany’s Anthony Watts. They run a private blog named EIKE (European Institute for Climate and Energy) that consists mainly of translated content from WUWT. Likewise they often block comments that disagree with their position but it might be worth a try. Currently their website is offline.

  24. The 15-year moving average filter has already crushed the life out of all but a few of the Fourier frequencies so it’s no wonder that it can be fit with a few of them since they are the only ones left. The temperature record covers 254 years with one data point per year. This means that any frequency component that has more than (254 / 15) = about 17 oscillations within the 254 year span is significantly suppressed. Even the higher frequency ones within this range are suppressed. (Google frequency response of moving average filter.) So sure, the first few Fourier components fit the data since all the other ones have been suppressed by the filter. Duh.

    • Just to be clear, the application of the moving average filter is the key to fitting the data to a small number of Fourier components. For example, random data (white noise) without some low pass filter has a poor fit to a subset of Fourier components because the Fourier transform of white noise is uniform (that’s why white noise is called white…every frequency has the same contribution). Applying the filter suppresses some components so that a fit to a subset works.
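A minimal sketch of the filter-response argument in the comment above. The gain of an N-point moving average at frequency f (in cycles per year) is |sin(πfN) / (N sin(πf))|, so with N = 15 the multidecadal components pass almost untouched while everything much faster than a 15-year period is crushed:

```python
# Frequency response of a 15-year boxcar (moving average) filter:
# gain(f) = |sin(pi*f*N) / (N*sin(pi*f))|, with f in cycles per year.
import numpy as np

N = 15
f = np.linspace(0.001, 0.5, 500)                # up to the Nyquist frequency
gain = np.abs(np.sin(np.pi * f * N) / (N * np.sin(np.pi * f)))

for period in (254, 65, 30, 15, 10, 5):
    g = np.interp(1.0 / period, f, gain)
    print(f"period {period:4d} yr  ->  gain {g:.3f}")
```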

  25. I don’t know enough on this to know, but there are possibly parallels with the de Freitas/Climate Research case. Now, Zorita is hardly ~ de Freitas, but it really depends on the editorial setup. In the CR case, there was no Editor-in-Chief, so de Freitas basically had complete editorial control over papers that came to him. When I was writing Pal review… I talked to a few folks who’d refereed papers, advised rejection, and were ignored.

    I don’t know the answers, but the relevant questions are:
    1) Does the handling editor, i.e., Zorita, in this case have accept/reject authority?
    2) If not, does one of the (5?) Editors-in-Chief have that authority? Or some combination?

    The general issue is that of delegation of authority in organizations. It is efficient and generally good to push decision-making downward as far as possible, but quality control matters also. Sooner or later, there needs to be a person or group that can make accept/reject decisions, even if that means ignoring referees sometimes. But the responsibility must be clear.

  26. Does this mean this journal may struggle to publish papers in future as reviewers refuse to deal with them?

  27. Unfamiliar with this journal, so how does it rank on the how-big-a-deal-is-this scale? Seems like the numerology in PNAS a couple weeks ago is a bigger deal, but this isn’t my field. Always interested in how this stuff gets published; I don’t think political pressure on editors is necessarily a big part of it. I can think of examples in a few fields from the past couple years where papers that are basically numerology were published and pretty influential. Usually they involve claims of some seemingly simple model fitting previously published data, showing that you can harmonize data from many experiments with some rescaling procedure, or claiming that some interesting correlation is exceedingly unlikely to occur by chance.

    • David B. Benson

      Climate of the Past *used to* have quite decent reputation.

      • Horatio Algeranon

        “Climate of the Past”
        — by Horatio Algeranon

        The climate journal
        Is “Of the Past”
        Who could have guessed
        It wouldn’t last?

  28. How does crap like this get published? A week ago JGR turned down my drought paper… but an exercise in curve-fitting gets published as “research?”

  29. There are 5 co-Editors-in-Chief and a bunch of editors here. If people know any of them, they might ask them politely if they know what’s going on, and how the editorial process really works.

  30. “We thank Lubos Motl for his technical information about
    the Prague record.” at the end of the paper.

    Well, another well-known name – every time such a paper gets out, we are sure to find the usuals.
    I think I will create a “skeptical” paper bingo.

    Thinking that doing a Fourier analysis, taking the first 8 components, and comparing them to the same data they came from allows for a paper … I should change my field of expertise to get papers published.
    Not far worse than the case studies in geophysics, but still …

  31. Quite a nice background on Eduardo Zorita, responsible editor and skillful
    ad hominem writer, here:
    http://rogerpielkejr.blogspot.de/2009/11/eduardo-zorita-on-climategate.html

    • Oh, wonderful. Zorita has theories of climate-science “machination, conspiracies, and collusion”, bitter claims that his work won’t reach publication due to his speaking against the cabals, that authors are “bullied and subtly blackmailed” to political correctness – just the kind of unbiased and rational person you want editing a journal on climate.

      More seriously – this man should not be an editor of a science journal. And his publication of Ludecke et al, over and against reviewer recommendations, confirms that. He’s clearly more interested in his viewpoint than in paying attention to others in the field.

      • Again, the editorial structure of the journal matters.
        In computer design, we used Error Correcting Codes to deal with the inevitable failures of memory and storage, and for systems and networks, we design to avoid single points of failure. (That was one of the ARPANET’s motivations, i.e., the origin of the Internet.)
        Many journals require signoff by an Editor and then an Editor-in-Chief.
        As in Pal review, Climate Research had multiple single-failure points, because each editor completely controlled every article that came their way.

        One can imagine at least 2 ways Climate of the Past’s editorial process works:
        a) Editors-in-Chief (5) select editors, distribute incoming papers to them, help get reviewers, but accept/reject decisions are left to the editors.
        b) One or more E-i-Cs reviews every decision and makes the final accept-reject decision.

        In case a), there is a systemic single-point-of-failure problem, with a flawed process that needs redesign. In case b), the process design was OK, but failed in this case and the issue is to ask why.
        Case a) is a design issue, case b) is a quality control issue.

        It is of course nontrivial to design workflows resistant to human error, although in the case of journal publishing there are enough good/bad examples to help.

  32. Horatio Algeranon

    We sure are learning a lot (at least Horatio is)

    From Carter we learned that if you filter out the low frequencies, “The close relationship between ENSO and global temperature, as described in the paper, leaves little room for any warming driven by human carbon dioxide emissions.”

    Now, from Ludecke we learn that if you filter out the high frequencies, “periodicities (no matter what causes them) explain the temperature history without assumption of forcing by CO2.”

    There would seem to be only two things left to do

    1. Filter out just the “middle” frequencies and show there is no need to assume CO2 forcing to explain the data.

    2. Filter out ALL frequencies (ie, everything) and show there is no need to assume CO2 to explain the data

    Horatio will take on number 2:

    QED (as demonstrated by the above blank line)

    Anyone want number 1? It might be a little trickier.

    • Pete Dunkelberg

      Don’t forget step 0: ignore physics.

      • Horatio Algeranon

        “physics doesn’t matter” is a given, a basic hacksiom which doesn’t have to be proved (or even acknowledged).

        Metallica summed up the attitude nicely:

        “Never cared for what they do
        Never cared for what they know
        But I know

        Never analyzed temps this way
        Data is ours, we do it our way
        All these cycles from Fourier
        And nothing else matters…”

  33. > use them to fit a rising function, you can get an arbitrarily good fit,
    > but eventually they must decrease!

    A restatement of Herbert Stein’s Law:
    “If something cannot go on forever, it will stop”

  34. Over at Prima Klima, in the comments, the pushback against Tamino and Eli is starting because we have been mean to poor Eduardo. As was pointed out above, Eduardo Zorita has not been exactly shy about those he considers to have politicized science. It’s hard to get mad at Luedecke; he does what he does. Zorita OTOH stands unmasked.

  35. Curiously enough their Fourier transform fit makes the 15 years of no warming go away. Their extrapolation says warming will keep on going up for another 15 years.

    Another pointy stick to poke the stubborn beyond belief with perhaps!?

    • Their fit shows an extrapolation into the future predicting that temperatures will plummet “mainly due to the ~65 year periodicity”. So where does this 65 year periodicity come from? Well, the periods of the Fourier components are determined solely by the length of the sample, nothing more. Ludecke’s sample length is 254 years, which makes the Fourier component periods 254/1, 254/2, 254/3, 254/4, etc. years. The ~65 year periodicity is 254/4 years. So if they had a different sample length, they would have fit to different periods! For example, if they had just used the last 200 samples, they would be talking about a ~50 year period. It is just stunning that this was published.

  36. > vvenema … Alan Carlin
    2011, that was Carlin’s “not-the-EPA” article; it surely sank out of sight
    with only one single bubble, I mean citing link, and that from JC’s blog!

    http://scholar.google.com/scholar?q=%2210.3390%2Fijerph8040985%22

  37. OT, but you can help by signing this petition from Prof. Ranga B Myneni, IPCC co-author with 12 000+ WOS citations…:

    http://yourclimatechange.org/

    Cheers,

  38. The editor of that paper is the same Eduardo Zorita who by curve-fitting suggested that sea-level rise has been slowing down:
    http://klimazwiebel.blogspot.de/2010/04/cant-you-see-acceleration.html
    No wonder he has sympathy with Lüdecke’s methods of data analysis. Zorita has called for banning Mike Mann from IPCC – presumably he’d prefer to see Lüdecke as IPCC author.
    I understand that Manfred Mudelsee is not going to review for Climate of the Past any more – I also have lost faith in the integrity of their review process and will neither publish there again, nor review for them, as long as Zorita remains editor.
    The whole thing reminds me of the scandal at the journal Climate Research a few years back, where a similarly ludicrous skeptics’ paper was published, in the end forcing the chief editor Hans von Storch to resign. Anyone remember the details? Is it just coincidence that von Storch is Zorita’s boss?