NASA’s Goddard Institute for Space Studies (GISS) has updated their global surface temperature estimate to include November 2013. It turns out that this most recent November was, globally, the hottest on record:
Greg Laden posted about it (and other things) recently in his continuing efforts to let people know what’s really happening to the globe (it’s still heating up) as well as spreading the word that “earth” includes a lot more than just the atmosphere. He featured this version of the graph (provided by “ThingsBreak” but prepared by Stefan Rahmstorf):
Of course this means that the fake skeptics must come out of the woodwork. Referring to the smooth (the red line on the graph), here’s what Paul Clark had to say about it:
It’s not clear how this red line was obtained. The red line is not described on the poster’s page. The graph comes from, what Laden describes as, “climate communicator” ThingsBreak. What on earth is a “climate communicator”?! It seems to be some type of smoothed moving average. Five year spline perhaps?
Problem is, the red line is roughly in the middle of the blue line, except at the end. At the end, the red line is not in the middle at all, but is down at the beginning, and up at the end, of that final 10 year period. It’s shooting right up at the end!
How can that be? I therefore find this line to be completely made up, and a case of wishful thinking.
Here’s something about which every honest participant in the discussion of man-made global warming should think. Carefully. Namely, this: Paul Clark complains that it’s not clear how the red line (the smoothed version of the data) was obtained. Furthermore, it doesn’t seem right to him. How does he react?
Did he acquire in-depth knowledge of smoothing techniques? (I can tell you for a fact: no he didn’t.) Did he consult a disinterested expert? (Apparently not.) Did he, oh I don’t know, maybe ASK how it was obtained? (Nope.)
You see, those are some of the ways an actual scientist might proceed. The guiding principle being this: LEARN MORE ABOUT THE SUBJECT *BEFORE* YOU OPEN YOUR MOUTH.
It seems that’s not Paul Clark’s way. He doesn’t think the smooth (red line) looks right, but with little to no effort at all to find out about it, he declares that it is “completely made up, and a case of wishful thinking.” I declare that Paul Clark’s opinion is completely mistaken, and just about as clear a case of the Dunning-Kruger effect as you’re likely to find.
Here’s something else worth thinking about: suppose I wanted to make the slope at the end artificially large. What smoothing method — other than “force it by hand” — could do that?
Rahmstorf used a smoothing method based on MC-SSA (Monte Carlo singular spectrum analysis; Moore, J. C., et al., 2005, New Tools for Analyzing Time Series Relationships and Trends, Eos, 86, 226, 232) with a filter half-width of 15 yr. I get a very similar result using my favorite method (a “modified lowess smooth”) with about the same time scale.
My modified lowess smooth is in agreement with Rahmstorf’s MC-SSA smooth. Here’s just the modified lowess smooth (in red), a plain old lowess smooth (in green) for those who don’t trust me to modify anything, and a spline smooth (in blue):
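For readers who want to experiment, here is a minimal R sketch of the general idea (it is not the “modified lowess” program used for the graphs here): a plain lowess smooth and a smoothing spline applied to a hypothetical annual file, giss_annual.csv, with columns year and anom.

```r
# Minimal sketch, not the "modified lowess" used for the graphs in this post:
# a plain lowess smooth and a smoothing spline on a hypothetical annual series.
giss <- read.csv("giss_annual.csv")            # columns: year, anom (deg C)

lo <- lowess(giss$year, giss$anom, f = 0.15)   # span of roughly 20 yr for ~130 yr of data
sp <- smooth.spline(giss$year, giss$anom)      # smoothing parameter chosen by cross-validation

plot(giss$year, giss$anom, type = "l", col = "blue",
     xlab = "Year", ylab = "Temperature anomaly (deg C)")
lines(lo, col = "red", lwd = 2)                # lowess smooth
lines(sp, col = "darkgreen", lwd = 2)          # spline smooth
```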
One of the things I like about my own smoothing program is that it also calculates the uncertainty of the result. Here are the three smooths I computed, together with dashed red lines to show the range 2 standard deviations above and below:
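And here is a sketch of how one might get a comparable uncertainty band from an ordinary loess fit (only an illustration of the idea, not the calculation used for the graph above):

```r
# Sketch of an uncertainty band from an ordinary loess fit: the dashed lines
# show the fit plus and minus two (approximate) standard errors.
fit  <- loess(anom ~ year, data = giss, span = 0.15)
pred <- predict(fit, se = TRUE)

plot(giss$year, giss$anom, type = "l", col = "blue",
     xlab = "Year", ylab = "Temperature anomaly (deg C)")
lines(giss$year, pred$fit, col = "red", lwd = 2)
lines(giss$year, pred$fit + 2 * pred$se.fit, col = "red", lty = 2)
lines(giss$year, pred$fit - 2 * pred$se.fit, col = "red", lty = 2)
```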
The three methods are in agreement, within the limits of their uncertainty. Clearly.
Now let’s take the range of the modified lowess smooth which we plotted in the previous graph, and add some other smooths set to about the same time scale for smoothing: an ordinary moving average in black, a Gaussian smooth in green, and a 6th-degree polynomial (as used by Paul Clark himself) in blue:
The moving-average line stays within the range indicated by the modified lowess smooth, but that’s easy because the moving averages don’t extend to the ends of the time series; we lose years at both the beginning and the end. The Gaussian smooth stays within the range indicated by the modified lowess smooth except at the end, where the Gaussian smooth levels off. Is Paul Clark wondering why that might be? Does he know enough about smoothing in general, and about Gaussian smoothing specifically, to have expected that? I did.
Perhaps most interesting is the 6th-degree polynomial, which wanders outside the modified lowess range, not just at the beginning or end but in the middle as well. What’s really interesting is why it wanders outside the range, because it happens for different reasons at different times! The 6th-degree polynomial fit smooths too much in the middle of the time span, but smooths too little near the endpoints. Is Paul Clark wondering why that might be? Does he know enough about smoothing in general, and about polynomial fits specifically, to have expected that? I did.
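Here is a rough R sketch of these three comparison smooths, on the same hypothetical annual series as the sketch above (again, not the exact calculation behind the graphs):

```r
# Three comparison smooths: centered moving average, Gaussian kernel smooth,
# and a 6th-degree polynomial fit.
k   <- 15
ma  <- stats::filter(giss$anom, rep(1/k, k), sides = 2)        # NA at both ends
gau <- ksmooth(giss$year, giss$anom, kernel = "normal", bandwidth = 15)
p6  <- lm(anom ~ poly(year, 6), data = giss)                   # orthogonal polynomial basis

plot(giss$year, giss$anom, type = "l", col = "grey",
     xlab = "Year", ylab = "Temperature anomaly (deg C)")
lines(giss$year, ma, col = "black", lwd = 2)                   # moving average
lines(gau, col = "darkgreen", lwd = 2)                         # Gaussian smooth
lines(giss$year, predict(p6), col = "blue", lwd = 2)           # 6th-degree polynomial
```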
Ordinarily, this is where I would launch into a technical discussion of smoothing. Why do certain methods tend to go one way more than another? What should one expect near the endpoints of the time span? How do smooths with longer time spans compare to those with shorter time spans? Why is the Gaussian smooth questionable near the endpoints? Why do high-degree (and 6 is a pretty high degree) polynomial fits really really suck as smoothing methods, especially near the endpoints of the time span? Yes, they really suck, and the reason is actually quite interesting.
But I’m not gonna. At least not yet. It’s not my job to educate ignorant Dunning-Kruger victims about smoothing techniques.
But here’s an offer for Paul Clark: Come to this blog, find this thread, and post a comment in which you admit — without a bunch of caveats or excuses or bullshit — just admit in no uncertain terms that you don’t know enough about smoothing to know how valid Rahmstorf’s MC-SSA smooth is or why your 6th-degree polynomial choice is a really really sucky choice. You don’t have to weep and moan, just simply admit that you don’t know enough about this topic to justify your opinion. You don’t have to admit anything else, just that you’re ignorant about smoothing methods. Don’t clutter the comment up with unrelated stuff, if you want to spew about other things put that in a separate comment. Just a single, simple admission of ignorance on this topic.
If you’ll do that, Paul Clark, then I’ll do a blog post on smoothing. Or maybe two. Maybe even three — it’s a topic of great interest for me. How ’bout it, Paul? All you have to do is admit that you’re ignorant of the subject, and I’ll educate you.
In case that offer isn’t acceptable, here’s another. Paul: I’ll blog about the topic and you don’t even have to admit anything. But if you want me to supply some lessons without you admitting your ignorance — pay me. Cash American.
Nice to see you back, Tamino – we’ve missed you!
Not that anything has changed : )
Placing a GHG blanket over the Earth keeps heat from escaping. We would expect to see the results most when solar input is lowest – winter and nights.
Is anyone doing the nighttime temperature plots? The ‘deep winter’ plots?
Take a look at winter temps north of 80 degrees for the last few years. Winter has been heating up.
http://ocean.dmi.dk/arctic/meant80n.uk.php
I suspect there’s an interesting story to be told if someone wishes to dig into it. At the moment we seem to be looking for our lost keys under the street light because the light is better rather than looking where we dropped them.
I admire your patience to expose and educate those who are ignorant, yet self-assured on matters of basic statistics. Sadly, there must have been teachers that passed these pseudo-statisticians in their calculus or statistics classes … or, wait, maybe they never took those classes, as it is hard to fake them?
[Response: In my opinion, a proper introduction to statistics is more often than not sorely lacking in a scientist’s education. How many people get a Ph.D. in math (or physics or other hard sciences) but have no training at all in stats? Too many.
In fact at the moment I’m looking at the statistics used in much of the published literature on sea level rise. It’s not very encouraging.]
I can attest to what Tamino is saying. In my physics classes, there was NO introduction to statistics, just some quick mention of it when a specific tool was needed, without going into the details. The problem becomes apparent when you start digging for some reading material: it’s either too elementary, treating complex tools as magic, or simply too advanced. If there is a good middle-ground book on statistics for people who are already good at math, I have yet to come across it. Recommendations are welcome. :)
I agree with both of you, Tamino and BojanD as I have been trained initially as a physicist without any classes in statistics myself. Evolving into a physical oceanographer, I now try to teach some of the more basic (old-fashioned) statistical techniques to extract signal from noise: The 1986 book by Bendat and Piersol entitled “Random Data: Analysis and Measurement Procedures” is still my favorite source of what I consider first basics, but it is a hard swallow for students with a soft science background.
[Response: Maybe it really is time for a new standard course of study: “Statistics for Physical Scientists.”]
Looks to me like the 12 months ended Nov. 30, 2013 is the 4th warmest (ties included) in the record:
Dec thru Nov:
2009 – 2010 – 68
2004 – 2005 – 65
1997 – 1998
2001 – 2002 – 62
2005 – 2006
2008 – 2009
2012 – 2013 – 59
Last 23 months, warming at .7C per decade. It’s abrupt climate change!
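For anyone who wants to check rankings like these, here is one hedged way to compute trailing 12-month (Dec–Nov) means in R, assuming a hypothetical monthly file giss_monthly.csv with columns year, month and anom:

```r
# Trailing 12-month means ending each November, then ranked (hypothetical input).
m <- read.csv("giss_monthly.csv")
m <- m[order(m$year, m$month), ]

run12  <- stats::filter(m$anom, rep(1/12, 12), sides = 1)   # mean of the trailing 12 months
nov    <- which(m$month == 11 & !is.na(run12))              # Dec-Nov periods ending in November
rank12 <- data.frame(year_ending = m$year[nov], mean12 = round(run12[nov], 2))
head(rank12[order(-rank12$mean12), ], 10)                   # ten warmest Dec-Nov periods
```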
nice post Tamino.
Typo 3rd para from bottom. “Opion” instead of “opinion”.
and if you have time I vote for the smoothing master class anyway!
[Response: Indeed it is one of the most interesting, and most useful, mathematical methods in general.]
Typo. This bit: “But here’s an offer for Paul Kruger: Come to this blog, find this thread, and post a comment in which you admit…”
Was probably supposed to refer to Paul Clark, not Paul Kruger.
[Response: Thanks (you’re not the only one who noticed), fixed.]
“Not that anything has changed.” :(
But yes, welcome back, Tamino! Funny, I’d just noticed that this was the warmest November anomaly for GISTEMP, and mentioned it at RC. Didn’t think that much about it.
Then, lo and behold–!
Thanks, but –
Why not forget about the Paul Krugers of this world and tell us, the interested public, about good smoothing at the edges of a data set, where only a smaller number of measured values are available, and asymmetric to the x-value at that? Can imagine a lot of people being eager like me.
[Response: It’ll take some time (I’m working on a big project right now) but … it is what I do.]
The Moon Landing Hoax vibe is strong there too!
Said another way, Ben, “Teh Stupid is strong on this one.”
:D
Tamino. Welcome back. I miss your blog when you don’t post.
Hey Tamino, good post. No excuses from me, but a retraction. After reading your post I take back my claim of “wishful thinking”. I was out of my depth on statistical analysis and admit I based my beliefs mostly on a hunch. You therefore successfully defended the integrity of Laden’s article.
I appreciate the time you took to critique my post; I’d never heard of “Ramsdorf” etc. before and learned a lot from your post.
And yet something still looks fishy about that trendline. There are limitations on any trendline based on stochastic data of limited length (including the sixth order polynomial as you mention).
I figure, again based on a hunch, that the longer the spline or smoothed period, and the shorter the data set, the more it doesn’t take into account the effect of the end points of the data series, hence the (seemingly) unwarranted upward deflection at the end of the red trendline in that graph.
In the end only more data can resolve the issue; I guess the next few years will tell.
You deserve credit for that. Well done. Please do not be deterred by whatever snark you may receive here or elsewhere in response. True skeptics, unlike deniers, respond to evidence and new information, as you have done in this instance.
I echo Dr Lewandowsky’s encouragement and congratulations for having the humility to admit you were wrong. I would encourage you to try to understand the science and stats better.
On your lunar-hoax stuff, I’m sorry, dude, but that is just embarrassing. The Jade Rabbit is up there right now roving around the surface–and I assure you that landing a spacecraft is much more difficult than orbiting one.
Your hunch is correct, in that the uncertainty is greater at the beginning and end. That’s fairly obvious even to the layman. The increased uncertainty is actually shown in Tamino’s last two graphs.
BUT! Uncertainty cuts two ways. In this particular situation, we can see two things from the data: the real trend may be slightly higher or slightly lower at the end of the series, and also, the increased uncertainty is still pretty small – there is no reasonable way you can get a dip at the end with this degree of smoothing. Tamino will now hopefully explain why, in great detail and incredible clarity.
LOL! Probably Clark’s reply will be up here by the time my post gets on, too, but if not, more hilarity will ensue. After an open retraction of the claim of the trendline being “wishful thinking” and admitting he knows too little about statistics, Clark does a full 180 and states “And yet something still looks fishy about that trendline,” followed by more handwaving.
The DK is strong in this one.
[Response: It takes at least some measure of enlightenment to retract a claim while admitting that one’s knowledge is insufficient. So let’s give credit where it’s due. And, accept that progress often comes in small steps. After all, how can we expect him to keep an open mind if we won’t?]
Paul Clark is also the producer of the silly “How deniers view global warming” video.
The one where he merges global temperature series with a Greenland ice core temperature reconstruction.
I think this item needs a little bit more debunking; that video has appeared in many places, and deniers are just ignoring the observation that Clark is using a single location. Of course, Clark declaring it “viral” with fewer than 20,000 hits is also funny, but many deniers are just using it to “teach” others how the current warming is “not unprecedented”.
From Clark’s blog today:
“Update 2: Damn, Tamino critiques my post and wins, forces humiliating retraction from me. I now admit I don’t know if the trendline by “ThingsBreak” was a case of wishful thinking or not. I suspected it was, but after reading Tamino’s post I’m not so sure. More below.”
For me, the impressive part of the spline, LOESS & mod-LOESS graph is that tight-ended 2-sigma confidence interval. I suppose in some way, the levels of bendiness of the smoothed signal can be used to constrain the ends, unlike in, say, gaussian smoothing where the confidence flies off North & South as the ends are approached.
Nice post. For those interested in an introduction to smoothing I could recommend chapter 24 of ‘Understanding Statistics’ by one G. Foster. Next read will be something on Lysenko in my attempt to understand how ideological mindsets can totally warp science.
The fake skeptics have painted themselves into a corner with their “hiatus” claims. Now that the surface temperatures are starting to increase again (more an issue of energy moving from the Arctic, where temps haven’t been properly measured), they will not be able to point to a natural mechanism for this temperature increase, otherwise they would already have predicted such warming instead of cooling/flat temps.
The MSM will be all over them, demanding answers.
Maybe.
It doesn’t help that even climate change gurus like James Hansen were buying into the fake hiatus and publicly rushing for cover. The 15-year change of temperature attributable to greenhouse gases is only about a quarter of the short term variability due to all causes. 15 years is simply too short a time to declare a hiatus (cherry-picking the starting point aside). But alas even many a PhD does not grasp statistics as well as they think they do. How can we expect the unwashed and wishfully thinking masses to agree!
WELCOME BACK!
Re. curve fitting, it’s odd: I notice there “just happen to be” 5 points of inflection in the fitted curve!
Eh? A PhD without stats? Seriously? When I got my MLA in 1978 no single required course changed my world view more than qualitative statistics. I’m by no means expert, but I recognize when a discussion attempts to sound scientific with no understanding of the basic underlying relationship that results have to chance. And a PhD can be earned without this knowledge?
[Response: I find that there’s a most unusual state of affairs among the various sciences. For example: if you want to be an M.D. (rather than Ph.D.) probably you’ll have to take at least an intro to stats but if you’re a Ph.D. in math or physics, probably not. Then things can get even more ironic: when the post-docs are getting into their research, the medical guys (who had to take stats) are far more likely to walk down the hall to the stats dept. and ask for *their* opinion, while the math/physics guys (who never studied any stats at all) are far more likely to think they know it all, forego consultation, and therefore draw the wrong conclusions. Then there are the guys who studied what is called “statistical physics” (a fascinating and powerful field of study, but NOT the same as statistics) and think that therefore they know about statistics.]
And welcome back. The complexity of some posts is like a good crossword puzzle, as I plod through it, line by line, Google at hand. Here’s to hoping that long post of explanation actually appears.
My son is currently interviewing for residency at teaching hospitals around the country. He hasn’t had a statistics class since high school.
Hi Tamino! Glad to have you back!
I’d be very interested in the smoothing masterclass too :-)
But some time ago I remember reading that you were finishing a stats book – maybe that includes smoothing. The only book advertised here is “Noise” (which I already bought). Is there a publishing date for the new one?
[Response: You can find it here.]
I’ve read the “Understanding Statistics” book, and would strongly recommend it to anybody wanting to learn the basics of statistics as it includes a discussion of the necessary (minimum of) theory rather than being merely a stats cookbook.
[Response: Which was my goal. Thanks.]
“The Freddy Krueger Effect”
— by Horatio Algeranon
Freddy Krueger denial
Was popular for awhile
But then it led
To many dead
And Elm Street lost its smile
Clark built himself a really nice house on sand with this one (look out for sea level rise!).
Even if his line is correct, which it’s not, then what? We should just ignore the huge rise from 1880-2010? Or maybe we treat the short term data as the noise that it is and focus on the long term trend? Nah, that would make too much sense.
As a medic just starting a stats MSc, who knows he knows little, can we please have the smoothing post?
[Response: Yes. More than one. But there’s a lot on the plate right now, so it’ll take a wee bit o’ time.]
Thanks, Tamino.
This attitude — “weaponized ignorance,” you might call it — annoys me to no end.
Often it comes with an overlay of passive-aggressive whinging, as in “There’s no scientific evidence of X” when the writer knows full well that there is a mountain of scientific evidence, only rather than try and critique it, they want to lure you into an effort to educate them and correct their ignorance, which they will fight manfully to maintain at all hazards — a battle in which the stupidity with which they are favored by nature grants them an unfair advantage.
Was playing a bit with the November data in R. R generates overdetermination warnings for 5th and 6th order polynomials and will not plot them differently from a quartic. However if one looks at the 1st through 4th order plots, it is quite easy to see why they were ignored by the original “skeptic”. They all “predict” much greater warming than a simple linear plot.
See graph in pdf file here: http://www.nfgarland.ca/tem.pdf
It does look like the quartic plot will predict a VERY warm medieval warming period if one carries the postdicting far enough back, so that is something for the “skeptics”, I guess.
That’s 2nd through 4th order plots, of course. Sorry re. typo.
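For reference, here is one way to produce fits like these in R (the file name is hypothetical; using poly() for an orthogonal basis avoids the near-singularity warnings that raw powers of calendar years tend to trigger):

```r
# Polynomial fits of degree 1-4 to a hypothetical file of November anomalies.
nov <- read.csv("giss_november.csv")    # columns: year, anom

plot(nov$year, nov$anom, pch = 16, cex = 0.6,
     xlab = "Year", ylab = "November anomaly (deg C)")
cols <- c("black", "red", "darkgreen", "blue")
for (d in 1:4) {
  fit <- lm(anom ~ poly(year, d), data = nov)
  lines(nov$year, predict(fit), col = cols[d], lwd = 2)
}
legend("topleft", legend = paste("degree", 1:4), col = cols, lwd = 2)
```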
Add my vote to the master class on smoothing. It’s a bugbear for me too.
And to Paul Clark, good for you, apologizing both here and on your blog.
Another typo: “uncerainty.” But great article otherwise, Tamino-sama. Ditto on good to see you back.
Hey…I resemble that remark! (Though I do try not to fool myself into believing that I know about statistics.)
And, I agree with you that it is a strange state of affairs whereby we physicists, even statistical physicists, can go about never taking a statistics course. In a way, the statistics courses of the world have sort of been geared as service courses for students in disciplines not so mathematically-inclined. So, I think there is a problem on both sides: statistics courses are not really geared to us AND we are then snobbish and think, “Oh, that statistics stuff is just explaining basic mathematics to the non-mathematically-inclined,” without appreciating the fact that there is a lot we could learn from a good statistics course. I think there’s probably a good (albeit small) market out there for “statistics for mathematicians and physicists”.
[Response: I see this so often it’s ridiculous. The more math-savvy the scientist, the more likely they are to adopt the “I already know what I need to know about this” attitude. Sometimes they’re cured of it, when they submit a paper, by a helpful referee who seeks to gently enlighten them (I’m trying!), but sometimes this just makes people more entrenched.
The worst abuse of analysis (in my opinion), which seems to touch about every field of science, is to declare that periodic or pseudo-periodic behavior is present based upon insufficient evidence or no real evidence at all. But … but … but it sure *looks* periodic!]
Finally, I will add my kudos to Paul Clark for admitting his knowledge was insufficient. It’s not often one sees people willing to do that here on the internet.
…Having just now read some of Paul Clark’s other blog posts, regarding the greenhouse effect and the attribution of the rise in CO2 (not the warming!), I have to say that unfortunately he does have a lot more ignorance that needs to be dispelled other than just about statistics. Ouch!!!
I’ve also read a bit of Clark’s blog – his multiple posts discussing the Apollo moon hoax are an amusing set of Arguments from Incredulity. Not a good indication of critical thinking, however.
Perhaps a useful reference for Mr. Clark would be Lewandowsky et al 2013, NASA faked the moon landing – Therefore (Climate) Science is a Hoax: An Anatomy of the Motivated Rejection of Science?
Good Lord??? He thinks the service module did not have enough fuel to “slow down to the Moon’s orbital speed”???? I’m speechless.
He was probably one of the people who thought that paper was a conspiracy!
A conspiracy against the illiterate, because using clay tablets at least kept those damn writers in check?
“In fact at the moment I’m looking at the statistics used in much of the published literature on sea level rise. It’s not very encouraging.”
I realise you’re short of time, but this is, I think, a much more useful and interesting area/topic to go into than some random blog post (though smoothing in general is good too).
Welcome back, good to see you.
[Response: I’ve taken on the sea level thing as a sort of big project. I intend to publish a paper on it, as well as several blog posts. And yes, smoothing is on the list too. If anyone knows where I can get a “time turner” (a la Harry Potter), please let me know!]
Welcome back Tamino. I was afraid something happened to you.
Is there ANYTHING in nature which conforms to a 6th order polynomial?? Clearly global temperature is not just a simple-minded time series/ curve fit but dependent on a complex interrelationship of factors (including greenhouse gases).
That’s because the doctors are “de-centered” (“patient-centered”) and physicists are “self-centered”.
Good to have some classic Tamino to read again. I’m looking forward to learning more about smoothing. And about your work on sea level.
I’m not sure why you bother with Paul Clark though. His website is full of classic “I looked at it and it was immediately obvious that it couldn’t be correct” deductions from total ignorance. Everything from smoothed time series to the design of NASA re-entry vehicles. I think it’s a bit of a shame that you’re giving him the oxygen of publicity (even if it is only to point out how ignorant he is), but I’d be delighted if you do go ahead with some educational posts on smoothing!
Add one more welcome back.
Let me testify that my undergraduate and graduate training in Chemistry never included a specific course in Statistics-
Instead we had a course about Scientific Research using E. Bright Wilson’s “An Introduction to Scientific Research” …which included statistics, but much more (like how to do a literature search pre-internet). I realized in graduate school that we were doing “onesies” in many kinds of experimentation that would have been more effectively done as DOE (design of experiments). So that was the first post-graduate course I took, once I had the opportunity in an industrial environment.
Some of the problems of smoothing of course were discussed at some point – using too large a polynomial fit, for example – but time series was always set up on the shelf as a subject for a later date: chemical research rarely involves time series of the sorts involved in climate change; our time series are dumped into kinetic expressions of the Arrhenius sort, so there’s always a directly coupled equation: d[c]/dt = A e^(−Ea/RT) × (concentration expressions).
Now – an obvious question, which you may of course choose not to answer – your thoughts on Cowtan and Way? Since Stefan seems bullish on their refinements, I expect you’re at least neutral…but more broadly, is there some lesson in general from a statistical point of view?
[Response: The first lesson is probably about uncertainty: it’s there.
But in general, I’m impressed with Cowtan and Way. Ever since the Berkeley project I’ve thought that Kriging is the clear winner for addressing the data-coverage problem, I’m glad they went that way. Using the satellite data to *augment* coverage also seems like a no-brainer, and I’m impressed by the way in which they did it because it guards against bias introduced by drift in the satellite data.]
“Welcome back Tamino. I was afraid something happened to you.”
Same here – did you get two mails from me today? If not let me know your current mail address! Stefan
[Response: I haven’t even checked my mail yet. But I owe you a response, it’ll be forthcoming.]
Tamino, I disagree with you. I am an astrophysicist and I took a course on statistics as a graduate student. While it introduced basic concepts, it was not very useful in practice. The key issue is that the problems we face in physics are often atypical. Almost none of the situations we encounter are described in basic books. The fact that most methods taught are frequentist does not help either. Bayesian methods should be favoured.
I did a postdoc in a mathematics department. As I explain to my colleagues in physics, statistics problems are either trivial or very complex. This is why, beyond the simplest tools, physicists tend to use Monte Carlo methods. For the simplest problems, a typical stats package is good enough.
The other issue is that most physicists don’t care much about the p-value. They count in sigmas. Anything below 3 sigmas is somewhat doubtful and needs further confirmation. Their statistical thinking has three levels: likely random, interesting, almost certain.
[Response: You’re quite right that the problems physicists encounter often don’t “fit in” with what is taught in basic statistics — not even close! But sometimes they do, in which case a little bit of basic knowledge could save a whole lot of grief.
But I think you’re right in a way: the usual “basic statistics” course isn’t nearly as much help as it should be. Maybe we need to re-design the intro course; after all, the typical astrophysics grad student can handle a helluva lot more statistics than the typical social sciences undergrad, so let’s give it to ’em.
Perhaps more important, just introducing physicists to the fact that statistics is complex — it’s a whole field of its own — and no you don’t already know everything you need to know about this, would be helpful.
There’s also another side to the coin: the statistical “outsiders” often make substantial contributions *because* they encounter those situations that don’t fit in. Time series analysis for example: in astro the time sampling is almost always irregular, often plagued by quirks which make the analysis immensely harder, and sometimes the time sampling is downright pathological. That’s why astronomers have been at the forefront of techniques to deal with irregular time sampling.
As for science catching up with the Bayesian revolution — even statisticians are still fighting this battle.]
The most important lesson I ever learned about statistics was in a freshman biology lab course.
Namely, that it is sometimes possible to get different answers to the question “reject the null hypothesis?” simply by using different statistical tests.
So, of course, the lesson was that “all you really need to do is hunt until you find the ‘right’ test “.
:)
Come on Horatio – there’s some verse begging for release, based on decisions a priori and treatments post hoc.
“Statistical Significance”
— by Horatio Algeranon
To lie with stats
Just pick the test
The guise* of maths
Will do the rest
/////////////
* “guys” works too
“I’ve taken on the sea level thing as a sort of big project. I intend to publish a paper on it, as well as several blog posts.”
That’s great. Many thanks in advance for that. I’m looking forward to it.
“Make Way for Santa”
The HADCRUT trend was flat
Until Chris Krig’el sat
On satellites
O’er polar ice
How Christmassy is that?
Like paws give way to claws
And laws give way to Claus
And elves in tights
On Christmas nights
The “pause” gave way to “cause”
Having also noted that this November was the warmest on record a few days ago, and seeing as it may be the harbinger of a big el Niño coming up in 2014 (it’s looking likely at the moment), I’m thinking that Judith Curry’s stadium wave is going to be peaking about 20 years earlier than she expected it would. Since only minor/brief el Niño conditions in 2005 and 2010 were enough to make these the hottest years on record, what will a stonkingly big one do? I think it will make 1998 look like a walk in the park.
And then that leads one to wonder: will she be embarrassed that her elaborate curve fitting exercise (I wanted to use the ‘math’ word here, but thought better of it) led to nowhere, or just slough it off? Early days yet, of course.
It is surely Wyatt’s stadium wave, not Curry’s. Wyatt does acknowledge that others “made possible (her) dissertation” – Anastasios Tsonis, Sergey Kravtsov, Peter Molnar, and Roger Pielke, Sr. – but Curry is only mentioned as giving “encouragement” and “confidence.” So if the ‘stadium wave’ has now become Curry’s, it can only be due to an act of ‘grandstanding’ on Curry’s part.
ETA: I’m reading tamino’s Understanding Statistics ATM too, but I’m only up to chapter 9. I see from above that smoothing is chapter 24. Well, I do have a long Xmas break this year…
“If anyone knows where I can get a “time turner” (a la Harry Potter), please let me know!”
I suspect Steve Goddard has one, as even the abysmally low quality of his content doesn’t seem sufficient explanation for the sheer volume thereof.
Trudat!
:-)
Besides discussing how to smooth a set of measured points it is IMHO important to try to get clarity about what one actually wants to achieve with this smoothing.
If I have an information transmission channel with external noise put on top of it, like e.g. a telephone line, the sense and goal of smoothing is quite well defined: to reconstruct the original information as true to the original as possible. If then the information transmitted has some redundancy, I can make predictions based on that redundancy.
Climate statistics is a different case. There, the single data points are the signal; they are inseparably blended into the overall process. There are no complete outliers in a strict sense. There is no “external noise put on top of some information” – the noise *is* the information, so to say. Even if recent studies tend to deny the mere existence of a bottom-of-atmosphere (BOA) temperature hiatus, it might well have been real! A redistribution of the absorbed power between bottom and top of atmosphere, land, sea and ice is absolutely possible, and even the net radiatively absorbed power varies considerably.
So what exactly are we looking for when drawing some smooth line through our point cloud? It is not some “true signal” we are looking for, because the data points themselves are the truth. It is something we can use to test our models. But if so, we could just as well test them by taking the probability field they give us, putting the data points in, and getting out some error measure. We implicitly use the assumption that the maximum probability ridge of the model output should somehow look similar to a smoothed line through the data points. This is not the same thing, and it has to be proven that it is equivalent.
I think this smoothing is more a psychological necessity – our desire to see the model–measurement similarity with our eyes – than a scientific necessity.
[Response: Perhaps we disagree. I suggest that just because “the signal is the noise,” not *all* of the signal is noise. If there’s a deterministic part (which is what I tend to call the signal) as well as a random part (which you might call a random signal but I’d call noise), then separating them, at least approximately, can be useful and enlightening.]
Kinimod,
I’m sorry, but this makes no sense. The individual measurements are not the product of a single process, but rather of a multitude of forcings and feedbacks. It makes perfect sense to try to isolate the influence of some factor we are interested in investigating (e.g. greenhouse warming) and to call that a signal. The rest is the noise for our investigation.
kinimod, the thing we call “climate” is basically weather averaged over area and time. Global temperature products average over area and a very small interval of time. Smoothing lets us see what happens on climate time-scales.
Just because something is “signal” doesn’t mean it is interesting or the subject of a particular focus. In climate research, the weather variability is simply noise, as Tamino says, and is exactly what you are trying to look past and filter out, to get a good look at the climate signal.
Obviously, global temperature products are used by many different scientists in many different fields, and for some of them, the weather and short term processes such as ENSO is exactly what they want to look at.
When you talk about models, you lose me completely. The ensemble mean is, obviously, not expected to be identical to either the actual temperature, or to a smoothed temperature. However, we do expect the actual temperature to lie within the envelope of the individual model runs and consequently the ensemble mean is a good way of describing the gross changes expected in global temperature. I think you underestimate just how much variability there is in a climate model run, on a day to day and month to month scale.
I admit to having used the term “model” somewhat differently than most people. I did not mean one single computer program plus parameters, but the whole set of high level simulation programs plus appropriately varied parameters, which reflects the strive of mankind to reproduce the macroscopic process “climate” with a still comparatively small set of variables and relations. So this, you could call it “supermodel”, creates the probability field I wrote of.
Concerning the signal in the noise: Yes, it makes sense to abstract from the wiggles of year to year change. Only with climate, the signal is what we define it to be (e.g. 15 year means of some physical property). When we prefer one smoothing method over another, we don’t find the signal, we make it. This is why it is so important, when choosing smoothing methods, to think about what we want to make and what makes most sense to make.
Smoothing means minimizing some error function. Which one to take, and how to minimize it, should have some meaning.
Kinimod: “I did not mean one single computer program plus parameters, but the whole set of high level simulation programs plus appropriately varied parameters, which reflects the strive of mankind to reproduce the macroscopic process “climate” with a still comparatively small set of variables and relations. So this, you could call it “supermodel”, creates the probability field I wrote of.”
In the words of Richard Hamming: “The purpose of computing is not numbers but rather understanding.”
We are not trying to reproduce “climate”. We are trying to understand it. To that end, there will not be any one single model we use to understand all aspects of climate. In many ways the model in Foster and Rahmstorf 2011 is more effective in demonstrating that the discussion of a “pause” is fallacious than a GCM, precisely because of its simplicity. Models need to have a purpose–and the goodness of the model derives from how well it fulfills that purpose even more than how well it matches the data (e.g. one may construct a model precisely to demonstrate that it cannot match the data if you omit a critical process.)
I know many people like to aggregate climate stats at the monthly level; however, even aggregating at the annual level is problematic for meeting the assumptions of linear regression. You need 2–5 year aggregations before the autocorrelations between successive values start to vanish.
Obviously autocorrelation can be accounted for, but to my mind, that really complicates the issue by dragging the discussion away from climate into short term talking points. As long as autocorrelation _needs_ to be accounted for, to my way of thinking at least, you are still talking more about weather and less about climate.
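A quick R illustration of that point (the monthly file name and columns are hypothetical): the residuals from a straight-line fit to monthly anomalies are strongly autocorrelated, while 5-year means are much closer to independent.

```r
# Autocorrelation of residuals: monthly data vs. 5-year means (hypothetical input).
m   <- read.csv("giss_monthly.csv")             # columns: year, month, anom
m$t <- m$year + (m$month - 0.5) / 12            # decimal time
acf(residuals(lm(anom ~ t, data = m)), lag.max = 36)   # strong short-lag autocorrelation

m$pentad <- 5 * floor(m$year / 5)
p5 <- aggregate(anom ~ pentad, data = m, mean)  # 5-year means
acf(residuals(lm(anom ~ pentad, data = p5)))    # much weaker autocorrelation
```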
“We implicitly use the assumption, that the maximum probability ridge of the model output should somehow look similar to a smoothed line through the data points. ”
Some people do. Some people also take the least squares fit line (for the last 15 years, for example) as “THE” (one and only) trend while totally ignoring the uncertainty.
They believe that the latter indicates precisely what temperatures have “actually” been doing and use that to claim that temperature has plateaued or paused.
But there are also some (scientists) who understand that there is always uncertainty attached to those “solid” lines (which might not be as solid as they are portrayed)
Smoothing is not really a problem for those who understand its purpose and limitations.
It’s only a problem for those who don’t, especially those who see any smoothing that shows temperature headed upward at the end as an effort to “trick” the public into thinking temperature is increasing when they just “know” (based on the short term trend line) that it has “paused”.
Like the uninformed public debate about short term trends, the public debate about smoothing is largely (if not entirely) meaningless.
Worse than useless.
From the standpoint of explaining things to the public, climate scientists would be better off sticking with long term trends or better yet, just showing people what has happened to long term (~25 year) average over the last half century.
But can’t undo what is already done.
“From the standpoint of explaining things to the public, climate scientists would be better off sticking with long term trends or better yet, just showing people what has happened to long term (~25 year) average over the last half century.”
Been trying to do just that–not that I’m a scientist. It’s tough, though, given the amazing volume of rebunking of the ’15-year pause’ meme. A lot of folks ‘just know’ it *has* to be significant…
The desire to look at shorter periods stems precisely from the fact that they are less reliable. Noise in the short term is the only way you can maintain that the long-term trend has “ended” or “paused”. This is the same motivation behind the fun-with-Fourier crowd–after all in a cycle, what goes up must eventually come down. The irony is that these guys don’t understand enough about Fourier analysis to realize that it is precisely at the endpoints where finite Fourier series break down.
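To make the endpoint problem concrete, here is a small R sketch: reconstruct a trending series from a handful of Fourier harmonics (which implicitly treats the series as periodic) and watch the fit go wrong at the ends.

```r
# Finite Fourier reconstruction of a trending series: the DFT assumes
# periodicity, so the fit is forced to "come back down" at the endpoints.
set.seed(1)
n <- 128
y <- 0.01 * (1:n) + rnorm(n, sd = 0.1)     # linear trend plus noise

Y <- fft(y)
keep <- c(1:6, (n - 4):n)                  # DC, harmonics 1-5, and their conjugates
Y[-keep] <- 0
y_hat <- Re(fft(Y, inverse = TRUE)) / n

plot(1:n, y, type = "l", col = "grey")
lines(1:n, y_hat, col = "red", lwd = 2)    # note the artificial behavior at both ends
```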
Yes, the persistence of some in misusing data is clear if indirect evidence of bad faith.
Allow me to add my “Welcome back” to the rest. On the issue of statistics training in physics, I’d have to say it really is pretty abysmal. I never had a formal stats class until I taught it to undergrads, staying a chapter ahead of them in the book. Other than that, I’ve learned statistical analysis techniques as I needed them. Fortunately, I find the subject interesting and fairly intuitive. Thanks for all your efforts.
Very good post as usual. A little heated at the end, but well-argued nonetheless. It’s interesting how Clark said “wishful thinking” as if someone were just drawing the red line with a pen and no formula behind the smoothing. The other thing behind “wishful thinking” is the idea that those who endorse the concept of AGW want to see it happen simply because we’re so hungry to be right about something, and as if almost all of the research we have doesn’t already point toward the reality of AGW. I see Clark retracted that phrase in particular, but he didn’t give a convincing reason why.
Is this the answer to the unanswered question?
http://en.wikipedia.org/wiki/Runge%27s_phenomenon
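For the curious, a minimal R demonstration of the phenomenon described at that link: a high-degree polynomial fit through equally spaced samples of a smooth function oscillates wildly near the endpoints, even though it passes through every sample.

```r
# Runge's phenomenon in miniature.
x   <- seq(-1, 1, length.out = 11)
y   <- 1 / (1 + 25 * x^2)                   # Runge's function
fit <- lm(y ~ poly(x, 10))                  # 11 points, 11 coefficients: exact interpolation

xx <- seq(-1, 1, length.out = 401)
plot(xx, 1 / (1 + 25 * xx^2), type = "l", ylim = c(-0.5, 2), xlab = "x", ylab = "y")
lines(xx, predict(fit, newdata = data.frame(x = xx)), col = "red")   # wild near the ends
points(x, y, pch = 16)
```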
“Fitting Ends”
— by Horatio Algeranon
The end of fits say “Warning!”
“Construction up ahead
“The precipice is yawning
And road, it has no bed”
This incident shows that methods of statistical analysis, including smoothing should be reported along with graphs and data. However, it’s probably too much to expect that bloggers post such details as graphs and conclusions are copied from one site to another. On the other hand, if I were thinking about criticizing a graph or analysis, I would surely want to go back to where the original analysis was presented and not make guesses about what was done.
In teaching undergrads, I find that it’s really important to insist that they understand experimental design and the units and axes descriptions on both theoretical and empirical graphs. Paying attention to such things and not just the shape of a curve is a big step forward for young students. The next step is having an understanding of statistical analysis and testing.
kinimod, measurements of things like temperatures/precipitation, etc are prone to instrumental noise (more in the past), but especially to sampling issues. To use your example, let us say that there are a billion telephone lines and we want to characterize the resistance/km and the noise on each. So we sample out of the billion, maybe 100. How do we know the 100 are representative? How many birds are sitting on each line when we sample, and more.
Then to push the analogy further, since resistance varies with temperature and there are aging issues there is going to be a seasonal variation, etc. Climate is not a spool of wire sitting in a lab.
A clarifying remark.
So we have basic measurement noise, sampling noise and, on top of that, the deterministic chaos noise with its high, medium and low frequencies. My view had been somehow reduced to the latter.
Embedded in that are the more or less (mostly less) periodic “oscillations” like ENSO and NAO. (They seem to be so aperiodic – they shouldn’t be called oscillations at all, rather “fluctuations”. Another matter.)
And hidden in all this is the effect of net energy uptake, fluctuating in its sum as well as in its distribution.
A miracle that we see a signal at all – but we do.
We really need a denser network of ocean and high atmosphere measurement points, as well as tighter tolerances in extraterrestrial radiation balance measurements…
Horatio (and an entreaty to Tamino, for this off-topic excursion) I’d LOVE to play and sing a song of yours! Given your brutally good talent at lyricism, I’m sure you can wax poetic about things other than the D-K afflicted. Please contact me, if you’re interested, and to Tamino, welcome back!
I went back to the Greg Laden post and saw that his version of the Nov temperature anomaly graph gives the type of smoothing, the citation for the method and a comment that this method gives results very similar to the Lowess method, all of this in the figure legend. So, it would seem that Paul Clark was confused about the source of the red (smoothing) line because he did not even read the figure legend (?)
A 6th-degree polynomial fit? Really? He couldn’t be more ridiculous if he was wearing a bright red nose and coming out of a clown car with 50 of his closest friends.
When I was teaching linear alg. this semester, we covered least squares curve fits to data. I explained very carefully that we almost always use a line, rarely a quadratic, logarithmic or exponential function, and that we essentially never use anything else.
How the hell do you justify fitting a 6th degree polynomial?
Paul Clark would have failed the class.
“Give me four parameters, and I can fit an elephant. Five, and I can make him wiggle his trunk.” –John Von Neumann
… and with 6, he perches on a beach ball.
elspi, have you looked at The Manga Guide to Linear Algebra? If yes, what do you think about it?
This is the first I have heard of it. It sounds like more fun than the Anton or Lay, but I think you might want a little more rigor than what this seems to have. (I don’t have a copy of it, so I am just going by the reviews.) Maybe it would be good as a supplemental text rather than the textbook.
And if I recall correctly, Tamino did a post here which addressed this sort of thing. Basically, if you do a linear fit, you have two parameters to adjust. If you do a quadratic, you have 3 parameters to play with. And there exists a test that tells you if adding that extra parameter improved the fit enough to actually justify adding it. That is, your quadratic fit will always be better than a linear fit, but it is only really “better” if it improves the fit by more than a certain amount.
I doubt there is any data anywhere that is actually better described by a 6th degree polynomial rather than a lower order one.
John, there are many tests that determine whether an additional parameter adds information. One of the first was the Akaike Information Criterion. There is also the Schwarz or Bayesian Information criterion and dozens of others.
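A short R sketch of such a comparison (a nested F-test via anova(), plus AIC), using the hypothetical annual series from the earlier sketches:

```r
# Does the extra parameter earn its keep?  Nested F-tests and AIC for
# polynomial fits of increasing degree.
f1 <- lm(anom ~ year,          data = giss)
f2 <- lm(anom ~ poly(year, 2), data = giss)
f6 <- lm(anom ~ poly(year, 6), data = giss)

anova(f1, f2, f6)   # F-test for each block of added terms
AIC(f1, f2, f6)     # lower AIC = better trade-off between fit and complexity
```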
From a reply by Tamino to a comment (and repeated in other form a couple more times): “How many people get a Ph.D. in math (or physics or other hard sciences) but have no training at all in stats? Too many.”
True enough, in my experience – which may be dated. And the reason is partly as given by Tamino, that people look at basic stats and think “I know this already, it’s obvious/trivial, etc.”
Interestingly, Paul Clark has essentially issued a complete retraction on his website. Well, give credit where credit is due.
My experience may be dated as well (1986), but it wasn’t just physics departments that ignored statistics requirements. My degree was in ‘Theoretical Physics and Applied Mathematics’, but I did not have a single statistics course. Apparently the math department thought that real number theory and complex number theory were more important to applied mathematics than statistics.
Statistics is often a separate department and would love to have more students. I guess they were squeezed out of your degree requirements. University politics.
Tamino, you realize you are now on the hook for a smoothing class?
I thought they had a smoothing class over at Climate Etc.. Or maybe it was a smooching class.
Just want to comment on what a rare event it is for someone in the blogospheric climate debate to be as up front in admitting error as Paul Clark was here. This is especially true given the degree of snark in Tamino’s post.
Everyone makes mistakes. Hardly anyone, that I’ve seen, is as up front about showing accountability as Clark has been here. Technical acuity is important in these debates, but integrity and accountability are equally important – and they are sorely lacking.
Joshua,
I’ll give him that, and it is deserving of praise. What is more, we all owe him a debt of gratitude, since we now get a mini-course on smoothing. Thanks, Paul!
I dispute that it is necessarily rare. Our host here was similarly magnanimous when Ian Jolliffe weighed in on the issue of uncentered vs. decentered PCA. Within the scientific community, there are plenty of folks willing to admit when they are wrong. The denialist community would rather double down on stupid.
Where I would criticize Paul–and I do so out of genuine concern for a smart guy who is obviously capable of learning–is in his predilection to assume the worst motivations based on limited understanding and evidence. This was evident in his post on the November temperatures. It is evident in his moon-mission denialism. And if he does not realize this tendency and address it, it will hold him back.
Napoleon once said, “Never attribute to malice that which can be explained by stupidity.” I would add that maybe you should understand whether something really is stupid before attributing it to stupidity. We’re not always the smartest person in the room.
snarkrates –
I have found it to be rare in the climate blogosphere – if not necessarily in the scientific community more generally.
Joshua, A Venn diagram showing the intersection between the climate science and the climate blogosphere would probably resemble a tangent.
I recall my student days (I studied physics, but ended up in software development, which I found a little more attractive at the time, and yet I still find physics interesting and am lurking just for hobby). I considered statistics somehow easy (e.g. I was comparing calculating means and variances to differential equations). And I was sooooo wrong. And I really do appreciate what Tamino writes, because I’ve learnt a lot from his lessons – very well written indeed. And as for me, the greatest lesson was to realize how easily I deceived myself by rushing to conclusions without thinking about what the data really show (and thus running into wrong conclusions many times).
Why do I want to smooth the data? It is a record of the behavior of a complex system. Raw, it may provide clues to forcing processes and heat transfer between the atmosphere and the oceans, including sea ice. Smoothed, we lose that information.
Smoothed, it seems to give us a more intuitive projection into how things are going and what may come next, but given the nonlinear feedback in the system, reasonable estimates of the system’s future behavior can only come from a deep understanding of the system.
When does smoothing yield a deeper understanding of the system? And, when is smoothing more a tool of communication and presentation graphics?
Kudos to Paul Clark!
@Aaron-
Raw we have noise plus signal
Smoothed- we remove some kinds of noise.
Perhaps there are people who believe there is no such thing as ‘noise’.
I also find language about complex systems and non-linear feedbacks to be mystical incantations spoken with the intent of propagating a belief that if we don’t know everything about everything, we know nothing. I observe that this language is used to avoid making any kind of useful statement about the role of the ocean as a heat sink that might be subject to formulating a hypothesis, examining existing data and achieving verification.
Analysis of Variance is a well established branch of statistics- and its goal isn’t to describe every cause of variance, but the main ones, their contributions and the reliability of those estimates. Lean, Foster & Rahmstorf and now Kosaka and Xie have approaches that focus on determining what the major actors are besides radiative forcings and seem to do a reasonably good job of it. There’s little evidence of the kind that normally shows up in an ANOVA project that we’re missing something significant.
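For concreteness, here is a much-simplified sketch of that style of analysis in R (no lags or seasonal terms, and the file and column names are hypothetical): regress monthly temperature on an ENSO index, volcanic aerosol optical depth and solar output alongside a linear trend, then subtract the estimated exogenous contributions.

```r
# Simplified Foster & Rahmstorf (2011)-style regression (hypothetical inputs).
d   <- read.csv("temp_and_factors.csv")   # columns: time, temp, mei, aod, tsi
fit <- lm(temp ~ time + mei + aod + tsi, data = d)
summary(fit)

# "Adjusted" series: remove the estimated ENSO, volcanic and solar contributions,
# leaving the trend plus residual.
adj <- d$temp - coef(fit)["mei"] * d$mei -
                coef(fit)["aod"] * d$aod -
                coef(fit)["tsi"] * d$tsi
plot(d$time, adj, type = "l")
```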
ANOVAs aren’t appropriate to data aggregated at too short intervals, however, as Tamino has been at pains to point out on many occasions.
An example that made me more cautious about short term data is the decrease of Arctic sea ice volume. For 11 years, there was a spectacular plunge towards an ice-free summer in 2017 or so. This summer was the exception of exceptions – the volume curve sprang back to almost the normal rate of decrease (still, of course, a medium-term decrease). This is no all-clear at all; it just shows that the probability of ending up disgraced by projections based on short term trends is near 0.5.
@Dave123,
What kinds of noise are removed by time series smoothing? Most of the “noise” was removed as thousands of data points were averaged together to get the monthly temperature.
As time series, inter-annual variations, such as those produced by El Nino, are noise by some measures, but they are also signals that indicate the processes that drive atmospheric and oceanic circulation. Changes between successive data points can provide information on (non-linear feedback) system stability. If we break the curve up into segments, take the mean of each segment and the standard deviation of each data point relative to the mean, then the relative standard deviation of adjacent data points is a very good indicator of the stability of the system. This is a well established industrial technique that has been validated against thousands of physical systems, specifically including the manufacture of microprocessors since 1992.
It did not matter what some computer model said; what mattered was what the real system was doing right then! Loss of Arctic sea ice proves the climate models do not have good insight into abrupt climate events. This is because the climate system as defined and bounded in the GCM violates the assumptions of calculus. For example, equilibrium equations are used to calculate the flow of discontinuous media, when finite element analysis should be used. Violation of math assumptions results in models that go wrong suddenly.
By algebraically adding the standard deviations of the last data points, one can get a good estimate of whether the system is “in control” or “out of control”. “In control” systems were functioning properly. “Out of control” systems blow up and kill people. In industry, when systems went out of control, alarms went off, and we got people to safety. We had to know if the system was unstable before it actually blew up. We had to understand our non-linear feedback systems, and we had to know if they were stable on a second by second basis. As a rule of thumb, if 6 standard deviations accumulate without the data series crossing the mean, then the system is out of control. This may be 1 data point 6 sd off the mean, or a series of 6 consecutive data points, each 1 sd off the same side of the mean, or 3 points that are 2 sd in the same direction, or something in between. (See the work of Ed Deming.)
In Climate Science, “Alarmist” is a slur. In industry, Alarms are something that save people.
If we sit down with the full GISS monthly temp data base and run stdev on segments of the data, it is clear that there are places where we are 6 sd off the mean. By any standard, 6 sd off the mean is out of control. The truth is that properly charted, the November Temperatures jumped the rails in 1977. Anybody that looked at the 1977 to 1981 numbers knew that the climate system was not stable. Then, it stabilized for a bit, but in 1994, we have 6 consecutive data points each at least 1 sd above the mean.
BOOM! Nov 2001 was 6 sd off the mean. So much for any warming hiatus. 2002 was 5 sd off the mean in the same direction. At the time, I brought this up and was called an Alarmist! Nov temps have not crossed the mean since. We cannot draw any statistical conclusions because we have fallen out of any plausible distribution. We have moved from one climate system to another, very different climate system.
This is not news; this is the “Hockey Stick”, expressed in a different way. It means that right now there is enough heat in the system to generate weather that mankind has never seen before. Sandy and Haiyan were not flukes; they are what one gets when system heat is 8 sd over the 1880–1900 mean. And the droughts are likely to be spectacular. We are 6 sigma past the Dust Bowl.
The last 12 years of GISS data scares me. No monthly series crosses its baseline mean, and everything is much hotter than baseline. Sea ice in 2007 says that the models do not know what Mother Nature is about to do.
Smoothing is to illustrate fairy tales to soothe children.
As an engineer myself, I would be extremely unhappy with the safe operation of a design relying on the evacuation of all personnel prior to its uncontrolled blowing up. And the testing you describe to give warning of such an incident was “a rule of thumb”?!! Ed Deming would not be pleased with what you describe! It is not quality control as I’ve ever heard it described before.
And applying your “rule of thumb” to climate data series is, I would suggest, not a useful method of analysis.
I would also suggest that the inability of climate models to model trends in Arctic sea ice is not to do with “not hav(ing) good insight into abrupt climate events” but rather with not having a full understanding of the physical melting/freezing processes for Arctic sea ice, processes which involve significant amounts of feedback.
John – point noted. Still, I look at all those studies that point to something in the Pacific Ocean and argue we’ve got the significant factors identified – regardless of whether the technique used is formally classified as ANOVA.
Aaron – I didn’t quite get where you were coming from, but I’m going to suggest that, from my experience in simulation, we don’t know what we don’t know and have no means of seeing real warning flags.
About 20 years ago now, I built a very successful kinetic/thermodynamic model of an exothermic chemical reactor. The simulator, built on empirically determined rate constants, heat transfer coefficients at one scale, and mass transfer coefficients, worked very, very well in terms of predicting the results in the plant (very profitable results).
While we all knew that too-high feed rates could lead to runaways, one of the surprising discoveries was that too-low feed rates could do the same (by allowing unreacted feedstock to accumulate and create a “too high” situation).
I was never able to say “ah, here’s the tipping point” and “here are the things to watch for”. One moment the temperature profiles looked normal, the next – boom. (I made a game of this for educating the public about what we were doing, pointing out that having the simulator allowed us to set safe operating boundaries.)
So I appreciate the concern. Methane clathrates worry me (even if they don’t worry Gavin), as does some sudden discharge of accumulated Greenland meltwater, loss of a major section of an ice sheet, or death of an unrecognized organism that promotes CO2 uptake in the oceans. But I don’t think we’ll have much if any warning on those things – certainly not from tracking temperatures. I don’t think the temperature data are where you look for this stuff either. That’s too crude a tool, already “smoothed” by the sheer mass of the global system. The signal we’ll figure out after it happens will be a current shift here, an ecological shift there. If you want a program aimed at this, expand what’s being observed, and don’t look to simple temperature records for the key. In fact, consider the idea that we could have a very troublesome, sudden shift in rainfall distribution without altering the GMST trajectory at all.
So look to Hadley cell boundary migrations and stability, jet stream stability (and does anyone think we’ll be able to forecast the development, duration and impact of meanders in the jet stream?). Watch glacial mass loss and outflows. “Listen” to glaciers for the groans they make under stress and transit. (Anything that might be borrowed from the people working – unsuccessfully so far – on earthquake prediction could be useful here.) Look to existing modeling outputs for transitions in variables other than temperature, and trace those back to something that can be measured besides temperature.
In other words, look just about anywhere but GMST.
Aaron Lewis,
Averaging is a smoothing technique. It removes high-frequency noise, revealing longer-term trends. The standard deviation is simply the square root of the second central moment of the distribution, and the distribution in this case is not stationary. Your sample is not i.i.d. Of course you will see large deviations from the mean of past performance.
What you are failing to grasp is that the goal is not to “reproduce” the system, but rather to understand it–specifically the processes that are important in its dynamics. Some of those processes–in particular those that are threatening to eat our lunch–are long-term. Smoothing will better reveal the characteristics of those processes. It will also give us better models off of which we can model the “noise”. I doubt anyone will find the message that emerges from our smoothing exercise “soothing”.
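A minimal illustration of both points, with made-up numbers: a drifting (non-stationary) series will inevitably sit many standard deviations away from the mean of an early baseline period even though nothing has “blown up”, while a crude smoother pulls the slow component out of the noise. None of the values below are real GISS data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
trend = 0.02 * np.arange(n)                 # steady warming-like drift
series = trend + rng.normal(0.0, 0.15, n)   # synthetic data, not GISS

# Control-chart logic with a fixed early baseline: huge "sigmas" are inevitable
baseline = series[:30]
z_last = (series[-1] - baseline.mean()) / baseline.std(ddof=1)
print(f"last point is {z_last:.1f} sd above the 'baseline' mean")

# A simple smoother (centered moving average) recovers the slow component
k = 11
kernel = np.ones(k) / k
smooth = np.convolve(series, kernel, mode="valid")
print("smoothed endpoints:", round(smooth[0], 2), "->", round(smooth[-1], 2))
```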
Merry Christmas, everyone. Thanks for the insight, feedback (no pun intended there) and humanity. Best to all in the New Year.
Same to you, Kevin, and thanks.
snarkrates wrote: “What you are failing to grasp is that the goal is not to “reproduce” the system, but rather to understand it–specifically the processes that are important in its dynamics.”
This seems to be either poorly stated or ill thought out. Of course one of the goals is to reproduce the system. In point of fact, failure to reproduce is a sign the model is deficient, and with GCMs, for example, benchmarking often includes the ability to reproduce paleoclimates and/or the last 100 years of global temperatures.
That said, how much has been gained since Hansen’s early, simple climate models? Applying new knowledge (or better understanding) of the actual physical processes is not necessarily going to lead to “better” GCMs. One of the takeaway points from Wolfram’s A New Kind of Science is that even simple rules can lead to complex results. So adding layers of complex rules and constraints to a program often yields little or no additional benefit.
One of the other significant findings from Wolfram is that small changes can lead to drastically different results – not because of sensitivity to initial conditions per se, but because of the inherent nature of some systems even using very simple rules.
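For anyone who wants to see that Wolfram point in miniature, the Rule 30 elementary cellular automaton is the textbook example – a one-line update rule that produces output complex enough to defy simple prediction. A small sketch:

```python
import numpy as np

# Rule 30 elementary cellular automaton: a very simple rule that generates
# complex, hard-to-predict structure from a single "on" cell.
width, steps = 63, 30
row = np.zeros(width, dtype=int)
row[width // 2] = 1                      # single "on" cell in the middle

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    left, right = np.roll(row, 1), np.roll(row, -1)
    # Rule 30 update: new cell = left XOR (center OR right)
    row = left ^ (row | right)
```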
So, if “reproduction” of system behavior is the goal, why not fit 3 points with a quadratic, 4 with a cubic…? That is guaranteed to fit the data exactly (zero residuals, R^2 = 1). It will also yield no predictive power whatsoever. You are correct that adding complexity may not necessarily improve predictive power. What you are looking for is to maximize predictive power given the data you have to work with. Matching past results is not the goal. Predictive power requires understanding the important contributors to the system.
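A toy illustration of the “perfect fit, zero predictive power” point, with synthetic numbers of my own choosing: a cubic through four roughly linear points fits them exactly, but will generally extrapolate far worse than the simpler straight line.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.arange(4, dtype=float)
y = 0.5 * x + rng.normal(0.0, 0.2, 4)   # roughly linear data with noise

cubic = np.polyfit(x, y, 3)             # 4 parameters through 4 points: exact fit
linear = np.polyfit(x, y, 1)            # simpler model

print("cubic residuals:", np.round(y - np.polyval(cubic, x), 10))  # all ~0
x_new = 6.0                             # extrapolate a couple of steps ahead
print("cubic prediction:", round(np.polyval(cubic, x_new), 2))
print("linear prediction:", round(np.polyval(linear, x_new), 2))
print("'truth' (no noise):", 0.5 * x_new)
```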
If you think that is not “well thought out,” you haven’t understood it.
This is an interesting point. There is an unlimited number of models that reproduce the past temperature plot well. The task is to find the subset with the highest predictive value. (Of course, there is a vastly larger number of models that don’t reproduce the past well, and a very tiny fraction of those will even give good predictions of the future – but more in the sense of ten zillion monkeys typing randomly on keyboards.)
A good simulation model uses not only the bare time series of temperature, but all the information available about subsystems, their behaviour, and their relations to one another. So the good computer models actually use – and reproduce – a body of information that is much, much larger than the bare temperature time series and which is hidden behind it, so to say. This body of information contains other details which can be – and are – used to evaluate the models, e.g. stratospheric temperatures.
Fortunately, the conservation of energy allows us to divide the earth system into a few big heat-containing domains – outer space, the atmosphere, land masses, the upper and lower oceans, the ice sheets, the big circulation patterns – and set up power flows between them. With this rough model we get a rough behaviour, and with skill and luck the approximation to reality is still satisfactory. This is what snarkrates means by “understanding”.
This is the art of physics.
But as the system is chaotic, the predictive power is limited.
Coming back to smoothing: a good smoothing algorithm, IMO, is one that is based on such a simplified earth model.
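As a very rough illustration of what “a few big heat-containing domains with power flows between them” can look like in code, here is a minimal two-box energy-balance sketch; the structure is the standard surface-plus-deep-ocean box model, but every parameter value below is an illustrative placeholder, not a calibrated estimate.

```python
# Minimal two-box energy balance sketch: a surface box coupled to a deep-ocean
# box, driven by a constant radiative forcing F.  Parameter values are
# placeholders for illustration only.
C_s, C_d = 8.0, 100.0      # heat capacities (W yr m^-2 K^-1), surface and deep
lam = 1.2                  # radiative feedback parameter (W m^-2 K^-1)
gamma = 0.7                # surface-deep heat exchange coefficient (W m^-2 K^-1)
F = 3.7                    # constant forcing, roughly "doubled CO2" scale (W m^-2)

dt = 0.1                   # time step (years)
steps = int(200 / dt)
T_s = T_d = 0.0            # temperature anomalies of the two boxes (K)
for _ in range(steps):
    dT_s = (F - lam * T_s - gamma * (T_s - T_d)) / C_s
    dT_d = gamma * (T_s - T_d) / C_d
    T_s += dT_s * dt
    T_d += dT_d * dt

print(f"surface anomaly after 200 yr: {T_s:.2f} K (equilibrium ~ {F/lam:.2f} K)")
```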
> Of course one of the goals is to reproduce the system
Why “of course”?
The modelers are well aware that climate is an emergent process.
That’s why multiple runs are done with the models, to see what emerges, and get a range of likely outcomes.
Has anyone claimed to be attempting to write a climate model that will “reproduce the system”? I’d welcome a pointer to read up on that.
But if you’re assuming that’s true — it’d be a good assumption to check.
From IPCC FAQ 8.1: “There is considerable confidence that climate models provide credible quantitative estimates of future climate change, particularly at continental scales and above. This confidence comes from the foundation of the models in accepted physical principles and from their ability to reproduce observed features of current climate and past climate changes.”
snarkrates, note that I said “one of the goals” – not the only goal.
Hank, “of course” because if it *can’t* reproduce observable features, then it isn’t of much use.
Kevin, you are confusing goals with evidence. The agreement of a simple model with data provides evidence that the model contains elements of the truth. However, the real sine qua non is correct prediction of new behavior.
If all you want to do is reproduce the behavior, a quartic will do to fit 5 data points. If you want predictive power then you have to go with the simplest model that adequately matches the data or you have to go with the known physics. Again, you haven’t understood why we do physical modeling.
P.S. The word ‘reproduce’ is used 53 times in the text of AR5 Chapter 9: Evaluation of Climate Models. This count does not even include the many instances where other words and phrases are used as synonyms. Anyone who does not think reproduction is one of the goals of climate modeling, or one of the methods used to evaluate models, has not read the IPCC report or has skipped Chapter 9 (Chapter 8 in AR4).
Another clue:
“The purpose of computing is insight, not numbers.”–Richard Hamming
“If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.
So all models are first tested in a process called Hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong.”
http://www.skepticalscience.com/climate-models.htm
snarkrates, you should contact SkepticalScience and let them know they have this wrong – the ability to reproduce past climate changes has no correlation with predicting future events and GCMs exist only for insight.
Well, there’s this: http://iopscience.iop.org/1748-9326/8/1/014047
which suggests that taking the path of an egregiously stupid fossil carbon overshoot (BAU) is more likely to badly screw up Europe’s climate, compared to taking paths that are smarter about promptly reducing fossil fuel use. It’s not -just- the total amount of fossil carbon burned that makes a difference in the system.
The task is to get the best predictions available. I believe that there is a correlation between the quality of reproduction of the past and the quality of prediction. This is a heuristic belief of mine – some smarter mind has probably proved it somewhere, sometime. This is the rationale for reproducing the past.
Understanding what happens – as beautiful and desirable as it is – is only possible if the system can indeed be reduced from the thousands of degrees of freedom of the computer models to the seven plus or minus two(*) degrees of freedom the human mind can handle, while still keeping the model–reality match good enough. It might turn out that this is not possible, or is possible only for certain very limited tasks.
(*)https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two
“7 ± 2 degrees of freedom?!!” Wouldn’t that then allow the apocryphal elephant to trumpet the national anthem while at the same time writing out the Pentateuch in immaculate copperplate and juggling a lemon!
It is 7 ± 2 items.
kinimod says: “Understanding what happens… is only possible, if the system can indeed be reduced from the thousands of degrees of freedom of the computer models to the seven plus or minus two(*) degrees of freedom, the human mind can handle …”
I think you’re misapplying the work of Miller. While it may be true that we may have difficulty storing a random string of 10 digits in short-term memory, we don’t have to rely on short-term memory to understand climate. We can study individual slices of the pertinent science, make notes, read scientific papers and keep them for reference, write computer programs to see how different variables interact, etc. At any time we can refer back to work already done to refresh our memory – and much of the information will be in long-term memory after sufficient study.
Google Akaike Information Criterion and related quantities in information theory. There is a balance between goodness of fit and model simplicity. The heuristics compete and balance.
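A small sketch of that balance in action, using synthetic data and the simple Gaussian least-squares form of AIC: higher-degree polynomials shave a little off the residuals but pay the 2k penalty, so the simpler model usually comes out ahead.

```python
import numpy as np

def aic_gaussian(y, y_hat, k):
    """AIC for a least-squares fit with k fitted parameters, assuming
    Gaussian residuals: n*ln(RSS/n) + 2k (additive constants dropped)."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
y = 1.0 + 0.3 * x + rng.normal(0.0, 0.5, x.size)   # truly linear + noise

for deg in range(1, 7):
    coeffs = np.polyfit(x, y, deg)
    score = aic_gaussian(y, np.polyval(coeffs, x), k=deg + 1)
    print(f"degree {deg}: AIC = {score:.1f}")
# Higher-degree fits reduce the residuals a little but pay the 2k penalty,
# so a low-degree model typically wins -- fit vs. simplicity in one number.
```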
George Miller, my dissertational grandfather (advisor’s advisor), BTW :0 !!!, never talked about 7 plus or minus 2 degrees of freedom to the best of my knowledge. The human brain is limited, yes, but by offloading complexity in various ways we can encompass a lot more than 7.
BTW2: Later estimates put STM capacity closer to 5.
Not really my stat area, and I’m wondering if it is someone’s here: if we treat the canonical series as the equivalent of clinical trials and monthly/yearly looks as interim analyses (planned/unplanned), for how many years would the observed trend have to flatten before we would have to reject the prior trend? Of course this ignores factoring in PDO etc., but then so do many blog types.
This seems to be basic to a lot of the “hiatus” talk as so many are saying “I looked each year and this year we found significance”. Trends instead of groups, I realize, but still somewhat the same problem as discontinuing a trial based on interim looks.
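One way to get a feel for the interim-looks problem is a quick simulation. The slope and noise values below are my own rough assumptions (not fitted to any dataset), and scipy is assumed for the regression p-values: even when the underlying trend never stops, short recent-window looks taken every year will very often turn up at least one “no significant trend” result somewhere along the way.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Sketch of the "interim looks" problem: a series that really does keep
# warming at a steady rate, checked every year for a "significant trend"
# over just the most recent 10 years.  All numbers are illustrative only.
n_sims, n_years, window = 2000, 30, 10
true_slope, noise_sd = 0.017, 0.1          # assumed, roughly GISS-like magnitudes

false_flat = 0
for _ in range(n_sims):
    y = true_slope * np.arange(n_years) + rng.normal(0.0, noise_sd, n_years)
    # look every year once a full window exists; "flat" = slope not significant
    flat_somewhere = any(
        stats.linregress(np.arange(window), y[end - window:end]).pvalue > 0.05
        for end in range(window, n_years + 1)
    )
    false_flat += flat_somewhere

print(f"fraction of runs with at least one 'no significant trend' look: "
      f"{false_flat / n_sims:.2f}")
```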