His method? A favorite climate denier trick: the one called “cherry-picking.”
Clutz uses yearly average Arctic sea ice extent from satellite data, from 1979 (the first complete year of satellite coverage) through 2017 (the most recent complete year). He plots the data like this:
The white vertical bars are the yearly values, while the red line is Clutz’s idea of the “trend.” He has already chosen a dreadful graph to show the data. The entire change over the nearly 40-year time span is compressed into a slice only 1/7th the height of the graph, which makes the real changes much harder to see. It seems to me that serves Ron Clutz’s purpose perfectly, because I don’t think he wants you to see the real changes. If you could see up close how his claimed “trend” matches the data, you would know how fake it is.
Then Ron Clutz gives us his interpretation:
There was a small loss of ice extent over the first 15 years, then a dramatic downturn for 13 years, 6 times the rate as before. That was followed by the current plateau with virtually no further loss of ice extent. All the fuss is over that middle period, and we know what caused it. A lot of multi-year ice was flushed out through the Fram Strait, leaving behind more easily melted younger ice. The effects from that natural occurrence bottomed out in 2007.
Let’s look at the data more closely, and instead of deciding what we want it to say, let’s listen to what it’s telling us. Here’s a vastly better (i.e. more informative) graph of the same data:
One way to get an estimate of a non-linear trend is with a good smoothing method, and I like my own program for a lowess smooth because it also computes the rate of increase or decrease at each moment of time. It gives this (the trend estimate shown as a red line):
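For readers who want to experiment, here’s a rough sketch of the idea (not my actual program; the tricube weights and nearest-neighbor bandwidth are just one common choice). A locally weighted linear fit around each point yields both a smoothed value and, as a bonus, the local slope, i.e. the rate of change:

```python
import numpy as np

def local_linear_smooth(x, y, frac=0.4):
    """Lowess-style smooth: a tricube-weighted linear fit around each point.

    Returns both the smoothed value and the local slope (the rate of
    change) at each x.  One pass, no robustness iterations.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    k = max(int(np.ceil(frac * n)), 3)
    smooth = np.empty(n)
    rate = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        h = np.sort(d)[k - 1]                            # bandwidth: k-th nearest neighbor
        w = (1.0 - np.clip(d / h, 0.0, 1.0) ** 3) ** 3   # tricube weights
        sw = np.sqrt(w)
        # Weighted least-squares line centered at x[i]:
        # the intercept is the smoothed value, the slope is the local rate.
        A = np.column_stack([np.ones(n), x - x[i]]) * sw[:, None]
        coef, *_ = np.linalg.lstsq(A, y * sw, rcond=None)
        smooth[i], rate[i] = coef
    return smooth, rate
```

On purely linear data this recovers the line and its slope exactly; on a series like the sea ice extent it traces a smooth curve, with `rate` tracking how steep that curve is at each moment.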
There are many other choices, and with a competent analyst I could have a fruitful, perhaps even heated (but civil) discussion about their merits. But not with Ron Clutz.
Here’s his idea of the trend (the blue line):
He has split the time span into three segments. The first runs from 1979 through 1994, but he doesn’t estimate its trend by least squares regression, or by any other sensible method; he simply draws the line from the first data point to the last. Essentially, his method completely ignores everything that happened in between. My opinion: that’s because Ron Clutz wanted to ignore that stuff; only by doing so can he get the early rate of change he wants. He’s free to choose a high point as his ending year, making the earlier trend rate less negative and the following trend rate more negative. That’s what he wants the data to say, not what the data are trying to say themselves. Most of the data values in that time span say otherwise, lying below Clutz’s trend line, but his method ignores them. I prefer what the data say.
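To see just how much “connect the endpoints” can differ from an actual regression, here’s a small illustration with made-up numbers (not the actual sea ice data): a steady decline with one conveniently high final year.

```python
import numpy as np

# Made-up yearly values: a steady decline of 0.10 per year, except the
# final year comes in high -- exactly the kind of endpoint a cherry-picker loves.
years = np.arange(1979, 1995)
extent = 12.0 - 0.10 * (years - 1979)
extent[-1] += 0.8

# The "connect the endpoints" trend: ignores everything in between.
endpoint_slope = (extent[-1] - extent[0]) / (years[-1] - years[0])

# A sensible trend: ordinary least squares over ALL the data.
ols_slope = np.polyfit(years, extent, 1)[0]

print(round(endpoint_slope, 3), round(ols_slope, 3))  # -0.047 -0.082
```

One lucky endpoint and the “trend” reports barely half the decline that the regression, which listens to every data point, actually finds.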
His second segment is from 1994 through 2007, and his claim of what the trend was doing during that time is … well, there’s no polite way to put this … stupid. Of the 12 values in between those years, 11 are above his trend line. To me, it’s just obvious that Clutz chose this line not because it reflects what the data are saying, but because by choosing an extra-low ending point he can make his “fast decline” episode longer than it should be. I prefer what the data say.
And of course, using 2007 makes the next segment start on an extra-low point, helping him impose what he wants rather than what the data are trying to say. His final segment, from 2007 through 2017, also utterly fails to reflect the actual trend during this time span, mainly by starting with the extreme low value in 2007. That’s not just cherry-picking, it’s cherry-picking supreme. Shame on you, Ron Clutz.
I’ll also repeat my opinion: part of the reason he used his original dreadful graph is that it makes it so hard to see the details of his purported “trend” that he can sucker people with his fake trend. When you can see clearly how Clutz’s trend compares to the data, you can see clearly how ridiculous his claim is.
Compare my trend estimate (a lowess smooth) to Ron Clutz’s (which depends on deliberately choosing time points and ignoring everything in between them):
Which do you think gives a more realistic portrayal of the data?
As I mentioned, my lowess program returns the estimated rate of change at each moment, and it also estimates the uncertainty in that rate. That enables me to track how the rate of Arctic sea ice loss has changed over time. If I compare my estimate (red line, with dashed red lines showing the 2-standard-deviation uncertainty range) with Ron Clutz’s rate estimates (blue line), I get this:
This shows us clearly just what Ron Clutz has done with his “trend.” From 1979 to 1994, he chose points that would give an artificially low (slower decline) trend estimate. From 1994 to 2007 his chosen (cherry-picked) points give an artificially high (rapid decline) estimated rate. From 2007 to the present his cherry-picked starting point, combined with the amateurish attempt to estimate the trend by just connecting the endpoints, enables him to get an artificially low (slower decline) trend estimate. That’s how he justifies his entire “analysis.”
I too can play the “pick two moments to get what you want” game. Here, for instance, is an alternative choice (I’ll call it the “other cherry” choice):
Just as Ron Clutz can choose times to make his third episode decrease most slowly of all, I can choose them to make the third episode decrease fastest of all. Just as Ron Clutz can choose times to make his second episode decrease fastest of all, I can choose them to make the second episode decrease most slowly of all. Ron Clutz’s “pick a few time points and connect them with lines” strategy, when done to get what you want instead of to get the truth, can produce just about any result you want. My opinion: that’s exactly what Ron Clutz did.
Heck, I can even take his time points — 1994 and 2007 — and estimate the trend, not by connecting endpoints, but by finding the best-fit continuous piecewise-linear trend by least squares, then use those to estimate the rates of change. That gives this:
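The best-fit continuous piecewise-linear trend is easy to set up: add a hinge term max(t − b, 0) to the regression for each breakpoint b, then fit by ordinary least squares. A sketch (my own illustrative code; the breakpoint years 1994 and 2007 are Clutz’s, but the example data below are made up, not the actual extent series):

```python
import numpy as np

def piecewise_linear_fit(t, y, breaks):
    """Continuous piecewise-linear least squares with fixed breakpoints.

    Basis: intercept, t, and a hinge max(t - b, 0) for each breakpoint b,
    so the fitted line is continuous and bends only at the breaks.
    The slope within each segment is the running sum of the slope terms.
    """
    t = np.asarray(t, dtype=float)
    cols = [np.ones_like(t), t] + [np.maximum(t - b, 0.0) for b in breaks]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    segment_slopes = np.cumsum(coef[1:])
    return A @ coef, segment_slopes
```

Unlike connecting cherry-picked endpoints, this uses every data point in every segment, and the fitted trend is forced to be continuous, so no segment gets to start from an artificially low launching pad.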
Clearly, even if we allow Ron Clutz’s misleading choices of change points for the trend, you still have to use an ignorant and misleading way to make a trend out of them to get what Ron Clutz wants.
Here’s my opinion:
We’ve dealt with Ron Clutz before, in another example of pretending to do “analysis” without really doing it. Clutz finds it easy to justify ludicrous opinions because he doesn’t know how to do the analysis. We get the same kind of thing from Cliff Mass — repeatedly. They, and others, push out garbage “analysis” regularly, and even when it’s proved wrong they refuse to admit any error. Frankly, it’s very difficult to have a productive discussion about analysis with someone who knows nothing about it but thinks he does. Honesty is off the table.
Anyone can make mistakes. For example, Roy Spencer made a doozy of a mistake some time ago, but when I called him out on it he did not insist that his bonehead mistake was correct. As far as I know, he hasn’t mentioned it since, and I suspect that’s because he knows — and admitted to himself — that it was wrong. But too many others, like Ron Clutz and Cliff Mass and Anthony Watts and Christopher Monckton etc. etc. etc., will put out ideas that are so idiotic they’re embarrassing but refuse to admit error even when it’s proved. You can’t reason with those people.
But hey, that’s just my opinion.
This blog is made possible by readers like you; join others by donating at My Wee Dragon.
In some kind of ‘fairness’ to Ron, his chart does show 0 to 14 million km², not just 10 to 12.5 million km² as in the four other graphs chosen for this article. Both are ways to ‘cherry pick’, tho his way belittles the problem while yours magnifies it. Still, each shows that 1/6th of the ice has been lost in less than 40 years, so belittling the problem is obviously a fool’s errand.
[Response: His graph axis scale wasn’t the cherry-pick, it was just a bad choice. His cherry-picking was selecting change points which, when combined with a silly way to draw trends, gave him the result he wanted.
I didn’t cherry pick at all.]
Lou, I think you completely missed the point that Tamino was making there. By choosing his axis, Clutz appeared to be trying to hide the actual data making his trend lines look reasonable (to deniers who want to believe everything’s fine). With Tamino’s graph it is abundantly clear that Clutz’s “trend” lines are ludicrous and, therefore, so is his “analysis”.
In addition, of course, the choice of graph can make it look like there is plenty of ice to go, so no worries; but the minimums and thickness of sea ice are also very important, and what is happening to those is much more difficult to hide. So going for average yearly extent (even though 2016 was the lowest on record) is another cherry pick.
Of course, the ‘money’ moment for him is this:
That was the apparent ‘holy Grail’–the moment when Arctic ice loss could be ‘attributed,’ however erroneously, to a ‘natural occurrence’.
However, real study of the Fram Strait region reaches quite different conclusions. The study below finds that as of 2012 (the end of their study period), the trend of ice loss through the Fram was still increasing.
Click to access tc-10-523-2016.pdf
(See their Figures 7 & 8.)
Additionally, it documents that the ice being exported via the Fram was growing younger (Figure 3) and thinner (Figures 4 & 5) during their study period.
Wonder what “natural occurrence” Ron would attribute that to?
Haha, he only cherry picked 1979 when the ice was at its highest maximum in years. The idea that data was not available prior to 1979 is the real BS! Honesty at its finest, but hey, it’ll get you more funding so who cares, right?
Show the graph with another 50 years tacked on, then talk smack about cherry picking. Ridiculous…
[Response: Hahaha! I’ve already done that:
More than once.
It’s not his choice of 1979 that’s a cherry-pick — it’s his choices of 1994 and 2007. He adds in, for bad measure, a ludicrous way to estimate trend rates.
We don’t play the “Gish gallop” game here — so before you try to change the subject, you can either provide some actual evidence that’s on topic, or you can admit you’re wrong. I doubt you’re honest enough to do that.]
[Response: Is that what you call evidence? An essay by a political hack? All you show is that your sources are as dumb as dirt. Correction: dumber than a bag of hammers.
Clearly, Ron Clutz’s essay is for suckers. And for you (but I repeat myself).]
Please continue with your ad hominem attacks. That totally invalidates any discussion of facts. Please, no really, continue to refute logic with personal attacks. I await your witty response with bated breath…
[Response: Just as I thought: you’ve got nothing. Nada. Zip. Squat. The only “reference” you can provide is a link to a piece by a widely known political hack and climate denier. Since you’ve got nothing scientifically, *you* resort to accusing *me* of ad hominem (that’s rich!) after *you* started your initial comment by insulting me, including insinuating that I was only telling lies to get money.
You’re not just a sucker, you’re a first-class hypocrite. Which makes you typical of climate deniers.]
Interesting footnote on James Taylor…
“Heartland plays an important role in climate communications, especially through our in-house experts (e.g., [Heartland’s James] Taylor) through his Forbes blog and related high profile outlets, our conferences, and through coordination with external networks (such as WUWT and other groups capable of rapidly mobilizing responses to new scientific findings, news stories, or unfavorable blog posts). Efforts at places such as Forbes are especially important now that they have begun to allow high-profile climate scientists (such as Gleick) to post warmist science essays that counter our own. This influential audience has usually been reliably anti-climate and it is important to keep opposing voices out.”
– Strategy document from Heartland Institute 2012.
Tamino did not refute logic with personal attacks because Shane presented exactly zero logic. And zero facts to discuss, for that matter.
Ok, so don’t address the elephants in the room then. Continue to ignore my legitimate questions based on facts.
1) How you compare estimated data in your linked articles with measured data and draw very questionable conclusions (which we both know is a no-no in any real science experiment or comparison) Oh, right it fits your obvious agenda.
2) Why you picked 1979 as your starting point. Again, a fact which you and your disciples refuse to acknowledge.
3) Why you feel that 40 years of data are enough to make a prediction on a planet that is so old we’re talking milliseconds in relative time?
If you want to logically respond to these items, we can talk, otherwise you and your followers just look foolish and are reality deniers/political pawns. So, keep saying I have no facts as you sit in your echo chamber with your sycophants and refuse to address logical, scientific questions. I’m done with your denial. In 20 years when we still have ice at both poles you’ll have some other new excuse and all the people you’ve called names will just be laughing at your mental gymnastics as we currently do.
After reading your linked articles and responses to other legitimate questions, it is clear to me that you have no interest in the truth, even using logic on others that could be used to criticise your own findings (ironically).
I’m done. (You won’t answer any of my three questions; it’s far easier just to attack me and call me names.)
[Response: I’ll make a deal with you.
Re-post your three questions, but without the insults; things like “Oh, right it fits your obvious agenda” and “a fact which you and your disciples refuse to acknowledge” and “it is clear to me that you have no interest in the truth.” Then I’ll answer them, without insults.
Your continued attempts to play the victim of insult, when you came out swinging with insults from the very start, only shows extreme hypocrisy.]
It’s not just Tamino who picks 1979 as the starting point; so does Clutz. So does Taylor. Taylor tells us why: “since the satellite instruments began measuring the ice caps in 1979”. Asking about that is a red herring.
In his initial comment Shane asserted that “the idea that data was not available prior to 1979 is the real BS!”
Shane seems to be blissfully ignorant of the *fact* that satellite coverage of the ice caps started in 1979, meaning starting an analysis in 1979 at the start of the data set is most definitely *not* a cherry pick.
And then when directed to two prior posts where Tamino had indeed incorporated non-satellite data sets from prior to 1979, Shane has the audacity to assert that “you compare estimated data in your linked articles with measured data and draw very questionable conclusions (which we both know is a no-no in any real science experiment or comparison).”
Priceless, simply priceless.
Disciples, followers, sycophants, and political pawns indeed. Project much?
Global sea ice was close to the long term average in 2015 as your reference says because the Antarctic was unusually high. The discussion in this post is about Arctic sea ice which has been in steady decline as Tamino’s graph shows.
Since 2015 the Antarctic has substantially declined and the last three years have been by far the lowest on record. The graphs here: https://sites.google.com/site/arctischepinguin/home/global-sea-ice show that global sea ice set a record low (by an enormous amount) in 2016 and 2017 was the second lowest ever recorded by a lot. 2018 is currently tied for the lowest ever recorded global sea ice.
Too bad for the deniers that the Global sea ice has declined so much in the past three years so you have to use outdated articles to support your incorrect claims.
I wonder if the last three years of low Antarctic sea ice has resulted in a significant change in the slope of the sea ice extent graph there.
….and away went the sock puppet?
Apparently Shane thought those were ‘tough questions’, which as much as anything else dramatizes the depth of his ignorance.
Just to summarize for lurkers:
1) Is just wrong; ‘real’ and ‘estimated’ data are often used together in many disciplines where it’s helpful.
2) Is a ‘baby question’ just about everyone but Shane already knows the answer to.
3) Is a silly misframing of the question. Two indicative responses:
a) The ’40 years’ includes the period during which the ‘anthropogenic signal’ has been evident, and documents that qualitative predictions made prior to the beginning of that period were correct.
b) Sea ice observations are but one piece of a very, very large puzzle. That Shane imputes ignorance of that puzzle to climate science shows, again, the depth of his own ignorance, since there is an enormous volume of research on all aspects of that puzzle extant. Further, Shane could, with very little trouble, have made himself aware of it.
Shane, your motivated cognition, and your dishonest rhetorical tactics, have been ably evaluated by Tamino and others. I’ll just call out your misuse of ‘ad hominem’:
It would be the argumentum ad hominem to dismiss your factual claims simply because you’re the one making them, rather than on their technical merits. That’s not what’s happening here. Your ‘facts’ have long been probatively shown to be false, and your logic incorrect, yet you adhere to both, while showing little grasp of the referred science underlying the explanations. It’s reasonable on a public forum to ask what cognitive motivators are distorting your thinking. Our speculations on those may be incorrect and even uncivil (I, for one, get pretty tired of whack-a-troll), but they’re not ad hominem.
And if it’s not clear, your link to James Taylor was dismissed without analysis because Taylor is a professional opinion columnist with an avowed Libertarian political agenda. His cognitive motivators are transparent, thus there’s no reason to take his factual claims at face value. That’s why scientific debates are not conducted in self-proclaimed “Capitalist Tool” mass media.
Drat! By ‘referred’ science, I meant ‘refereed’ (i.e. peer-reviewed) science.
Further to Michael’s point Taylor is a skilled propagandist. He’s also an extreme cherrypicker–something shane purports to dislike. He uses the goddard/heller method: Find an isolated factoid which is true enough, remove all context, and then hope the reader infers a global conclusion in line with his propagandist agenda.
By shane’s/Taylor’s context-free notions here “polar” ice extent on the Antarctic continent simply does not decrease AT ALL till the complete 2.2 km average–nearly 5 km maximum–land ice cap melts all the way to ZERO. Utterly ridiculous once context is added. But of course adding context is exactly what no propagandist wants.
So for my part I would ask shane the following: Should Antarctica lose, say, 1 km of ice but still remain ice covered to about the same extent do we have a problem with the effects of warming or not?
Nah. It’s just too easy.
Tamino, you are my hero and I wish I had taken statistics from you. Just like that bullchit Tony Heller figure, it was obvious to me these guys are out of their league even compared to someone who just knows how to download, analyze and plot data… and I’m not even a stats guy. I just can’t wait till they start cherry picking different parts of the Arctic in SSMI data! Maybe we will see record Bering Sea concentrations recycled again?
Ron Clutz is guilty of far more than misrepresenting the reduction of the annual average SIE. His grand thesis comprises evident bullshit from start to finish.
He proposes that the big loss of SIE resulted from multi-year ice being swept out of the Arctic through the years 1995-2007, leaving more 1st-year ice which, being easier to melt than older ice, results in a greater loss of ice through the melt season. Yet this is denialist gobshite. The Arctic Ocean pretty much freezes over through the winter, with the amount of SIE to melt constant through the period. Since 1984 NSIDC have been tracking how much of that near-constant level of SIE is what age. (See graphic here; usually 2 clicks to ‘download your attachment’.) So did the level of multi-year ice remain roughly constant except for taking a nose-dive through the years 1995-2007? No! Clutz is telling porkies!
The level of multi-year ice in the Arctic Ocean at the end of the freeze-up has been decreasing at a roughly linear rate from 1984 through to 2018, from roughly 5M sq km to 2½M sq km. While there is a bit of a drop due to the 2007 melt season, it does not feature significantly in the grand scheme of things. There is certainly no sign of multi-year ice remaining roughly constant with some decline over 1995-2007, as Clutz would have us believe. He made this fantasy period of decline up like the true fantasist he is.
Clutz also references Kwok & Rothrock (1999) but, excepting its role as eye-candy, the reference does not support Clutz’s fantasy. The data do not even properly cover the period of Klutz’s so-called analysis, covering only the years 1979-96; this should be no surprise.
Thus the account Klutz provides is nought but simple bullshit.
It’s like the Goofus and Gallant of trend-line estimates.
“was flushed out” is so wonderfully passive!
Yes. It recalls the classic argument for the existence of God: since everything mundane has another mundane cause, mundane causes form a logically abhorrent infinite regress, and there must hence be an uncaused ‘First Mover.’ For the likes of Clutz, or for that matter Akasofu, ‘natural occurrences’ don’t need a cause; we can stand before them, amazed yet content that ‘it ain’t our fault.’ Idols of the mind!
This seems a fitting ( :) ) cartoon: https://xkcd.com/2048/
Tamino, I wish you would have a go at Kip Hansen some time. He promulgates his wild conjectures about error theory and observations and statistics, unaware that Gauss solved these problems 200 years ago. Nick Stokes tries sometimes to get things back on track, but Hansen has successfully introduced so much confusion that Nick seems to have tired of the struggle.
BTW, we just apparently reached the annual minimum, at least WRT the JAXA sea ice extent metric. As I posted at Neven’s sea ice forum:
It’s been rather an interesting picture during the last couple of years: during winter, the size of the Arctic pack has frequently been at an all-time low, but summer extents have declined rather less than usual (this year, primarily thanks to a cool June, according to NSIDC). Since the annual minimum is the ‘headline’ number, where over longer time scales the decline is greatest, it’s suggested a picture of ‘ho-humness’ that’s a bit misleading. Essentially, the ‘dice’ have been falling in favor of conserving summer extents. But as always, luck will turn eventually, and then lots of folks will be very, very shocked.
It could even happen next season, which would potentially validate the “Maslowski window” of a nominally ice-free Arctic Ocean in 2016, plus or minus 3 years. Probably, that won’t happen–or at least, not sufficiently to get to sub-1 million km², which is the frequently used ‘ice-free’ criterion. But I think it is still very likely to happen before 2030, which is still a tad more aggressive than the mainstream modeled projections. (That’s an intuitive guess on my part, but it’s informed by the fact that there’s such high variability–you only need one year with weather conditions like 2012 to happen before the projected ‘best estimate’ year–and meanwhile the ‘loading of the dice’ continues to grow apace.)
“Probably, that won’t happen–or at least, not sufficiently to get to sub-1 million km², which is the frequently used ‘ice-free’ criterion.”
That may be the climatological criterion, but I think it’s valid for other disciplines or industries to establish other criteria, both higher and lower, to reflect what ice cover “means” to them. For shipping it might be whether an icebreaker ship is needed. For meteorology it might be whether certain weather patterns can form from heat rising from the ocean. Even different megafauna (walruses, seals, polar bears) have different functional reliance on ice cover. For Heartland Institute members, it might be when all of Greenland’s northern fjords are ice-free. :-(
Sure. For that matter (and as I understand it at least), the one million criterion isn’t even ‘official,’ just one that has attained a certain amount of currency.
Since “sea ice” cover often counts kayakable waters, I prefer Andy Lee Robinson’s wonderful depiction of sea ice *volume*:
Above, I stated that it is not unusual for all sorts of researchers to make use of both modeled and empirical ‘data’. Here’s an example (pretty much random) from epidemiology:
Click to access JEP20122900003_56577732.pdf
OT, but what the heck. Someone else at WTFUWT? (initials JS) has posted a figure of sea level rise for San Francisco. It covers the period from 1980-2013 inclusive (34 years or 408 months, that figure is mislabeled as “1980-2014” but does NOT include the end year 2014). That specific 34-year trend is NEGATIVE! In fact, it is the lowest 34-year trend line possible for the years 1897-2018 inclusive (1897-01 through 2018-08). A rather 100% clear cherry pick.
I’m using this NOAA dataset …
Note: I would have dropped this comment in the recent open thread, but that thread now appears to be closed. Sorry, for posting OT, but I know you post quite often on SLR.
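For anyone who wants to check a “lowest N-year trend possible” claim like that, it just means computing the trend of every possible 408-month window and taking the minimum. A sketch with synthetic data (not the NOAA record):

```python
import numpy as np

def rolling_trends(y, window):
    """OLS slope (per step) of every contiguous length-`window` span of y."""
    x = np.arange(window, dtype=float)
    return np.array([np.polyfit(x, y[i:i + window], 1)[0]
                     for i in range(len(y) - window + 1)])

# Synthetic monthly "sea level": steady rise plus a slow oscillation,
# so some windows run above the true trend and some below it.
t = np.arange(1200)                       # 100 years of months
series = 0.002 * t + 0.5 * np.sin(2 * np.pi * t / 720)

trends = rolling_trends(series, 408)      # every possible 34-year trend
print(trends.min(), trends.max())         # the cherries available for picking
```

With enough windows to scan, you can always find one well below (or above) the long-term trend; that’s exactly why picking the single most extreme window is a cherry pick.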
If a cricket has a rasp but no file in its stridulatory organ, does it make a sound?
Not necessarily, and not sufficiently. :-)