Both NASA and NOAA report 2014 as the hottest year on record. Despite the new #1, neither the news itself nor the response to it has surprised me.
The news that last year was so hot is certainly no surprise. The simple reason is that for the last 15 or 20 years, let’s say since the turn of the millennium just to be specific — you know, since back when we expected global warming to continue without slowing down — global warming has continued and shows no sign of slowing down.
As a matter of fact, according to the NASA data this year is dead on the projection we would have made back then by using the “global warming continues without slowing down” hypothesis. By the time 1999 came to a close, a warming trend was abundantly clear:
But besides just rising with the trend, temperature fluctuates up and down from year to year; hence the dashed red lines above and below the red trend line showing the likely range. Assuming global warming continues without slowing down, we would have expected this:
This is what actually happened:
Just what was expected, that’s what actually happened.
That kinda lets the air out of any idea that global warming has slowed down — the data simply do not support it. But in a clever misinterpretation, one ultimately rooted in cherry-picking, there are those who actually claim that the data do demonstrate global warming has slowed down.
Unbelievable as it may seem, they actually do. While the data have followed the global-warming-continues-without-slowing-down pattern just about as closely as one could have imagined — this year right down the middle — they still insist that global warming has slowed down. They like to call it a “pause.”
The reaction of the “pausemaniacs” to the record hottest year has mostly been protest. Breakin’ some temperature record just don’t mean a gosh-darn thing worth payin’ no attention to. It only broke the record by a little bit. And besides, it ain’t the individual years, record hot or not, that count, it’s the pause that counts — a record hottest year don’t end the pause!
Methinks they do protest too much. Perhaps they fear that a record year really does threaten their beloved “pause.” But that’s not the real threat at all, it’s the fact that the data have followed the global-warming-continues-without-slowing-down pattern just about as closely as one could have expected, because all the while they’ve been bellowing about the pause that never was.
But the record year does do this: it makes it harder to sell the whole “pause” idea. And for many, it really is the “pause” that matters. It’s the propaganda keystone that enables Ted Cruz, new chairman of the Senate subcommittee on Space, Science, and Competitiveness, to make statements like this:
The last 15 years, there has been no recorded warming. Contrary to all the theories that — that they are expounding, there should have been warming over the last 15 years. It hasn’t happened.
I suspect that Cruz’s claims are what he has learned to say, again and again, from the experts at crafting persuasive “spin” for any issue, the spin that doesn’t just captivate the politician’s target audience, it captures the politician too. These days that spin relies heavily on the so-called “pause.” I can’t say I’m surprised by Cruz’s comments, or even claim to be surprised that some who claim to be scientists are actually prepared to defend such a statement.
Based on the vehemence of their reaction, the record year is something of an embarrassment to the “pause” crowd. Heck, the next time Ted Cruz says such a thing at a press conference some sassy reporter might ask him, “But what about last year being the hottest on record?” Perhaps Cruz will respond by saying he’s not a scientist.
They just don’t seem to realize that the real embarrassment to their precious “pause” is the trend.
Reblogged this on Hypergeometric and commented:
Just more reason why the results of Fyfe, Gillett, and Zwiers at http://dx.doi.org/10.1038/nclimate1972 which I wrote about at https://johncarlosbaez.wordpress.com/2014/05/29/warming-slowdown-2/ and https://johncarlosbaez.wordpress.com/2014/06/05/warming-slowdown-part-2/ look stranger and stranger. I increasingly think that the caution mentioned by Wilks, Toth, Zhu, Marchok, Palmer and others was ignored in FGZ’s work.
They are claiming: “It is more likely than not that the warmest year was a different one than 2014.”
[Response: It’s the trend, stupid.
And by the way, it’s still the hottest on record.]
Yes. It’s the trend.
We DO have to give Cruz “credit” for “honesty”, though. He (or his staff) got the new meme “correct”, stating no warming in the past 15 years. Or close, depending on what he means by 15 years. That is because, in the GISS data (aggregated annually, Jan-Dec), the results now are as follows (R output):
Year   Estimate   Std. Error   t value   Pr(>|t|)
1995   1.1602     0.2896       4.006     0.000829 ***
1996   1.1421     0.3217       3.551     0.00246  **
1997   0.8989     0.3235       2.778     0.0134   *
1998   0.826      0.361        2.288     0.0371   *
1999   1.1015     0.3675       2.997     0.00961  **
2000   0.8429     0.3840       2.195     0.0469   *
2001   0.4615     0.3633       1.27      0.228
2002   0.3187     0.4132       0.771     0.457
2003   0.4406     0.4818       0.914     0.382
So the new meme is “no warming since 2001!!!”, not 1998 (followed by “Benghazi!!!!11!!!1” no doubt). Sadly, this is PRECISELY what you predicted in a previous entry.
Although the probability of 2014 being the warmest was, by a long margin, the highest of any “candidate year”? How stupid an argument is that? And by “stupid”, I mean “damaging one’s own argument by blatantly signaling a lack of good faith.” And good sense, for that matter.
But it doesn’t look stupid to people who are uncomfortable with uncertainty. The purpose of the argument is to tap into the discomfort of those people. They will be more uncomfortable accepting that 2014 was the warmest year because it’s only 48% certain. That’s not high enough to shut down the uncertainty region of the brain for them. Do you ever rent a movie from iTunes that is rated only 48%?
Martin, Morano has provided the very chart that contradicts his claim. But you know with that guy, thick head, thicker neck.
The probabilities he trumpets actually say that 2014 was the least unlikely to be the warmest of the listed years, and by a big margin. A little convoluted, but really not that challenging, is it?
Can I steal your last graph for my blogpost on the pause?
Welcome back and Happy New year! I’ve been eagerly awaiting your comments on the heat!
Actually, the dominant “skeptic” response hasn’t been “it only broke the record by a little bit.” Instead, it’s focused on the “admission” (any routine disclosure of information that they think is favorable is an “admission”) that the probability of a record warm 2014 is “only” 38% (NASA) or 48% (NOAA).
They don’t seem to understand that these percentages are considerably higher than the likelihood that any other year was the record. This, to me, is really why we get to say that it was the record: Because we can’t realistically claim that any other year was. We can’t say “1998 was warmer!” or “2010 was warmer!”
Of course the Goddards et al. understand this. Just like the Tobacco Institute stats people understood their own manipulations in producing FUD in the service of supporting their employers.
People that make this mistake have serious issues contemplating the difference between something “deterministic” and something “probabilistic.”
“They just don’t seem to realize that the real embarrassment to their precious “pause” is the trend.” – They know full well, and they are not embarrassed at all, because facts are not their merchandise; doubt is, even if only for the three days until the debunking.
White-collar criminality is the institutionalized climate revisionist industry, and that is how it should be treated. I repeat: those revisionists need to be treated as criminals. For them, serious rebuttal of their crap is their actual success, time and time again, because it implies – to the public – simply one thing: that the debate is far from over, that the science is not settled, et cetera. It is US who should, quite authoritatively, quit the debate with the revisionists. It is US who should say ‘f#ck off’ and nothing more to all those who present the same old denialist talking points.
Back then, Michael Mann should’ve replied ‘FO’, and nothing more, to Joe Barton’s inquisitional mailing to him, and let Barton just bring it on. Barton’s utter success back then was that he was responded to in a respectful, serious manner – causing actual cost to climate research, too.
We are ten years on and still WE are helping them to sow and raise doubt. WE have taken Inhofe up to his today position, Ted Cruz to lead that Senate committee. It is US who gave them the chance to build oligarchy. Because WE took them seriously and still are.
We are going to need a lot of Sandys at 900 hPa or lower to confront the situation. Better hope for them; they’d be the only chance to save some earth for our children.
On the “It is more likely than not that the warmest year was a different one than 2014” argument… assume we have a measurement uncertainty of +- 0.1 degrees. Assume that each year is, on average, warmer than the previous year by 0.05 degrees, with some noise (say, +- 0.05 degrees). In 100 years, you’ll be 5 degrees warmer – but you’ll never have had an indisputably warm year! Got to love contrarian logic…
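That reductio can be sketched in a few lines of Python. The 0.05-degree annual warming, the +-0.05 noise, and the +-0.1 measurement uncertainty are the numbers assumed above; the seed and loop structure are just illustrative:

```python
import random

random.seed(0)
MEAS_ERR = 0.1          # assumed measurement uncertainty (degrees)
temp = record = 0.0
undisputed = 0          # records beating the old mark by MORE than the error bar

for year in range(100):
    # each year: 0.05 degrees of warming plus +-0.05 degrees of noise
    temp += 0.05 + random.uniform(-0.05, 0.05)
    if temp > record + MEAS_ERR:
        undisputed += 1
    record = max(record, temp)

print(f"warming after 100 years: {temp:.1f}, 'indisputable' records: {undisputed}")
```

Since no single year can exceed the running record by more than the 0.1-degree error bar, the count of “indisputable” records stays at zero even as the total warming approaches 5 degrees.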
“With 2014 essentially tied with 2005 and 2010 for hottest year, this implies that there has been essentially no trend in warming over the past decade.”
Well, I can now see why Curry is so certain about her uncertainty (in the connect-the-three-dots sort of way), and why, via projection, other people should also be as certain about their uncertainty.
Br’er Rabett’s response was lovely:
There’s an old saying: The more you pay, the more it’s worth. And the denialists have paid dearly for their obstinate refusal to recognize reality. They’ve paid in credibility. They’ve paid monetarily. Most important, they’ve paid in opportunity, since our chances of avoiding severe consequences are now virtually nil. They will never back down precisely because their sunk costs are so high. They’ll switch arguments (one of the most prominent I’ve seen of late is to use UAH or RSS to downplay warming). They’ll attack the messenger (watch how many times the names Jim Hansen or Michael Mann or even Al Gore come up in their posts). But they cannot be persuaded. They’ve achieved epistemic closure.
“Achieved,” Kemo Sabe? Methinks that’s where they started from.
Looks like deniers’ spin is based on error margins (Daily Mail, who else). We have to give them credit for not giving up.
I’ve spotted that article too (and an even more bizarre article in the Daily Telegraph, which is more of the it-is-all-just-one-great-conspiracy species). However, what interested me was to check how a 38% likelihood of a year being the warmest among 135 years compares to the probability that the measured warmest year is indeed the warmest year. I tried the following simulation: I generated series of normally distributed data with sd = 0.07 (an estimate for the variability of temperature) and selected the highest value as the record. Then I added a series of 135 errors with sd = 0.05 (the mean was 0 in both cases). I ran plenty of simulations and counted the cases which were “true positives” of a record value.
My results are:
Years to compare (including the hottest), frequency of true positives
2 , 0.802994
10 , 0.531606
20 , 0.459482
50 , 0.384784
100 , 0.339534
135 , 0.321936
200 , 0.2999
I am not sure, but my conclusion would be that 38% is actually not such a small likelihood, because as the number of years grows, the probability that none of the other years gets a favorable error goes toward 0.
I am somewhat unsure about this, so I will appreciate any (well-intended) comments. I hope I haven’t got the whole thing wrong.
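For what it’s worth, the simulation as described reproduces in a few lines of Python (the sd values are those stated above; the trial count and seed are arbitrary):

```python
import random

def true_positive_rate(n_years, n_trials=20000, sd_true=0.07, sd_err=0.05):
    """Fraction of trials in which the year with the highest *measured*
    value is also the year with the highest *true* value."""
    hits = 0
    for _ in range(n_trials):
        true = [random.gauss(0.0, sd_true) for _ in range(n_years)]
        measured = [t + random.gauss(0.0, sd_err) for t in true]
        if measured.index(max(measured)) == true.index(max(true)):
            hits += 1
    return hits / n_trials

random.seed(1)
rates = {n: true_positive_rate(n) for n in (2, 10, 135)}
for n, r in rates.items():
    print(n, round(r, 3))
```

The rate falls as the number of candidate years grows, leveling off near a third for 135 years, consistent with the table above and with the point that 38% is not an unusually small likelihood for a genuine record.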
One of the responsorial memes I’ve seen is “Sure, a record by 0.04C, when measurements are only accurate to 0.1C!”
In response to an RC poster who’d come across the same thing, I pointed out that that’s a conflation of measurement accuracy with precision of the overall result (and quite incorrect.)
But an elegant demonstration of the precision/accuracy/validity issues involved in this topic, in the manner of past ‘specialties of the house’ at Open Mind, would be just ducky–if the inspiration should strike and the time for execution of same happen to be available, of course.
I animated this for you. Hope you don’t mind.
I reposted the animated GIF on the Arctic Sea Ice Forum. Hope you both don’t mind.
Not much grip to be had on my angry post earlier today. Elaboration.
Today, I stooped down to blaming the victim. The victim is esteemed climate scientist Michael Mann, coiner of ‘The Hockeystick’ through a slip of the tongue during some telephone conversation, but actually the guy who made clear beyond doubt that the planet is warming fast and why it does.
I did not blame him for his groundbreaking work or his impeccable personal and scientific integrity. I blamed him for helping climate revisionism as of, say, 2005. I blamed him along with a group of other, equally impeccable climate scientists and a number of more or less professional bloggers in the field. Including, though implicitly, ‘Sou’* of Hotwhopper, who does the gargantuan daily task of deconstructing climate misinformation as disseminated by Anthony ‘Bully’ Watts on his Wattsupwiththat site. Including ‘Tamino’* of Open Mind, who happens to be on the receiving end of my vile posting.
Tamino did an excellent post (as usual) on the record warm year 2014 emphasizing how this record simply fits the trend of warming. But his closing sentence caught my ressentiment about the climate ‘debate’. It reads: “They [climate revisionists, cRR] just don’t seem to realize that the real embarrassment to their precious “pause” is the trend.”
The problem is that they, the climate revisionists**, are NOT embarrassed. They know perfectly well that climate change is damned real. Their sole, ideologically motivated mission is to sow doubt among the public about climate change. They do this by carpet-bombing the media and the internet with rubbish articles and blogs. Their nonsense is easily dispelled by the scientists but usually not by the general public.
Some climate scientists have taken it upon themselves to meet this stream of disinformation by systematically debunking it on blogs like RealClimate, SkepticalScience, Klimaatverandering (Bart Verheggen), Hotwhopper, Open Mind et cetera.
These sites really do a great job informing public – from total laymen up to climate scientists themselves – about climate change, the certainties and the uncertainties. So far, so good. But the mistake they make time and again is to somehow portray the contrarians as ‘naive’ or ‘stupid’ or ‘unwitting’. This way these good sites suggest an innocence of climate change denial that is not there. Moreover and hugely aggravated by this, the cycle of shitty article – bona fide debunk can leave only one impression with the public: that the debate is not over, that climate change and its cause is still ‘just a hypothesis’, uncertain, under discussion et cetera.
I’ve seen and joined the ‘climate wars’ (part of Mann’s 2013 book title) for about ten years now including numerous examples of the cycle. Whenever some revisionist climate myth was dispelled and fubarred, some other shit would instantly enter the scene, next cycle, and one or two years later SAME myth would be spooned up again in some bogus ‘scientific’ article. These thugs will continue to do so from their coal-fueled airconditioned homes as the world gasps for breath during ever more devastating summers.
Debunks would always, perhaps from misguided politeness, be accompanied by those epithets rendering climate revisionism as ‘stupid’ or ‘naive’, thus systematically retaining a veneer of innocence around it.
This has to stop.
Call the thugs, systematically, what they are. Thugs. Paid liars. As of about 2007 this is absolutely slander no more, because the thugs, their methods and their pay streams have been exposed thoroughly – by the excellent bloggers who I am chastising now; and cRR used to be one of them (until about 2009 and forget ‘excellent’ in this case).
Break the cycle with the true authority of knowledge.
Two of the worst thugs made good promotion during past weeks.
Ted Cruz was appointed chairman of the Senate subcommittee on Space, Science, and Competitiveness.
James Inhofe was appointed chairman of the Senate Environment and Public Works Committee.
Voters’ doing. The USA will need a lot of Sandys and California firmly in the IDZ to get the message, but by then it will be too late, as ever more of the world’s societies fall into tatters because of the heat.
Michael Mann got an inquisitional questionnaire from then chairman of the House Energy and Commerce Committee Joe Barton in 2005 and was barraged by him and fellow oil- and coal lobby liars in the years since. Mann’s responses were elaborate, polite and clear. I now contend the only correct response would have been ‘Fuck off!’. And let him bring it on, thru the court if he must: he wouldn’t have stood a chance.
It is said Joe Barton got politically damaged for his unadulterated terror against science in general and climate change in particular, but a look at the fact he is now in his 31st year of unbroken Congressional membership should teach otherwise. Even after his ‘scientific demise’ in 2005 he won the seat with a 23 point margin during elections of 2006.
(as per ‘Roderiko’ who abuses his FB page for a blog).
Tamino, it would be interesting to know how hot 2014 would have needed to be in order for there to be 100% certainty that it was the new record. And did the extreme outlier year of 1998 even reach 100%?
How about 95%.
Not that it matters, but there is a very real possibility that the 12 months March 2014 through February 2015 will smash the 2014 record.
You have to make some assumptions: construct a distribution with a trend of 1x and an error of 10 around each point (which is on the order of what you find in the various temperature series). Start at year 1.
Define a 2-sigma local max X=30 at year 10. In year 11, the expected value will be 30 +/- 20, exceeding X 54% of the time. In year 12 it will happen 58% of the time. And so on. You easily get a significant trend without EVER seeing single local maxes/nearby points that are significantly different from ALL nearby local maxes/points. This R code illustrates that general point:
Points <- rnorm(50, 0, 10)   # noise with sd = 10
Year <- 1:50                 # underlying trend of 1 per year
Trend <- Year + Points       # trend plus noise
fit <- lm(Trend ~ Year)      # the fitted slope comes out significant
summary(fit)                 # even though no single point stands out
Looks like we both had the same idea; I made a very similar figure over the weekend for Yale Climate Connections:
So no pause against a linear trend. But what if the underlying trend is not linear but rather accelerating? Might the recent slowdown then count as ‘something’? A pause would probably not be quite the right word, but rather a failure to accelerate – certainly not a catchy phrase as far as propaganda goes. Some studies have found real physical mechanisms, such as volcanic aerosols and trade-wind changes, that could explain a slower-than-expected warming rate. Presumably at some stage in the future these factors will reverse and warming will speed up again. If they are just explaining the fact that 2007 to 2014 are generally lower than the trend by an amount that is nothing more than noise, then the speed-up will likely also be by an amount that is nothing more than noise. But if the slowdown is measured as something significant against a steeper, accelerating trend, then a possible speed-up in the future becomes more alarming.
To me if the pause is real that is alarming, and if it is not then that is comforting in that we are facing a continuation of the current trend and not a speed up.
Reblogged this on mt's Science Blog.
Leaving the spin aside, to me the most important information is n o t whether or not last year was the hottest, but that all years fit neatly into the distribution zone between the dotted lines, and that the pause is actually a perceptual effect coming from the 1998 outlier. Cover it with your thumb and the perception of the pause will be much less compelling!
I too fell into a similar trap when looking at the arctic sea ice volume data from PIOMAS. For a while, those looked like hitting zero in 6 years or so, but that too was over-emphasizing too short a timescale. Now they are pretty much on the linear trend again.
Perhaps worth mentioning: we talk about all-time highs but when was the last all-time low (for the NASA series starting in 1880) set?
I’ll tell you: 1909, over a century ago.
Since then, 18 new highs have been set.
What are the odds of that happening in the absence of a trend?
In the absence of a trend, the next all-time high OR low will be expected to be a high half the time and a low half the time. Therefore 18 highs in a row, in the absence of a trend, is 1 in 2^18, or 1 in 262144.
kap55, that’s true only in the absence of autocorrelation.
Another interesting factoid: we have had a top-10 warmest year every year for the past 29 years in a row, back to 1986. The last time we had a top-10 coldest year was 1917.
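The 1-in-2^18 figure follows from each new extreme being equally likely to be a high or a low when there is no trend. A quick Monte Carlo check of that premise (pure white noise, i.e. no autocorrelation, which is an idealization; series length and trial count are arbitrary):

```python
import random

random.seed(2)
highs = lows = 0
for _ in range(5000):                 # 5000 trend-free 135-year series
    x0 = random.gauss(0.0, 1.0)
    hi = lo = x0                      # first year sets both running extremes
    for _ in range(134):
        x = random.gauss(0.0, 1.0)
        if x > hi:
            highs, hi = highs + 1, x  # new all-time high
        elif x < lo:
            lows, lo = lows + 1, x    # new all-time low

frac_high = highs / (highs + lows)
print(f"share of new extremes that are highs: {frac_high:.3f}")
print(f"so 18 highs in a row: about 1 in {2 ** 18}")
```

With no trend the share sits at one half, so 18 consecutive record highs has probability 1/2^18, roughly 1 in 262,144 (ignoring autocorrelation, which would loosen those odds somewhat).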
withdrawals are over, Tamino is back
Reblogged this on A few words from Nintinup … and commented:
Another reason why I am upset at fools who can’t accept a trend …
I have also taken the liberty of posting Rob’s animated GIF on Twitter, amongst other places:
Outstanding post. Scary that Cruz will have so much authority but only parrot words (and not actual science and understanding) to make decisions. But what will the pseudoscience anything-but-CO2 crowd drone on about in a few years when the linear trend is broken and curves to the upside?
It probably won’t be much longer until the warming since the beginning of 1999 is statistically significant on its own. If we estimate a GIStemp anomaly of 0.70 deg C for 2015 (0.02 deg C above that for 2014), then the trend for 1999-2016 will be 0.119 deg C/decade, which I expect will be more than 2 standard deviations above zero. Hopefully this will make the denialists squirm.
An uptick in the decadal average trend will initially be pinned on the IPO going positive, as it will in the next decade or so. It will take the following negative IPO to see that the linear trend is in fact curving upward. Of course another Pinatubo or two going off would add further distraction, but by the 2030s it should be solidly established by actual data that a 3C ECS for CO2 at 560 ppm remains a very solid estimate, if not a bit low.
According to the SkS trend calculator, the HadCRUT4/UAH hybrid trend (that’s Cowtan & Way) is already statistically significant at 0.144C/decade for 1999-present.
And that includes autocorrelation.
In the annual tests I gave near the top of the comments section here, using GISS data aggregated annually Jan-Dec (column 14 in the GISS data file), there is no evidence of autocorrelation, as measured by the standard R Durbin-Watson test (from package car), in any of them.
1997, 1998, 1999, and 2000 all show significant trends though they do flatten a bit at the shorter time intervals.
It is the accelerated curve upward that will likely become apparent in the next few decades:
You (Tamino) are the only one I can think of (and that includes many scientists) who has consistently taken the scientific approach to the so-called “pause”: demanding statistical evidence at the standard level.
The “pause” has never been properly defined, not to any scientific standard, at least. That leaves it open for different interpretations by different people, which makes it impossible to refute.
Many scientists allowed themselves to be suckered into this and have spent the last few years trying to explain their way out of it — very unsuccessfully, I would say.
Dutch KNMI reported 2014 was substantially the warmest year since measurements began over 300 years ago, not anything tying with 2005 or 2010! As with the US being just 2% of the globe’s surface, denialists will now argue the Netherlands is only a fraction of the world, it suiting their position this time around.
Indeed, Holland obliterated the old record (which dated all the way back to 2006).
Tiny country. But we can lump some countries which did the same thing together, e.g. Norway, Denmark, Poland and some others (I am truly losing track..). Also, the continent of Europe as a whole: http://cib.knmi.nl/mediawiki/index.php/2014_warmest_year_on_record_in_Europe
I was waiting for this post from Tamino, and in anticipation of it I made a pair of similar charts. As in Tamino’s post from last year, I restricted the data to the satellite period rather than the 30-year period commencing in 1970. However, I calculated the trend to the end of 1998, in order to bias it upward the way a denier would typically do it.
I also tried it with the UAH values – aren’t they the darlings of the denial crowd? – with curious results.
(In case the links don’t work as expected, you may need to use the magnifying glass tool to make these visible on the photobucket page.)
“It’s apparent that we’re already warming so rapidly that more than one year in three should be a new record, on average.” … The frequency of records.
They’d best get used to it…
Would a decent analogy to help people understand the relative probabilities for each year be to US primaries? The candidate with the greatest percentage of the “vote” wins, and 2014 got the greatest percentage.
The warmest year is assumed to be one of the years from 1880 to present, so the probability that it is one of those years is 100%. The 5 warmest years are 2014 (48%), 2010 (18%), 2005 (13%), 2013 (6%), and 1998 (5%). So the probability that the warmest year is one of those 5 is 90%. If 2014 is really not the warmest year, then the probability that it is one of the other 4 is 42%.
There is only a 10% probability that the warmest year is not one of the 5 listed years, and that 10% is distributed over all the years from 1880 to present, although I guess the probability for most of those years is 0. So when the doubtalists say “Ha! It is more probable that 2014 is not the warmest year ever” (52 vs 48), all that really means is there is a chance that one of the other 4 years listed in the table could have been warmer than 2014. They make it sound like 2014 might not have been a warmest year at all, but it is 90% certain that it is one of the 5 warmest years and 99.2% certain that it is one of the 10 warmest years:
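The arithmetic in that paragraph can be checked in a few lines, using the percentages quoted above for the five warmest candidates:

```python
# probabilities (percent) quoted above for the five warmest candidate years
p = {2014: 48, 2010: 18, 2005: 13, 2013: 6, 1998: 5}

top5 = sum(p.values())        # chance the record is one of these five years
other_four = top5 - p[2014]   # chance a listed year other than 2014 holds it
print(top5, other_four)       # prints: 90 42
```

So “more probable than not that it isn’t 2014” really just means “42% chance it’s one of the other four listed years”, which is still well below 2014’s own 48%.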
Now being covered at RC: http://www.realclimate.org/index.php/archives/2015/01/thoughts-on-2014-and-ongoing-temperature-trends/comment-page-1/#comment-623770
Tamino, perfect post, as usual.
Looking at the data and plot from the previous post, I just wonder if we should expect the next really big record within the next few years?
In the last seven years (with two records, in 2010 and 2014) we have had only one year above the trend and six years below it, with four years sitting roughly one standard deviation below trend. While this can happen due to fluctuations, nevertheless if it happens a few more times it would be evidence of a change in the trend.
The question is: how many more years without a relatively large deviation (say one sigma) above the trend can we have before the trend itself should be modified?
Followup to previous question and a little extension.
The linear trend is the best explanation of the data now. Denialists love to perform mathturbations with cycles that would explain everything, and it’s bullshit. Basic physics tells us that we should expect a linear trend, and we observe a linear trend. Nevertheless, it is still possible that some multidecadal oscillation is real. The plausible physical mechanism is redistribution of heat between the northern (land and ocean) and southern (mostly ocean) hemispheres. Assuming that this is real, that the full cycle is about 60 years, and that the minimum was around 1970, we probably could already estimate bounds on the size of this contribution. It certainly cannot be large – with these assumptions we are in the phase of fastest decrease of temperature due to the cycle, and yet we have just achieved the record high temperature. Nevertheless, if the cycle is real, then in the next fifteen years we may expect slow growth or even a “pause”.
Playing with Excel graphs – certainly not the best tool around – I have checked that a model assuming a linear trend of 0.015 K/year starting at 1970 looks reasonably good when a 60-year cycle with amplitude 0.1 (a 0.2 difference between max and min) is added.
With the amplitude increased to 0.15, the trend has to be decreased to about 0.012 K/year and the fit to the data is still not bad, but this is as far as you can go.
When I put the amplitude of the cycle below 0.05 K, the change in the future trend was minimal, but with amplitude 0.1 the differences for the next few years are significant, and with amplitude 0.15 they are really large. With the last value we could expect more or less stable temperatures for the next 15 years or so.
So – to sum up – the linear trend is currently the best explanation of the data, but in the next two or three years we will see whether it should be augmented – for example, with a 60-year cycle.
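A rough Python version of the exercise described above. The 0.015 K/year trend, the 60-year period, the 0.1 K amplitude, and the 1970 minimum are the assumptions stated in the comment; nothing here is fitted to any dataset:

```python
import math

def toy_anomaly(year, trend=0.015, amp=0.1, period=60.0, t0=1970):
    """Linear warming plus a hypothetical 60-year cycle whose minimum
    falls at t0. Purely illustrative; not fitted to data."""
    t = year - t0
    return trend * t + amp * -math.cos(2.0 * math.pi * t / period)

# The cycle peaks around 2000 and bottoms out again around 2030, so it
# subtracts from the apparent warming over the following 15 years:
for year in (1970, 2000, 2015, 2030):
    print(year, round(toy_anomaly(year), 3))
```

With an amplitude of 0.1 K, the 2015-2030 rise comes out at roughly half the pure-trend value, which is the “slow growth” scenario sketched above; with the amplitude below 0.05 K the effect on the apparent trend is minor.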
“Basic physics tells us that we should expect linear trend and we observe linear trend.”
This is most decidedly not true. The evolution of many natural processes, including global temperatures, glacial ice-mass loss, sea-level rise, etc., is usually better “fit” by some nonlinear polynomial curve, if you’re into that sort of thing. A linear fit is really about the worst and most inaccurate over the long run, and probably does the most to hide important underlying dynamics. Monckton’s recent “simple” model, which is totally linear (and totally wrong), is a perfect example of the absurdity of fitting linear trends to dynamic and evolving climate processes.
Relating back to global temperatures, we can fit a linear trend to increasing temperatures of the 20th and early 21st centuries because we are in the earliest stages of a process of change which is very likely set to accelerate in the coming decades (i.e. the curve upward), and thus, any linear trend lines extended now will miss this curve and be well off the mark by 2100.
pohjois says “Denialists love to perform mathturbations with cycles”, then proceeds to perform mathturbation with a cycle…
OT–but relevant to past posts on SLR. Stumbled across this paper this morning, and was reminded to Our Host’s methods. Thought it would be of interest to many:
Reblogged this on In Search of Truth.
Convincing graph, Tamino. I agree all the discussion about the pause is leaving the proponents as hostages to fortune.
One point: you discuss the trend without saying what it is. My own figures from GISS suggest the 1975-2014 (inclusive) trend has been around +0.16C a decade. IIRC, IPCC AR5 is projecting around +0.26C a decade (the middle of +0.11C to +0.41C). Is that a fair summary?
That is a fairly substantial difference. A 1.6C warming over 1970-2070 at the current trend would probably be net beneficial overall, and indicative of lower-than-expected equilibrium sensitivity, whereas the 2.6C IPCC projection is verging on dangerous.
Whether it is one or the other has major implications for environmental policy.
Is the IPCC wrong or do you think we are likely to see an acceleration in warming between now and 2070?
Pohjois, there are so many different claims about something that might cause anything in the general range of a “60 year cycle” that no short term observation is going to establish anything conclusive. Have you looked at the claims being made along those lines?
In a recent issue of this journal, Loehle (2014) presents a “minimal model” for estimating climate sensitivity, identical to that previously published by Loehle and Scafetta (2011). The novelty in the more recent paper lies in the straightforward calculation of an estimate of transient climate response based on the model and an estimate of equilibrium climate sensitivity derived therefrom, via a flawed methodology. We demonstrate that the Loehle and Scafetta model systematically underestimates the transient climate response, due to a number of unsupportable assumptions regarding the climate system. Once the flaws in Loehle and Scafetta’s model are addressed, the estimates of transient climate response and equilibrium climate sensitivity derived from the model are entirely consistent with those obtained from general circulation models, and indeed exclude the possibility of low climate sensitivity, directly contradicting the principal conclusion drawn by Loehle. Further, we present an even more parsimonious model for estimating climate sensitivity. Our model is based on observed changes in radiative forcings, and is therefore constrained by physics, unlike the Loehle model, which is little more than a curve-fitting exercise.
“… existence of a 60 year cycle in the calibration period is implausible. This result demonstrates that the standard LS11 model is inappropriate for use in attribution as a key modelling assumption is clearly invalid.”
very far afield from the topic, a bit from one of my favorite lord-I-wish-I-understood-this-stuff blogs:
“My worry is that this non-linear relationship is sufficient to provoke the arch effect, and that the large amount of inertia explained by sea ice is just the inertia of the arch effect rather than a real ecological effect.
It’s not easy possible to test this hypothesis with real data, but it is easy to test it with simulated data. …”
In a curious moment I tried out Tamino’s idea using the Cowtan & Way HadCRUT4/UAH hybrid temperature data, arguably the best set, which was just updated today. Although 2014 is not a “record” year for the CW index, it nevertheless stands well above the projection of a 1979 (start of the UAH record) to 1999 trend line.