Many different factors affect global temperature. Fake “skeptics” like to claim that mainstream climate scientists ignore everything but greenhouse gases like CO2, when in fact it’s mainstream climate scientists who identified those other influences. Natural factors cause temperature fluctuations which make the man-made global warming signal less clear, fluctuations which are often exploited by fake skeptics to suggest that global warming has paused, or slowed down, or isn’t happening at all. A new paper by Foster & Rahmstorf accounts for some of those other factors, and by removing their influence from the temperature record makes the progress of global warming much more clear.
The paper studies the five most often-used global temperature records. Three of them are surface temperature estimates, from NASA GISS, HadCRU, and NCDC; the other two are satellite-based lower-atmosphere estimates, from RSS and UAH. These are compared to three factors which are known to affect climate: the el Nino southern oscillation, atmospheric aerosols (mostly from volcanic eruptions), and variations in the output of the sun. The time span studied was from January 1979 through December 2010, for which all five data sets have complete coverage.
The impact of el Nino is characterized by the Multivariate el Nino index (MEI), that of volcanic aerosols by Aerosol Optical Depth (AOD), and solar output by Total Solar Irradiance (TSI).
Their influence was estimated by multiple regression. In addition to these natural influences the regression also included a linear trend in time, allowing a simultaneous estimate of the rate of global warming as well as the impact of these other factors. Since the natural influences can have a delayed effect on temperature, the regression allowed for a lag between the value of any of the three factors and its impact. Once the effect of the three known factors was estimated, it could be removed from the temperature data to create adjusted temperature data which is mostly (but not completely!) free of their influence.
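The regression itself is nothing exotic. Here is a minimal Python sketch of the approach, using synthetic stand-ins for the MEI, AOD, and TSI series — the coefficient values, periods, and noise levels below are invented for illustration, and this is not the paper’s code or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 384                      # 32 years of monthly data, 1979-2010
t = np.arange(n) / 12.0      # time in years

# Synthetic stand-ins for the three natural factors
mei = np.sin(2 * np.pi * t / 3.7) + 0.3 * rng.standard_normal(n)   # "ENSO"
aod = np.where((t > 12) & (t < 14), 0.1, 0.0)                      # a "volcano"
tsi = 0.5 * np.sin(2 * np.pi * t / 11.0)                           # "solar cycle"

# Synthetic temperature: linear warming plus the natural factors plus noise
temp = (0.017 * t + 0.10 * mei - 2.0 * aod + 0.05 * tsi
        + 0.05 * rng.standard_normal(n))

# Design matrix: intercept, linear trend, and the three factors.
# (The paper also searches over a lag for each factor; the synthetic
# data here has no lag, so lag zero is used throughout.)
X = np.column_stack([np.ones(n), t, mei, aod, tsi])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

# Adjusted data: subtract only the estimated natural influences,
# leaving intercept + trend + residuals
adjusted = temp - X[:, 2:] @ beta[2:]
print(f"estimated warming rate: {beta[1]:.4f} deg C/yr")
```

The `adjusted` series retains the trend and the residual noise but has the natural fluctuations stripped out, which is what makes the warming both visually and statistically clearer.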
The raw data — before the natural fluctuations are removed — look like this:
All five records show similar changes, including an upward trend over the 32 years studied. They also show large fluctuations, more so for the satellite data than the surface data. This has spurred numerous false claims of silly things like “global warming stopped in 1998” (due to the large spike from the powerful el Nino of that year). Large fluctuations also make it more difficult to establish the statistical significance of a trend, leading to meaningless statements about “no statistically significant warming for 15 years” (or 10 years, or 7, or since last Thursday).
After the natural influences are removed, the adjusted data look like this:
With the bulk of the fluctuations removed, the continued course of warming over the entire time span (including the last decade) is undeniable. It’s worth noting that in all five adjusted data sets, the last two years (2009 and 2010) are the two hottest.
With much of the natural fluctuation removed, it’s possible to compute trends more precisely. Hence it’s interesting to consider whether the trend due to global warming has changed during this interval. To that end, trend rates were estimated (along with uncertainties) for a variety of time intervals, starting with all years from 1979 through 2005 and ending with 2010 (error bars are plus-or-minus 2 standard errors):
None of the data sets shows any evidence that the global warming rate has changed recently. A truly fascinating result is that increased precision enables us to establish the statistical significance of a warming trend using a shorter time span than with unadjusted data. All five data sets show statistically significant warming since 2000.
Another interesting result is that el Nino and volcanic aerosols have a stronger influence on lower-atmosphere temperature (from satellite measurements) than on surface temperature (the plot shows the negative of the coefficient for aerosols, so that for all three factors higher values indicate stronger influence, with black dots for global temperature, red for the northern hemisphere, and blue for the southern hemisphere):
That’s one of the reasons that the satellite data show more natural variation than surface data, as well as greater uncertainty in trend estimates when the known factors are not removed. After removing the influence of known factors, uncertainty levels in trend estimates using surface and satellite data are comparable (again, black dots are for the globe, red for the northern hemisphere, blue for the southern hemisphere):
We can even average the five adjusted data sets, giving this:
That shows, with great clarity and impact, the real global warming signal.
And, it should put an end to real skeptics claiming that global warming has recently stopped or slowed down, because real skeptics base their beliefs on evidence. I don’t expect it will have much effect on the behavior of fake skeptics.
I was certain you would be the first to comment, Tamino, and you did not disappoint. As always, great analysis!
Sokkinno and Tamino, I agree.
Once again, a clear exposition of some interesting work. Thanks!
Sadly, the last two sentences are also spot-on.
However, with the solar cycle rising, the weaker ‘double-dip’ La Nina projected to end sometime this spring and China continuing to work on air-quality measures which should decrease aerosols a bit, I think that skeptics with just a bit of ‘real’ about them will find it harder to sustain their orthodoxy over the next few years.
Excellent work, well done. I recently did a quick calculation of the decline in incident solar irradiance associated with the unusually deep and long-lasting solar minimum we’ve just had, and reckoned that it came to -0.25W/m², whilst over the same period the CO₂ forcing was +0.26W/m²… and I didn’t have to fiddle with these figures, it just came out that way. To me that illustrates how effectively the rising atmospheric CO₂ has offset what should otherwise have been a measurable cooling influence for the last decade or so.
Here at AGU, Mike Mann pointed out the recent paper by Knutti et al. attributing 74% of the recent warming to us. Mann’s claim is that it’s actually more than 100%, because our actions have overwhelmed the natural signal, which would be cooling… He made a good point.
I have no idea of the significance, but from the record –
– There should be a general cooling for the past 6000 years. Indeed, comparison with other interglacials suggests that without human influence it would have been significantly cooler prior to the modern period anyway.
– And in the modern period, temperatures should have peaked circa 1940-50 and declined slightly since then.
Of course, the proponents of ‘natural variation’ seem to have a problem with all this. Can’t think why.
The Knutti paper claims that at least 74% of warming is due to human influence. The deniers have touted that as exactly 74%, but that is not correct.
This shortens the time needed to evaluate changes in the trend significantly. Assuming warming suddenly stopped, how long would it take to see a two-sigma effect on the five combined datasets? How long for some realistic effect of lowered CO2 emissions? Roughly a decade for the latter?
Nice analysis. Shows what can happen when a field draws in people with a variety of expertise.
I feel this is one of the most significant papers on the subject in years. The reason the “sceptics” are lashing back against it is that they realize the danger it presents to their short-term noise-exploiting strategies.
The “sceptics” want to keep the noise in the records (so long as that noise benefits them – you bet once gT starts rising fast when the noise reverses they will start drawing attention to the noise)
If CRU or NASA maintained a monthly updated adjusted record like this which was regarded as the proper global temperature signal, then “sceptics” would be stripped of the ability to exploit noise in the records as they do.
I’ve always felt a regularly updated ENSO-adjusted version of the GISTEMP and CRU records at the very least should be produced for public consumption so that the media and bloggers can no longer get away with exploiting La Ninas (“half the global warming of the last 30 years has disappeared” kind of rubbish)
Let me see if I got it right:
1- You fit every series with a linear model of the form A * ENSO + B * Aerosols + C * TSI + D * time + residuals. The A, B, C and D are independently estimated for each series.
2- The resulting plot (2nd graph in this post) is simply D * time + residuals, for each series.
Assuming I got it kind of right, is there any condition imposed on the residuals (e.g. zero-mean, normality, etc.) ?
[Response: No condition is imposed on the residuals. However, the regression also includes a constant term (the “intercept” rather than the “slope”) so the residuals will show zero mean.]
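The zero-mean property is easy to check numerically: with an intercept in the model, least-squares residuals are orthogonal to the constant column and so average to zero by construction. A tiny sketch with synthetic data (any regression package behaves the same way):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100)
y = 2.0 + 0.5 * x + rng.standard_normal(100)

# With an intercept column, the residuals are forced to have zero mean
X = np.column_stack([np.ones(100), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
print(resid.mean())   # ~0, up to floating-point error
```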
I’m curious as to why time is included in the original model. Why not just exclude time and look at the residuals? When I look at the effect of MEI (without AOD and TSI), I get a very different result if I look at the residuals of a model with MEI alone versus if I include time and calculate the adjusted data as B*time + residuals.
Something tells me there is a very simple answer, but I can’t fully wrap my head around it. Any help would be appreciated.
[Response: When you regress against MEI (and nothing else), notice two things. First, the fit is not statistically significant at all (taking autocorrelation into account). Second, the *sign* is wrong. That tells you that leaving out the linear trend causes its variance (which dominates the variance of the temperature signal) to overwhelm the influence of MEI.
Perhaps I’ll do a post about this.]
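Tamino’s point about the omitted trend is easy to reproduce with synthetic data. In the sketch below (invented numbers throughout), the fake “MEI” drifts slightly downward while temperature rises, standing in for the real MEI’s slight decline over 1979–2010. Regressed alone, its coefficient comes out with the wrong sign; adding a linear trend to the model recovers the true value:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 384
t = np.arange(n) / 12.0  # years since start

# Synthetic "MEI" with a slight downward drift over the period
mei = np.sin(2 * np.pi * t / 3.7) - 0.1 * t + 0.3 * rng.standard_normal(n)
# Synthetic temperature: rising trend plus a true MEI effect of +0.05
temp = 0.017 * t + 0.05 * mei + 0.05 * rng.standard_normal(n)

# MEI alone: the temperature trend leaks into the coefficient,
# and its sign comes out wrong (negative)
b_alone = np.polyfit(mei, temp, 1)[0]

# MEI plus a linear trend: the true coefficient (+0.05) is recovered
X = np.column_stack([np.ones(n), t, mei])
b_joint = np.linalg.lstsq(X, temp, rcond=None)[0][2]
print(b_alone, b_joint)
```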
Thanks – I’m getting there. A post sometime would indeed be useful.
Could you provide a reference (hopefully R code) demonstrating how you carried out your autocorrelation estimates?
[Response: There’s a built-in function in R called “acf”.]
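For readers working outside R, the simplest (AR(1)) version of the autocorrelation correction can be sketched in Python. Note that the paper itself uses the more general ARMA(1,1) model described in its appendix, so this is only an approximation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 384
t = np.arange(n) / 12.0

# Synthetic series: a trend plus AR(1) noise (phi = 0.6)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + 0.05 * rng.standard_normal()
temp = 0.017 * t + noise

# OLS trend and its naive (white-noise) standard error
X = np.column_stack([np.ones(n), t])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ beta
s2 = resid @ resid / (n - 2)
se_naive = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))

# Lag-1 autocorrelation of the residuals (what R's acf() reports at lag 1)
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]

# AR(1) correction: inflate the standard error by sqrt((1 + r1) / (1 - r1))
se_ar1 = se_naive * np.sqrt((1 + r1) / (1 - r1))
print(beta[1], se_naive, se_ar1)
```

With autocorrelated noise the naive standard error is far too optimistic; the inflated one is roughly double here, which is why ignoring autocorrelation produces spurious “significant” short-term trends.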
What would figure A.1 look like if the ARMA(1,1) curve was plotted as well as the AR(1) curve?
So, if Phil Jones had had these results when he was asked by the BBC in February 2010 the famous question,
“Do you agree that from 1995 to the present there has been no statistically-significant global warming”,
he would have been able to answer,
“No, I do not agree. I refer you to the Rahmstorf and Foster paper that shows that indeed there HAS been statistically significant warming since then”
Of course, that being the case, the question would never have been asked in the first place.
You have focused on 1979 onwards for the reasons you have given, but I assume it is possible to apply this approach to earlier temperature data.
If you compare that filtered data to human-produced aerosols and GHGs, could you determine how much of the warming could be attributed to GHGs? And perhaps get some estimate of climate sensitivity?
[Response: The present method includes a linear time trend to represent global warming, which is valid because the global warming impact since 1979 is approximately linear. But if you extend the analysis before about 1975, the global warming trend is no longer approximately linear. In that case you should use climate forcing rather than a linear trend to represent the man-made influence.
Which is exactly what was done by Lean & Rind (see references in the Foster & Rahmstorf paper).]
Now that’s what I call a good offense! Very nice!
I still have an annoying problem with using something defined by temperature (ENSO) as a factor to be removed when estimating the trend in temperature. Hopefully that’s not what the fake skeptics are complaining about (because that would make me feel dirty).
[Response: You don’t have to use temperature to define ENSO. You can use the southern oscillation index SOI instead (which is based on pressure rather than temperature), and in fact the paper tested using SOI and it didn’t change the results. And MEI isn’t exclusively temperature either, it’s a combination of a *lot* of things.]
I would really like to see the influence of these three factors (especially the TSI) removed from the global temperature graph going back to 1900, or even to the LIA. Removing the dramatic rise in solar activity (seen here http://upload.wikimedia.org/wikipedia/commons/6/60/Solar_Activity_Proxies.png) would have a dramatic effect on this graph http://en.wikipedia.org/wiki/File:Instrumental_Temperature_Record_(NASA).svg
[Response: Check out the papers by Lean & Rind referenced in this paper.]
Possibly 2011 will even top the previous two years, being the warmest La Niña year on record.
Great article and a good summary here. Thanks a lot.
In your previous comparable analysis you ended up with a combined warming rate of 0.17 degrees per decade:
Do you have the slope of the last figure here (figure 8 in the article)?
[Response: The previous analysis used different data sets, including for temperature, because RSS released a revised data set in the interim which shows slightly less warming. It also used volcanic forcing from Ammann et al. for aerosols rather than AOD from Sato et al., and used sunspot counts as a proxy for solar activity rather than TSI.
The trend of the 5-data-set average (in figure 8 of the article) is 0.16 deg.C/decade.]
Too hasty as usual. You could leave out the erroneous “you gave”.
What is interesting to me is the apparent recent decline in the rate for the land sources with simultaneous increase in the rate for the satellite measurements.
[Response: Did you not look at the error bars? There is ZERO evidence of any recent decline in the rate, for ANY of the data sets. Period. But if you’d rather believe in temperature trends over 6-year time spans, I certainly understand why.]
I did see the error bars (after all they are what the lines are based upon).
Perhaps I’ve been reading graphs like this incorrectly all along. I had thought that while it was possible that the “real” rate could be at either extreme, the dot in the middle of the error bar indicated that the “real” rate would most likely be found around there. Is that not correct?
If that is not correct, then I (a non-statistician – could you tell?) have to wonder what the point of the lines and dots is.
[Response: If measurement 1 indicates you’re at a particular Starbucks in downtown Seattle, then measurement 2 says you’re somewhere in the Washington-Oregon area, you’d be mistaken to conclude that there’s evidence of having moved.]
I note that the 1998 El Nino peak is still apparent, though diminished, in the adjusted data. Does this imply that not all of the ENSO effects are being removed during the adjustments? (If not, is there any other explanation of the remaining peak?)
If the former is the case, then could the “recent decline in the rate for the land sources with simultaneous increase in the rate for the satellite measurements” that KenM noticed simply be the result of the trend lines ending in 2010, which was an El Nino year. If there is residual El Nino influence still present, to which the satellite data are more sensitive, that would tend to produce the observed effect.
The bottom line, of course, is that any such effect, based upon data from such a short time period, has no significance with respect to the long-term trend.
[Response: No significance period. And KenM only thinks he noticed a recent decline in the rate for surface vs satellite measurements — there’s zero evidence of any such physical event.
I would say that the 1998 el Nino had a stronger-than average temperature effect, *in addition* to being a stronger-than-average el Nino. So by removing the *average* effect of el Nino, the model doesn’t get all of the 1998 event out of the temperature record.]
I think I’ve almost got it. Please allow me one more question. If the second measurement puts me centered over Portland, OR, and the error bars just barely include the Starbucks in Seattle, is it fair to say:
“I can’t say with certainty that I have moved, but more likely than not I have, and towards Portland”?
I’m not sure I want to contribute to discussions about Starbucks in Seattle, but with reference to Tamino’s “No significance, period”:
I would say, “Indeed”.
And I would add, that were Tamino to update the calculation of the Rate(C/decade) lines in five years time, then I would expect the present slight wiggles at the ends of the lines to 2010 would have disappeared, to be replaced by a continuation of the previous smoother lines.
But, here’s the rub, more slight wiggles would appear at the ends of the new lines, and someone else will be wondering what they signify. And perhaps wondering also why their innocent enquiry meets with exasperation.
I think for non-statisticians it’s best just to remember that ends of lines often have wiggles. After all, that’s how fishermen catch fish.
Thanks Slioch. It is difficult for me to reconcile the concepts like “the dot represents the best estimate of the trend” and “we can assign no confidence to the trend equaling the dot.” To this layman, “best” and “no confidence” seem to be a contradiction in terms. I think if I just stick with “the dots don’t mean anything at all” then I’ll have the right idea.
[Response: The dot represents the best single estimate based on the available data. But the error bars represent a better evaluation — they give an idea of the precision of the estimate. You could well say “I’m in or near Seattle” because that’s the best single location estimate, but if we only know your location within several hundred miles then “I’m probably in the state of Washington” is a much more realistic portrayal, whereas declaring “I’m probably in Seattle” is stretching a point way beyond reasonable limits.]
As another example of how ‘the ends of lines often have wiggles’, as I rather flippantly put it, take a look at the graph of HADCRUT3, shown here:
This shows annual global average temperature data (blue and red bars), and the thick black line is the smoothed value calculated from that data.
Calculating the thick black line is done by means of what is called a “21 point binomial filter”: the black line value for any particular year (Year X) is calculated using the value for the annual temperature (red or blue lines) for that year and for ten years either side (hence 21 years in all). So, for example, the black line value for 1990 is calculated using annual data from all years from 1980 to 2000. But it isn’t just a simple average – annual values are weighted according to how close they are to Year X, using the coefficients (summed to one) from the twentieth row of Pascal’s triangle to apply the weighting. See:
(the twentieth row is actually the 21st, since the first row is labelled 0).
That all sounds fine and dandy, until you realise that it is therefore impossible, at present, to calculate the true black line value of any year after 2000 – because we don’t, of course, yet have annual data for any of the required years after 2010. So how is it that the black line extends to 2010 in the above graph? Well, it might have been better for CRU simply to have stopped the black line ten years before the present, currently at 2000, and left it at that. Though then, no doubt, they would have been accused of “hiding the black line”. So CRU decided, when calculating the black line for years less than ten years ago, to simply use the value from the last year for which annual data was available AS IF all subsequent years had the same value. This has three consequences:
Firstly, each of the black line values for the last ten years is PROVISIONAL and WILL (almost certainly) change as the years roll on and more data becomes available.
Secondly, the last bit of the black line becomes unduly sensitive to the last year’s value – hence the slope of the line is likely to change – it WIGGLES at the end.
Thirdly, in a time of warming temperatures, most of the next ten years will actually be warmer than the current last year. Thus the end of the black line is currently depressed compared to what it will eventually be. This was very evident when the cooler year, 2008, was the last year used and the black line actually headed downwards, an event that got the AGW deniers very excited with tales of impending ice ages and suchlike nonsense.
So, the ends of such lines wiggle, and the closer you get to the present, the more they wiggle. But the wiggles have no significance: they are the consequence of insufficient data. Had Tamino continued his lines to 2009 (which he could have done) his lines would have wiggled far more. Had he stopped them much earlier than 2005 we wouldn’t be having this conversation, but probably someone else would have accused him of hiding something. You can’t win. Hence the exasperation.
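Slioch’s description translates directly into code. A toy reconstruction of the 21-point binomial filter and its end-padding (not CRU’s actual implementation):

```python
import numpy as np
from math import comb

# 21-point binomial filter: row 20 of Pascal's triangle, normalised to sum to 1
w = np.array([comb(20, k) for k in range(21)], dtype=float)
w /= w.sum()

def smooth(x):
    """Smooth annual data CRU-style: pad the final decade by repeating
    the last observed value, then convolve with the binomial weights.
    Output element j is the smoothed value for year j+10 (the first
    ten years get no smoothed value in this sketch)."""
    padded = np.concatenate([x, np.full(10, x[-1])])
    return np.convolve(padded, w, mode="valid")
```

Because the last ten smoothed values lean on the repeated final observation, they are provisional: re-running `smooth` after appending new (warmer) years pulls the end of the line upward – exactly the wiggle being discussed.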
Is it a good thing that some readers might think Tamino has no connection with the paper discussed?
[Response: I am the principal author. I thought I had been “outed” so many times that everybody knew by now. Apparently you do.]
Tamino, in response to comments at New Scientist, I posted this (edited for clarity):
Kevin C and to a lesser extent CG, see Tamino’s post where in methodology he states:
“The impact of el Nino is characterized by the Multivariate el Nino index (MEI), that of volcanic aerosols by Aerosol Optical Depth (AOD), and solar output by Total Solar Irradiance (TSI).”
Interestingly, that suggests to me that neither Chinese aerosols nor deep ocean heating (which may correlate with ENSO) is excluded as a contributor to short-term trend changes, since MEI and AOD possibly relate to aerosols and to deep ocean heating…
Is that close to correct in your view? I couldn’t see that non-volcanic aerosols were excluded in AOD…
You find that by removing exogenous influences, you can establish statistically significant warming using much shorter spans than with raw data. Do you have to somehow take the uncertainty in the estimates of those exogenous influences into account when determining the probable error of the global warming trend estimate? Or can you simply calculate the standard error of the “residual” warming (and correct for auto-correlation) as you would do if it were the raw temperature data? (Am I making sense at all?)
[Response: Just use the residuals to define the uncertainty of the trend estimate.]
Very interesting, and very confirming of recent sensitivity estimates. First of all I was impressed by the very close agreement in warming rate for the three surface datasets (.17 C/decade). Over the 31 years of the study, that means .17 x 3.1=.527 C of warming.
Over the same period (1979-2010) CO2 in the air has risen from 336.78 ppmv to 389.78, or 53 ppmv. In other words, 10 ppmv = .1 C of warming. Doubling (an increase of 280 ppmv) would then imply 2.8 C of warming, exactly the same figure derived by Royer et al. from their paleo studies going back 500 million years.
It looks like an impressive agreement, but I’m afraid it’s coincidental. Firstly, remember that over 30 years you’re going to get an estimate of the transient climate sensitivity, whereas IIRC Royer’s work looks at long-term sensitivity. Secondly, CO2 forcing is logarithmic, so it’s not valid to linearly extrapolate in the way you have. Doubling from 336.78 ppmv requires an increase of 336.78 ppmv, not 280, and each doubling causes the same amount of warming, to a reasonable approximation. So the correct approach is to compute log2(389.78/336.78), which is roughly 0.21, putting us about one fifth of the way to a doubling over the period in question. That yields an estimated transient sensitivity in the region of 2.5 K per doubling of CO2, most likely corresponding to a Charney sensitivity significantly higher than Royer’s.
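Spelled out with the figures quoted above:

```python
import math

warming = 0.17 * 3.1                    # deg C over 1979-2010 (0.17 C/decade x 3.1 decades)
doublings = math.log2(389.78 / 336.78)  # fraction of a CO2 doubling over the period
tcr = warming / doublings               # implied transient sensitivity, deg C per doubling
print(round(doublings, 2), round(tcr, 1))  # 0.21 2.5
```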
Tamino – Are the “raw data” of the five indices really as separated as shown on the first graph? I’ve never seen any other comparison graph where the 1980 anomaly values ranged from ~ -0.5 to +0.1, at least not that I can remember.
[Response: No. Each series is offset from the others for greater clarity. If they matched up as well as the adjusted data, that wouldn’t be necessary.]
I have attached a graph with the offsets removed. That allows the same scale to be used as is used in the “Adjusted Data” chart. Having the same scale on both charts makes it much easier for me to compare the raw and adjusted charts. Here is the link.
So many adjustments being made over 160 years. Adjustments of similar magnitude to the anomaly, and all within 0.6 degC. Huge numbers of weather stations changing over that period. Interpolations etc. It worries me.
[Response: This worries me. So does this.]
You want to worry? Run this animation: http://svs.gsfc.nasa.gov/vis/a000000/a003800/a003817/
Tamino: I don’t suppose this analysis was even a tiny bit inspired by the McLean et al 2009 paper that kind of did the opposite in every respect? (Subtracted the global warming signal then claimed ENSO and volcanoes were all that’s left – and stretched that to all that actually exists?)
One more thing: Over here in COP-out-17 land, I’ve used this in what I hope is a clear presentation to the uninitiated. Comments and corrections there welcome.
Why does it worry you?
Do you not understand the difference between valid and invalid “adjustments”?
Do you not understand how different sources of data are calibrated?
Do you not understand the reason why frequent repetitions of measurement permit the statistical identification of seemingly small differences?
Do you not understand anything of the large and detailed field of statistically-based data analysis that professionals use to make sense of what humans are doing to the planet’s climate?
The answer to the last four questions is most obviously and assuredly “no”, or you would not have been ‘worried’ in the first place.
Never mind – I looked at the paper and saw that you didn’t baseline them to a common period.
What effect will the discovery that the Solar constant is smaller than previously thought have on any of this? Obviously it doesn’t change the history of _changes_ in TSI, but Judith Lean and colleagues are now saying the TIM instrument on SORCE gives the last minimum as 1360.8 W/m^2, not 1365.4. The average is probably 1361.6 rather than 1366.1. I’m waiting for the deniers to try to use this somehow.
‘All five data sets show statistically significant warming since 2000.’
Does this statement correspond to figure 6? If so I’m confused by it. It looks like the trend for that period is down for surface measurements and up for satellite. What am I missing? Thanks.
[Response: The x-axis goes up to 2005. Look at 2000, and note the range of the error bars.]
From 2000 on (following the x axis) it appears that two standard errors grow to encompass most of the y axis. Doesn’t that imply less confidence about the true value, and wouldn’t a given sample tend to cluster about the mean? I assume here that the solid line in the graph is the mean. So maybe that’s my mistake. Thanks.
[Response: I find your terminology confusing. I have no idea what you mean by “cluster about the mean.” What do you mean by “the solid line”? Are you referring to the line connecting the dots which have error bars extending above and below them?
The true value is what it is. The dots indicate the best estimate for each possible starting time. The error bars indicate roughly the 95% confidence interval. Perhaps the best interpretation of the graph is that for each starting time, the trend since then is “somewhere between the limits of the error bars” — but it’s not possible to assign it to the central estimate (the dot) with any confidence.
For later start times (shorter time spans) the uncertainty gets larger so the estimates will likely be *more* different from the true value.]
The lower limit of the error bars stay above zero, therefore warming. The graph shows the rate of warming, not the temperature.
Thanks, but that doesn’t appear to be true for any of the data sets (although it’s hard to tell because of overlap). The lower limit of the error bar for each data set is below 0 for the final observation in each set. If that is true how can we say there is statistically significant warming? The only place I see warming on figure 6 is in the true values for the satellite data (which the error bars tell me I may be confident in). But I get the feeling I’m missing something obvious with the surface temps.
Any chance of seeing this graph presented more clearly without the overlap of the error bars?
Not trolling. Just confused.
Okay, I stared at it until it made (more) sense. Figure 6 shows that warming occurs (where true value > 0) but more in some years than others, yes? So some warming occurs – sometimes more sometimes less. I was confused by – mentally – imposing a trend line which has a negative slope for the surface data but a positive slope for the satellite. Q.E.D – warming did not stop in 1998 as is sometimes claimed. Well and good I think – and please correct me if I’m wrong.
I’m still confused as to why the range of the error bars increases over time. Why is that?
[Response: Each point represents the trend estimate from that particular starting time until the end of 2010. Later start times correspond to shorter time spans, therefore the uncertainty in the trend estimate grows larger.]
The final observation in each set represents the trend from 2005 to 2010. It’s hardly a surprise that such a short timespan doesn’t give statistically significant warming. It also doesn’t give a statistically significant difference from the earlier rate of warming.
However, none of the observations representing trends from 2000 to 2010 contain zero, and so all five datasets show statistically significant warming over that period, as Tamino said.
Thanks to Tamino, @JollyJoker, and @MartinM for their perseverance.
One last thing to wrap this up in a neat, tidy bow…
@MartinM and @JollyJoker point out that none of the error bars since 2000 (or 2005) contain zero, thus showing statistically significant warming. I agree that would be the case, but it looks to me like the graph _does_ show that some of the error bars later in the series (where there is less confidence) do in fact contain zero. The graph is kind of small with a lot of information crammed into a small space, so it’s hard to tell/read at that end of the x axis.
[Response: Indeed, many post-2000 values do not establish statistically significant warming.]
I myself have done a very similar analysis (but far less sophisticated) to that in Foster/Rahmstorf using nothing but Excel and manually tuning the magnitude and lag of the ENSO, AOD, and solar forcings. It’s quite easy to do, and while it’s not worthy of publication, anyone with Excel and a small amount of statistical knowledge can verify Foster & Rahmstorf’s conclusions for themselves.
It’s so easy that it ought to be required of anyone who wants to be taken seriously in a discussion of climate science. Anyone who comes away from the analysis still thinking that human activity isn’t responsible (simply extending the manual lag tuning technique to CO2 correlations provides a strong pointer to CO2 being the driving factor for global temperatures, not the other way around) had better have a rock-solid argument to not be laughed out of any serious discussion.
How can there be different lag times between temperature series of the same type (GISS, NCDC, HadCRU)?
[Response: The lag is estimated by the analysis, as the value which gives the best-fit model. It’s an approximation based on the data, different data sets give different approximations.]
What are you using for TSI? You provide links to the other datasets. I found this, which has different composites.
[Response: We used PMOD TSI. The values we used were kindly transmitted to us by Dr. Frohlich since their online data wasn’t up-to-date when we started the analysis.]
Also with TSI, how can we reconcile your TSI values which cause cooling with the forcings for GISS-E where recent TSI is always a positive forcing?
[Response: You seem to be confused. Of course TSI is always positive, as is its forcing. But TSI varies, when it’s higher it causes higher temperature, when it’s lower it leads to lower temperature.]
Why is AOD forcing flat after 2000, when the data you link to clearly has minor peaks after 2000?
[Response: Note that the AOD data are constantly being updated. We used what was available when we started our analysis (back in early February).]
I have asked this question before. Using my former-astrophysicist eyeball, Mark I, it seems that some residual oscillation is present. Is there any residual in the Fourier spectrum, or is this purely stochastic? If it's real, could this residual signal be removed as well?
The linear warming trend extracted from the data is extremely clear – excellent work, a great paper!
A question: Given the techniques used, would it be possible to run a regression against these contributors _plus_ various components of anthropogenic forcings (http://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-2-1.html), such as CO2, CH4, ozone, and in particular aerosols? Breaking the warming trend into their contributions? The actual contribution of aerosols to temperature is poorly constrained, and it occurs to me that this might be a method that could ‘back out’ the aerosol contributions, providing _some_ additional constraints on them.
CO2 data is readily available and accurate, as is CH4 – tropospheric ozone is less constrained by data, aerosols much less.
One of the reasons I ask is that a simple linear trend seems a bit overly simplistic, given issues such as aerosol variations, economic changes in GHG production, and the like. I don't know if the data is sufficient to justify such an exercise, if it can statistically support more than a linear trend estimate, but if there are additional underlying non-linear variations it might be an interesting refinement.
[Response: The linear trend is only an approximation, but since the residuals were scrutinized for nonlinearity it’s certainly a good one for this time span.
Mathematically, of course you can regress temperature on anything you like. But when you include a lot of predictors you can run into problems with collinearity (when predictors are very similar). Another approach is, instead of using a linear time trend, use net climate forcing estimate — as was done by Lean & Rind (see their papers listed in the reference list). This enables you to include a longer time span, for which the temperature trend is demonstrably nonlinear.]
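The collinearity problem mentioned in the response can be made concrete with a quick variance inflation factor check. The snippet below uses synthetic data (not anything from the paper) to show why a CO2-like predictor and a linear time trend cannot be cleanly separated in one regression:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 384
trend = np.arange(n, dtype=float)                        # linear time trend
co2 = 340.0 + 50.0 * trend / n + rng.standard_normal(n)  # CO2-like rise (toy)

def vif(x, others):
    """Variance inflation factor of predictor x given the other predictors.
    VIF = 1/(1 - R^2) from regressing x on the others; large values mean
    the predictors carry nearly the same information."""
    X = np.column_stack([np.ones(x.size)] + list(others))
    beta, *_ = np.linalg.lstsq(X, x, rcond=None)
    r2 = 1.0 - (x - X @ beta).var() / x.var()
    return 1.0 / (1.0 - r2)

# A CO2-like series and a linear time trend are nearly collinear, so
# their separate regression coefficients would be poorly determined.
print("VIF:", vif(co2, [trend]))
```

A VIF above roughly 10 is the usual warning sign; here it is far above that, which is the situation the response warns about.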
Congratulations on the paper! Fantastic post as usual, crystal clear – for those of us willing to entertain logical, rational thought anyway.
Tamino, did you test for non-linearity in the effects of the exogenous variables? Or was the data set too small to do this with much power, since any quadratic effects will necessarily have some collinearity with the linear effects?
I’ve seen a number of “sceptics” try to use the Foster & Rahmstorf paper to get low estimates of global warming and sensitivity.
Often these estimates rely on extrapolating forward a century in time. While the estimated temperature rise is consistent with a constant rate over the past three decades, extrapolating forward a century strikes me as risky. (Certainly extrapolating backwards a century doesn't work.)
Also, as discussed previously, presumably one has to be careful about using the CO2 and temperature over the past 3 decades to estimate climate sensitivity as neither are in equilibrium.
I would welcome comments on both of these points.
Extrapolating forward to the end of the century gives an increase of about 1.5°C over current temperatures. I don't see how this fits in with extremely low sensitivities.
Given the “expertise” of the relevant “sceptics”, I’m sure their low estimates of sensitivity arise from more than just one error or suspect assumption.
Now that I’ve taken a little time to understand what was being done, I have a question-and apologies if I get the language wrong: Can any of the coefficients of the forcings (noise) be used to improve GCMs or GCM scenarios….. as opposed to extrapolating this linear section of what may turn out to be non-linear over longer periods of time? Or would this be taking away from the physical basis of the GCMs? Or could it be the basis for a hybrid GCM, where you use scenarios of TSI, historical averages for AOD and physics based ENSO to generate MEI?
I’m not sure it’s a good idea to estimate CO2 sensitivity from the final graph. The line you show could still be influenced by the “slow-response” component of the response to the “eliminated” forcings, right?
This is such a straightforward analysis it is surprising it hasn’t been done before. Does the paper offer an error term around the 0.16°C per decade?
In summary, though, this sounds like good news, in that warming is a bit below the IPCC low estimate and makes the “It’s too late… we will be 3.5°C warmer by 2100” disaster scenario less likely… or am I missing something?
[Response: IPCC projections do *not* forecast steady warming throughout the century, or even over the next several decades — that’s a misrepresentation often claimed by those who deny the reality and/or danger of global warming. They project “about 0.2 degC/decade” (note the “about”) over the next several decades, but it’s expected that the warming rate will increase from its present value. And (also contrary to misrepresentations) a lot depends on what we do.]
Thank you for the comments, which are broadly consistent with what I had expected.
My discussion with Australian “sceptics” follows an article I wrote that features a parody of a “sceptic” climate model: The Manchester United Climate Model. The Man U model “explains” about half of global warming and features several of the missteps that Tamino and others have flagged as dodgy in “sceptic science”. http://theconversation.edu.au/how-david-beckham-caused-global-warming-the-man-u-climate-model-4548
What happens if you do the same thing with land-only data (for both surface and satellites)? Does it reduce the apparent gap between the two?
[Response: I don’t know. Interesting question, though.]
I may be a little late to this party…but I did have a question for clarification (I’m sure you probably already included this in one of your diagrams, but I’m missing it somewhere).
In so many of these charts that graph the global temperature anomaly, the degree of this anomaly is always different (depending on the data set being examined and the reference points). I suppose it’s too much to ask for all these outfits to use the same reference points, so that the anomalies can all look apples-to-apples-y. I believe you have done this in your ‘raw’ and ‘adjusted’ graphs (correct me if I’m mistaken). Your assessment puts the current anomaly at 0.5 or 0.6-ish, and the ‘adjusted-to-filter-out-everything-but-human-things’ at 0.3-ish .. is that right?
If I may, I have a follow-up or two (anyone can answer this if they know)… I had read that the anthropogenic portion of the anomalous warming is about 75%, with others saying that it could be over 100%, considering the possibility that the net of all other effects is cooling. How does this research compute the proportion of human causes relative to the total? I see that the conclusion is to make clear the reality of increasing human contribution, but I’m wondering about this proportion to the total as computed by your research.
Also, I sometimes see differences in various websites’ depictions of the ‘raw’ data (I don’t know why). For example, I think your chart of ‘raw’ UAH data is not the same as Dr. Spencer’s (and I think he’s the one “in charge” or whatever for that data). Can you give me some clarity on this? The differing anomaly magnitudes for the same data with unexplained reference years make it really confusing for me when I encounter them on various websites. Thank you.
First, in science, it is important to keep one’s eyes on the physically relevant quantity and avoid being distracted by irrelevant information. What matters in terms of anthropogenic climate change is the long-term change, or trend.
The uncertainties as to the amount of warming attributable to CO2 are at some level a sideshow. We know pretty well via many independent lines of evidence that you get around 3 degrees per doubling of CO2. So whether other forcings are contributing positively or negatively to the warming we’ve seen isn’t central. The question of how much warming is “in the pipeline” is also interesting, but not central to attribution.
As to discrepancies in the datasets, they are all publicly available and should be consistent if obtained from the official sites. Can you be more specific?
I’m not sure how specific I can be without being rejected for spamming too many links. I can try though. Tamino’s “raw” graph up above depicts ‘temperature anomaly’ as being at 0.0-ish to 0.6-ish depending on data set. There are no reference years given for that graph (that I can see from the graph).
GISS has its own graph (I’m assuming it is ‘raw’): http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.gif
Although it appears to end with more or less the same 0.6-ish figure, the other peaks and dips don’t seem to line up as well.
Another point of reference, Dr. Spencer’s UAH data:
It shows a 1998 peak near 0.7, rather than the near 0.0 shown in Tamino’s “raw” graph… (there are other differences as well that I would expect wouldn’t exist if everything were the same).
Here’s another graph that compares a bunch of them:
But only has a 0.1 anomaly
My point specifically addresses this important long term trend. Is there a commonly held reference-point for this anomaly data (and commonly held data itself)? I would be expecting everyone to show the same thing for the same data sets. It doesn’t matter to me that each different data set has its own fluctuations and assumptions (they all are showing more-or-less the same sorts of trends)…But if we can’t even consistently present a single data set with its specific anomalies, how can that be a good thing?
And… to my other point… It may not be ‘central’ to you to discover the participatory component of man-made CO2 as gleaned by Tamino’s work, but I find it highly interesting, because it appears Tamino’s work is designed to provide an answer to my question. I was merely asking what it is. I agree that current understanding posits a 2.5–3°C temp increase from a doubling of atmospheric CO2 (and that temperature increase due to anthropogenic CO2 is undeniable, etc.), but Tamino’s work has been specifically conducted to filter out the present observation of current anthropogenic contribution relative to the whole. To me it is interesting indeed… so I thought I’d ask, and am still hoping for a response.
[Response: There’s no single “reference point” (or “zero point”) for temperature anomaly time series. In fact each of the 5 major data sets uses a different zero point, chosen as the average value during a baseline period (and each uses a different baseline period). So, each data set is offset from the others by an arbitrary constant, which really doesn’t give any information about how temperature has changed. It’s common, when comparing data sets (as in the last graph to which you link), to re-set the data values so they’re all on a common baseline.
The important thing is the temperature change, so the choice of zero point (and of baseline) isn’t very important.]
As stated in the paper re the ‘Raw Data’ plot:
“Annual averages of the monthly data from all five sources are shown in figure 1. All have been set to the same baseline (the entire time span, January 1979–December 2010), then offset by 0.2°C for plotting.”
Perhaps a bit OT for this thread: which method is the best for recalculating to another baseline?
Because the five monthly time series for world temperatures are anomalies from the mean of the same months in a base period, I am used to rebasing every month separately into 1981–2010 anomalies in order to compare them: calculate the mean anomaly over Jan in the new base years, and subtract that from all existing Jan data. Do the same for Feb, Mar, etc. This results in every month, as well as the year, having a mean of zero anomaly over the new baseline.
Another method is to calculate the mean anomaly of all months together in the new base years, and subtract that from all monthly data. The mean of yearly data in the new baseline still has zero anomaly, but the monthly means probably not.
The second method could reveal that some months warmed/cooled more than other months between the original and new baselines. The first method would ‘hide’ that information, but there are other methods to investigate that. The first purpose of a baseline is to set some ‘normal’ values for months, seasons and years, and I guess all five time series for world temperatures started with the first method. As far as I know only UAH has changed its baseline recently, using the first method described above.
[Response: The difference between the two methods is small, but I’d say the first method (separate values for each month) is better. As you say, if you want to study how the seasonal cycle has changed there are other ways.]
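The two rebaselining methods discussed above are easy to make concrete. Below is a minimal sketch using toy monthly anomalies (the array shape and baseline years are illustrative, not any of the real data sets):

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy monthly anomalies, 40 years (say 1971-2010) x 12 months,
# expressed on some arbitrary original baseline.
anom = rng.standard_normal((40, 12)) + 0.3
new_base = slice(10, 40)  # rows corresponding to a 1981-2010 baseline

# Method 1: rebase each calendar month separately
monthly_means = anom[new_base].mean(axis=0)  # 12 values, one per month
rebased1 = anom - monthly_means              # broadcasts across years

# Method 2: subtract a single overall mean of the baseline period
rebased2 = anom - anom[new_base].mean()

# Both leave the overall mean anomaly over the new baseline at zero, but
# only method 1 also zeroes each individual calendar month's baseline mean.
print(rebased1[new_base].mean(axis=0).round(12))
print(rebased2[new_base].mean(axis=0).round(3))
```

The difference between the two outputs is exactly the seasonal information the comment describes: method 2 leaves per-month offsets that method 1 removes.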
The Foster and Rahmstorf paper is not really an attempt to extract sensitivity–there are too many confounding factors. Several analyses have recently been claiming that aerosols are masking a significant amount of warming. This doesn’t mean we’re off the hook. Aerosols last at most a few years, so once we stop burning fossil fuels the warming would come back with a vengeance. Then there is the question of warming in the pipeline–basically, warming must continue until we reach top-of-atmosphere radiative equilibrium again. We don’t know how long that is, but it is certainly longer than 30 years.
I’ve only just got round to reading the paper this afternoon. Thanks, it’s a worthwhile and useful paper. What interested me most going into it was the recent 2000 to 2010 period and the claims that temperatures have been level over that period (which seem to be based on CRU). Figure 3 seems to shed some light on this.
In table 2 you (and Stefan) account for the changes over the whole 1980 – 2010 period, where the variance for solar is the least, behind ENSO & AOD. However looking at figure 7 it seems that most of the change opposing the AGW trend in the last decade (2000-2010) is due to ENSO with a small contribution from TSI. You do this for GISS & RSS (the two datasets I tend to prefer). But am I correct in suspecting that for 2000 to 2010 CRU would have a greater ‘cooling’ contribution from ENSO and TSI over this period?
“Aerosols last at most a few years, so once we stop burning fossil fuels the warming would come back with a vengeance. ”
I recently blogged on that issue about work by Kyle Armour and Gerard Roe,
The original papers are linked to on that page.
Tamino – this is very useful science in the political policy arena as much as it is good science in the climate science arena. It’s in the former that mustering the motivation to do the minimum necessary has to be Fostered. (Sorry couldn’t resist.)
On the matter of true identities I’ve decided to abandon my own pseudonym of “Ken Fabos” in favour of my true name; I’m not ashamed of anything I’ve said… well mostly; I probably have said a few dumb things over the years. But I’m not afraid to apologise when I do get it wrong.
The BS denial thing needs slapping down and the best weapon of all (especially for someone who thinks violence is mostly ineffective and counterproductive) is the truth.
None of the foundation institutions of our modern civilisation seeks truth more effectively than Science. In its absolute form Truth may always remain elusive, but modern climate science is a jewel in the crown of human achievement. If we screw this up it won’t be a failure of Science.
Very interesting article. Is it possible for you to extend this graphic you’ve posted to 2010? https://tamino.files.wordpress.com/2011/12/figure06.jpg
[Response: That would involve computing the trend from Jan. 2010 to Dec. 2010. Such a trend would be utterly meaningless. Any trend from later than 2005 to the present is too unreliable to be informative. Extending the graph will cause the estimated values to vary wildly while the error range expands greatly.]
Are you any relation to Karina Fabian, the anthologist?
Not that I’m aware of. If so it must be very distantly. I do like a good SciFi story though – when I ever get time. Had to look up ‘anthologist’! An editor as well as writer is she?
Family history has become an obsession of sorts too, but saving the planet and civilisation with it has become the major pre-occupation; Tamino has, IMO done more than most on that and this particular effort under discussion may be a very significant contribution. The oft-used denier line about house of cards comes to mind. And I note a tendency for their accusations about climate science to be reflections of their own true nature.
Tamino: nice work, and nice presentation. A suggestion for a bit of additional analysis, and a particular graphical display (for the web). In the third graph provided above, the larger error bars on the right are attributable to the short time period. Presumably, if a trend was done for the first five years only we’d see similar error bars on the left – i.e., if the analysis started at the end of 1983 when there was only five years of record available (1979-1983). The entire analysis could then be repeated each following year, with an additional year’s data, until you reached the full record displayed here. You’d end up with 23 analyses, each going from 1979 to “end of record”, with “end-of-record” increasing from 1983 to 2010.
The graphical display I’m thinking of would be an animated GIF, where you see the analysis develop from a single five-year period to the full period. Visually, you’d then see the trends on the left converge to relatively steady values with small error bars – i.e., you’d see how their values improve as the record gets longer. This would help emphasize why the current numbers on the right are not necessarily reliable (due to their short period), and how we’d expect them to change as more data is collected.
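The proposed end-of-record analysis is easy to prototype. The sketch below uses a synthetic annual series and plain OLS standard errors (the paper models ARMA noise, which would widen these bars); it shows how the trend estimate tightens as the record lengthens from 1983 to 2010:

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1979, 2011)
# Toy annual anomalies: a 0.017 C/yr trend plus weather noise
temps = 0.017 * (years - 1979) + 0.1 * rng.standard_normal(years.size)

def trend_and_se(x, y):
    """OLS slope and its standard error. (No autocorrelation correction;
    correcting for ARMA noise, as the paper does, widens the error bars.)"""
    xc = x - x.mean()
    slope = (xc @ (y - y.mean())) / (xc @ xc)
    resid = y - y.mean() - slope * xc
    se = np.sqrt((resid @ resid) / (x.size - 2) / (xc @ xc))
    return slope, se

# One analysis per "end of record", from 1983 out to 2010
for end in range(1983, 2011):
    m = years <= end
    slope, se = trend_and_se(years[m].astype(float), temps[m])
    print(end, round(slope, 4), "+/-", round(2 * se, 4))
```

Each printed line would be one frame of the suggested animation; the early frames have error bars far wider than the trend itself, just as the comment anticipates.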
troyca is discussing your paper, WRT Spencer and Braswell 2011.
tamino, I’m writing a post on this paper for Skeptical Science. Any chance you could email me the data for Figures 7 and 8 in the paper?
[Response: Of course. I’m also preparing to post all the data and code (in R) used for the computations.]
[Response 2: The latest post has the code and data (including adjusted temperature series) downloadable. I hope this meets your needs.]
Has any analysis been done of the hindcast skill of the model pre-1980? There is no mention of this in the paper, and if the skill was good it would be an excellent validation. TSI and ENSO are reasonably well documented, although aerosols are more problematic I think, and indeed still debated for 1980–2010.
[Response: The model assumes a linear growth due to global warming throughout. For the period studied this is certainly true (as tests for nonlinearity showed). But for earlier data it is demonstrably false, so the model will fail to reproduce the global warming signal. That’s why Lean & Rind, who did their regression over a much longer time span, used climate forcing rather than linear trend to model the global warming trend.]
The link below fits over 1980–2010 (the assumed linear section) but using sunspots, not TSI (TSI is not available back to 1950), and then displays the residual chart over the period 1950–2010. If the assumptions are valid for 1980–2010, then there is no reason why the coefficients can’t be used back to 1950.
The code can be duplicated by downloading the code/data associated with the original paper, and making the additions and one change described to the right of the chart. I have done it only for one dataset, but it can be done for the other two datasets that extend back to 1950.
Can I have the raw data and source code? As a software developer I would love to do a numerical analysis and duplicate the results myself. I am fascinated by the “D * time” term and would love to see the source code showing how this has been calibrated. Science is only meaningful if other people can reproduce the results, and have access to all the processes involved in the original research. Otherwise the research is quite meaningless.
[Response: Is this a joke?]
The Dunning-Kruger is strong in this one…
Daniel, without knowing who I am, my qualifications, or my experience, it does you no great service to make such a personal attack. Once again, the science suffers, and personal attacks reign.
[Response: Perhaps he interpreted your statement that “Otherwise the research is quite meaningless” as unjustified, ignorant, flippant, and downright insulting. As did I. Your indulgence in personal attacks makes your protest hypocritical.]
Chris, when you place a comment of the nature you did, without knowing who Tamino is, his qualifications, or his experience, it does you no great service.
Your remark that without code the research is “quite meaningless” is quite telling, in that it reveals you do not understand that one does not need “code” to be able to confirm results. A description of methods, data sets used and procedures followed is quite sufficient.
Thus, I interpreted your remarks as Tamino did: unjustified, ignorant, flippant, and downright insulting.
But feel free to interpret my remarks as you see fit: no code or data should be necessary.
Does the ENSO include the PDO, and if not, how does this change the analysis? Can or should it be added? If the PDO was mostly in one phase during that time, do you think that changes the analysis?
dhogaza essentially invited me back. Thank you!
I now also have the code and data, and while it may take me some time (this is not my day job, I am married and have other interests), I will be able to run and post the results from 1950. I will need to use sunspots not TSI, as TSI doesn’t start until 1976, but it is all in the data file provided and the changes to the source code will be minimal. RSS and UAH obviously won’t be included. I have no idea what the results will look like, and promise I will post the results, regardless of what the graphs look like. I might even add a CO2 correction as per the IPCC if I get the chance, which should be even more interesting.
[Response: The global warming trend since 1950 is not even approximately linear. I recommend against using this methodology in that case. See the papers by Lean & Rind, as has been mentioned before.
Frankly, I don’t trust your ability to be objective, I think you have a very strong bias, and whatever result you get I expect you’ll misinterpret it. I suggest that maybe this blog isn’t the right place for you.]
I intend to only fit the data from 1980, as done in the paper, which assumed a linear trend since then. That will give the coefficients for sunspots, MEI and aerosols. Then I will display results for the whole period from 1950, showing the temperature corrected for sunspots, MEI and aerosols, exactly as per the paper. The only difference is to use sunspots (at least a proxy for TSI), as TSI is unavailable, and to extrapolate the results back to 1950. If the coefficients for the parameters are correct, the residual may not be linear before 1980, but it should still be qualitatively correct. It will be instructive, at least for me, and when I post, I will do no more or less than present the charts. I won’t comment one way or the other. If I have bias (and I suppose we all do), you should be prepared to at least try to convince me otherwise, and not just sledge me.
I attended a university ranked in the top 1% of universities in the world, and received distinctions, not just for every subject, but every examination. I studied general science, and I studied both CO2 induced global warming and ozone depletion at university.
I am neither uneducated nor stupid. This is the first blog that I have participated in because it is my field, it is something I completely understand, and something I can reproduce and develop on my own computers.
I have been surprised at the assumption that I am uneducated and stupid, simply because I seek more understanding. How many other people on the blog have bothered to run the scripts and look at the raw data?
[Response: You came here demanding that I release the data and code, saying outright that otherwise the research is meaningless and you would assume it’s a joke. Screw you. You didn’t even bother to look around the site enough to notice that it was in the most recent post. That clearly indicates your intention to find fault whether it exists or not. Your “innocent” guise rings false.
It doesn’t flatter you that you failed to note the data are referenced in the paper, and all of it is available on the internet. I didn’t have to post it for you; you could have got it yourself.
And it’s rather telling that you didn’t attempt to reproduce the result YOURSELF. The methodology is described in the paper (as well as the data) in more than sufficient detail for you to write your own goddamn computer program.
You got the cold shoulder because you came in demanding something that had been posted days before, blustering about how useless this research was without it, in grossly insulting and extremely arrogant fashion. Now you claim that you were insulted “simply because I seek more understanding.” I don’t believe you.]
I do not think people are assuming you are stupid, but your comments here betray a deep ignorance of how science is actually done. You don’t do it by bughunting in somebody else’s code, for instance. You independently replicate the analysis based on their description and available data. This is basic.
Second, if you come in here spouting denialist memes, (e.g. it’s all the PDO), then it isn’t too surprising that some people will think you are a denialist.
You say that you studied climate, but what you are saying doesn’t support your claim.
In addition to Tamino’s response, Chris, you should probably understand (since this is your first blog) that the opinion-making atmosphere that surrounds climate science is full of anti-scientific crackpots who, by virtue of a public that has little time, training, energy, and/or motivation to understand the science, are given equal (or better) footing with working, published scientists and statisticians. These crackpots often display the same type of behavior that you’ve displayed. It’s like an online version of Asperger’s. No one here can tell anything about you and your abilities and intentions except through your words, and your words suggest that you’re an asshole. I think I speak for everyone here (except those who are bored and looking forward to some entertainment) in hoping that you’re not.
Earlier you described yourself as a “software developer”, not a climate scientist. Hmmmm.
I am really sorry that I offended you.
That was never my intention.
I will post the results once I have them, and I will let you interpret them for me.
You offer sound advice.
Actually, you should be saying “I’m really sorry that I’ve been caught out lying, and I’ll understand if you never accept another word I say as being honestly stated”.
No, it is not a joke. Why would you even think that, let alone post it?
Real scientists welcome real scrutiny. Please make the data and source code available for me and others to analyse. Otherwise we must assume the original “research” is a joke.
[Response: Why would I think it’s a joke? Because I already posted the data and code, two days ago. Your willingness to assume the research is a joke, argues strongly against your ability to be objective.]
I do see the comment regarding the code and data, but unfortunately I do not see the post and or link to the data/code. Perhaps you can help. It is easier for me to understand source code, than to second guess the text.
The variables are not orthogonal, and I am interested in seeing how varying the coefficients varies the fit, especially as the aerosol data is problematic, and many people have used the “same” datasets to “prove” that the majority of the warming has been natural.
Prior to 2002 you could not reject the null hypothesis that warming was occurring, but since 2002, 4 out of 5 datasets do not reject the null hypothesis of no warming, and I would like to investigate this more as well.
In 2011, I strongly feel that data and source code should be made available with the paper when the conclusions depend on software runs. There is no excuse to do otherwise, and when it is done, one’s credibility increases dramatically. That you have done so (especially if you help me with the link) is to be commended.
I certainly do not mean to offend, and apologise if you felt offended.
The truth is out there, and personally I don’t mind being wrong, if it leads to better answers.
I think one could perhaps be forgiven for questioning your competence when you cannot even be bothered to navigate to the front page of the blog and look at the first post entitled “Data and Source Code…”.
It is also quite clear that you haven’t bothered to learn even the basics of climate science. You will note that the “O” in PDO stands for oscillation. Oscillations do not give consistently rising trends in a system with real coefficients.
Finally, as to your contention of “cooling since 2002” or 2004 or whatever the denialist date of choice is, I would commend to you that it is not uncommon to find in the global temperature data periods where warming fell below the norm for up to a decade. Look at the trend from 1977 to 1987 (from HadCRU) then from 1987-1997. Both are near zero. Now look at the trend from 1977-1997, and voila, the trend is significantly positive. Short-term trends are meaningless in a system with lots of short-term noise and a long-term positive forcing.
I might suggest that rather than starting with “the code” you start by learning the science.
This will be my last post here as there is little interest here for me, but I would like to make some final comments.
1. The PDO cycle runs for 60 years or so. For most of the period analysed, the PDO was warming, so any analysis that attempts to attribute warming to natural and/or manmade causes that runs over just the last 30 years is unfortunately useless.
2. The paper is supposed to put a nail in the coffin of “no warming in the 21st century”, but statistically all that can be said is that since 2002 the data is consistent with no warming. End of story on that one – the charts above show that quite clearly.
3. Why not add the well-known log(P/P0) relationship for CO2, instead of the time-based coefficient? Do we see a perfectly flat line, i.e. does it look like these four factors can then account for all the variation?
4. The PDO must be included in any such analysis, but as the author points out the raw data is just not there, and so results are unfortunately, just not very informative.
5. There is very little climate science in the paper – it is just basic curve fitting – best understood by analysing the raw data and the algorithms used to fit it.
The paper raises more questions than it answers, and in my humble opinion doesn’t shed any new light on the question of man-made vs. natural warming, and the conclusion reached in the paper is not (again in my humble opinion) supported by the charts in the paper – specifically, the rate (°C/decade) vs. year chart, when the error bars are considered.
[Response: I’m so surprised that you don’t find my work illuminating. Not.]
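For what it’s worth, the “well known log(P/P0) relationship” in point 3 above is usually taken as the simplified forcing expression ΔF ≈ 5.35 ln(C/C0) W/m² from Myhre et al. (1998). A minimal sketch of that term (the constants are the standard published ones, not anything from this paper):

```python
import numpy as np

def co2_forcing(c, c0=280.0):
    """Simplified CO2 radiative forcing, dF = 5.35 * ln(C/C0) in W/m^2
    (Myhre et al. 1998). c0 = 280 ppm is the usual preindustrial value."""
    return 5.35 * np.log(c / c0)

# A doubling of CO2 gives about 3.7 W/m^2 of forcing. In a regression one
# could try a forcing series like this in place of the linear time trend --
# essentially the Lean & Rind approach with net forcing, mentioned earlier.
print(co2_forcing(560.0))
```

Over a span as short as 1979–2010, though, this log-forcing series is very nearly linear in time, which is why the paper's linear trend term is a good approximation there and why the two predictors would be badly collinear if both were included.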
Any other straws you want to grab? Here’s how science works:
1)Arrhenius predicts burning fossil fuels will raise global temperatures.
2)Scientists debate back and forth until they understand the system well enough to see Arrhenius was right.
3)They note that aerosols are keeping the warming from occurring, resulting in some debate over which forcing (CO2 or aerosols) dominates.
4)The clean air act removes most of the aerosols, along with any doubt about the significance of CO2.
5)We note periods of seemingly lower warming, one of which includes cherrypicked dates from this century (there is also 1977-1987 and 1987-1997, even though 1977-1997 gives significant warming).
6)Tamino and others identify a set of forcings they claim explains the lower rates.
7)A real scientist might identify other forcings they think ought to be included as well, so they produce an independent analysis and either confirm or discredit tamino’s analysis.
Note that this does not mean some assclam seeing how many forcings he can bring to bear to minimize the role of CO2–that’s an established fact.
Ray … this is a declaration that CC is not interested in learning.
In some sense that’s a victory … though not a useful one in that he’ll keep spewing idiocy elsewhere, rather than learn. But he didn’t come here to learn, but rather to insist that science is a fraud, while software development gives one the only True Insight necessary (“code not available? science is false!” “code’s available, but I can’t find the obvious link on my own? FRAUD!”)
WUWT were able to help with the code/data link. Stupidly, I thought the data and code would be on the same page as this, and didn’t look elsewhere. My bad. I think it is a real credit to the authors that they have made this available, and I stand by my original comments that it is easier for me to analyse the source code and raw data than to analyse the text. That may not be the case for you. Fair enough. All of the essential information for me is contained in the code and data.
Any analysis such as this must also cover all the 20th century, as the first half has been attributed to mostly natural causes and the last half mostly man made, and so I again stand by my comments that a 30 year analysis period is too short to be of any real value. If you feel differently, again fair enough.
I have understood the paper, and reviewed the code, and it does not add to my understanding of the climate over the 20th century. It uses too few variables, over too short a time, and the lag model may be too simple. For me, it leaves too many unanswered questions. That is my humble opinion; I don’t mind if you disagree with me. That is what these blogs should be here for – discussion, not self-congratulation and worship.
I look forward to the authors expanding the research over the whole 20th century, and including some more parameters. In my humble opinion, this paper should be just the beginning, and not the end.
Chris Carthew is like a turducken. You take a red herring, you stick it inside a straw man and then you insert the whole thing in a douchebag by the name of Chris Carthew.
Seriously, when I edit out the red herrings, straw men and general douchebaggery, I get the empty post.
Let’s try this one more time, as if Chris Carthew were not a lying troll. The whole point of the exercise was to find what recent temps look like with the obvious noise removed. That is it.
There is no need to remove the noise earlier because we have enough data to find the trend with it in there. So Chris has in several thousand words, not made a single useful comment.
“and it does not add to my understanding of the climate over the 20th century.” Not surprising, as $-\infty + c = -\infty$ for any real $c$.
[Response: F&R2011 sticks to the recent time period because the trend is at least approximately linear, and because we wanted to include satellite data sets which don’t start until recently (that’s why the analysis starts with 1979). Those who simply must know what the application of similar analysis gives for prior data should read the papers by Lean & Rind (which are referenced in F&R2011).]
I don’t think this research covers the type of aerosols you are referring to. Can you confirm that?
[Response: Didn’t one of your previous comments say it would be your last? Hope springs eternal.]
There is this minor point that two of the temperature datasets don’t start until 1979.
Your complaint seems to be that they answered a question that’s not the question you’d like to see answered, therefore it’s not useful.
That’s not of any real value.
There are many claims out there that global warming’s slowing, has stopped, etc., referencing the satellite data era. Asking “is this true?” is a perfectly legitimate question, and answering it is of real value. Maybe it’s not of real value to *you* (though I’m curious as to why not), but in a broader sense, it certainly is.
In science, this is nearly tautological …
@ Chris Carthew
If this is indeed your belief then why are you a habitué of WUWT where all of those things are commonplace…along with character assassination and worse?
Indeed, with every comment you reveal the pressing need to actually learn more about the science of climate change. Else the analysis you are attempting will lack an understanding of the physics and physical mechanisms underlying the oceanic cycles whose acronyms you cite (hint: “O” is for oscillation and thus indicates a mechanism essentially neutral in its effects), and will resemble curve-fitting rather than analysis yielding understanding.
Or recriminate more. Your call.
That’s not really the point. It’s not adding to our understanding of climate science, it’s giving a clear rebuttal to those who argue that the GHG-forced warming trend doesn’t match what climate science leads us to expect. That’s useful.
It doesn’t claim to try to account for all sources of natural variation, but the results make it pretty clear that those that are left aren’t imposing a lot of noise on the underlying signal …
Take that up with Tamino …
About what? There’s only one question that matters, you know … is the range of climate sensitivity to a doubling of CO2 put forward by mainstream climate science likely to be right, or not? While there’s ample evidence that this is true, this work rebuts the claim that the last 30 years of data shows that it’s not.
Not that it’s not been rebutted in the past, or even seriously considered within the field, to be honest …
Chris … Here’s Lean and Rind 2008
They just use CRU, so can go back further in time (i.e. the full length of that dataset).
F&R 2011 explores what the trend looks like with three major sources of noise removed for the five major temperature datasets, two of which are derived from satellite data which, as has been patiently explained, only goes back to 1979. Showing that they’re in good agreement when these three sources of noise are removed is in itself interesting.
I’d say that to an objective eye, Lean & Rind 2008 and F&R 2011 complement each other nicely …
Chris Carthew, Couldn’t stick the flounce, huh? You have not understood even the purpose of F&R2011. The authors show that you can understand the time series with relatively few forcings–that is, a simple model has adequate explanatory power. Ever hear of Occam’s razor?
Simple models usually have greater predictive power–and certainly they tend to be more falsifiable than complicated models. Might I suggest learning some of the science before wading into the code?
Were you really unable to find this post?
I’m sorry but I find this beyond belief. How much else that you have told us should I also consider extremely implausible?
hint: “O” is for oscillation and thus indicates a mechanism essentially neutral in its effects
Perfectly true when you analyse over a whole oscillation. If the analysis is over 30 years, but the oscillation is 60 years, there is the potential for bias, and this should at least be discussed.
I have had a quick look at the paper. Do you know why they choose ENSO over PDO?
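The window-bias worry above can at least be quantified for an idealized case, with no claim made here about the real PDO’s amplitude: fit an OLS trend to exactly half of a pure 60-year sinusoid, then to the full cycle.

```python
import numpy as np

# A pure 60-year oscillation: unit amplitude, zero long-term trend.
t = np.arange(0, 60, 1 / 12)
osc = -np.cos(2 * np.pi * t / 60)   # rises for 30 yr, falls for 30 yr

def ols_trend(t0, t1):
    """OLS slope (per year) over the window [t0, t1)."""
    m = (t >= t0) & (t < t1)
    return np.polyfit(t[m], osc[m], 1)[0]

print(f"rising half (yr 0-30): {ols_trend(0, 30):+.4f} per yr")
print(f"full cycle  (yr 0-60): {ols_trend(0, 60):+.4f} per yr")
```

So a 30-year window spanning the rising half of a 60-year cycle picks up a spurious trend of roughly 0.08 per year per unit of oscillation amplitude, while the full-cycle trend vanishes. Whether that matters for F&R 2011 therefore comes down to how big the PDO’s effect on global mean temperature actually is, which is the question addressed elsewhere in the thread.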
Chris Carthew, in case you are still wondering why you get hammered from all sides on this site. If you wanted to check FR2011 as a scientist you would first of all read the paper, acquaint yourself with the content and necessary background. Then you’d stop and think about it. If you want to replicate the result, you’d get the data, which is indicated in the paper and freely available, and write your own code. You’d check what influence reasonable choices in your analysis can have, and what alternative data sets may make sense. That’s what a scientist would do.
Instead you could demand a copy of the code and the data from the author, however this limits your checking to see whether or not the authors made some dumb mistake in their code. No idea how you want to do that, w/o reading the paper and w/o understanding what the code is supposed to do in the first place. The code could be intended to integrate a function and differentiate it instead. You wouldn’t catch that from just looking at the code, as you don’t know what the code is supposed to do. So this way you are not testing the science, but whether the code is error free. Is THAT really what you want to do? I find it telling that you demanded code/data on the 17th, but only on the 20th you took “a look at the paper”. And that is part of the reason why you catch that much flak here … deservedly so.
This is answered in a couple of other places on the thread, just search for “PDO” …
Chris Carthew reveals an inability to do even the most basic research – can’t find the data or codes without someone pointing at them for him.
Surely he’s embarrassed himself enough here that he should take his numerical analysis some place else where he can embarrass himself some more without having his nose rubbed in it.
So, roughly 0.16 deg.C/decade. Does anyone have a model that was built without using any data from this period that could be compared to this finding? That would seem to be a reasonable test of the model (at least that’s how we do it in automatic speech recognition research).
Although it makes me stupid for having even read it… see the following:
Apparently “Frank Lansner” takes issue with FR2011…
*smacks head off wall*
Ugh. Every visit to WFTWT is an exercise in mental anguish. I’m somewhat surprised it hasn’t sucked itself into a black hole of its own moronitude.
The methods used in reaching this conclusion have been criticised over at the Watts Up With That website. Care to comment?
[Response: WUWT has criticized it? I’m shocked.]
There are two significant points in the WUWT piece. The first is that there are more variables besides the ones used by the paper. They have a point about AMO and possibly PDO (although this correlates somewhat with the Nino 3.4 index). I am not sure about aerosols – I thought the paper included aerosols in a combined volcanic/aerosol index. Is this so, tamino? The second is a demonstration of what the adjustments would do to the unadjusted temperature series in the period prior to 1979, which would be interesting to see, especially as the referenced paper which does try to do this accounts for 76% of the variability. There is lots of crud in the WUWT piece as well, but then that’s web sites for you! Some of the comments here are less than scientific!
My take on Lansner was a lot different.
I think Lansner’s critique of Foster & Rahmstorf 2011 is a classic. If you’ve not the time or stomach for the reading, here’s what it says.
Lansner dislikes the use of MEI to represent ENSO & TSI to represent solar variation, instead plumping for SOI & SSN respectively. Strangely, Lansner then duplicates the F&R 2011 results graphing the underlying temperature trend, this despite his solar having twice the effect & his ENSO nigh-on zero effect.
However, you cannot debunk work by agreeing with it, so Lansner raises some objections. The impact of ENSO (that had no effect in his analysis) was wrong and invalidates the F&R 2011 study. And where was the correction for the all-important PDO, which is the cause of the 1979-2000 warming? Strangely, the warming Lansner had graphed 2000-10 earlier disappears as Lansner fits a PDO-generated curve slap onto the underlying temperature trend (now missing the ENSO correction that had no effect), in the process demonstrating that the PDO is driven by SSN!
And how can F&R 2011 ignore anthropogenic aerosols (a man-made forcing), coz they’re very significant to temperature. And the AMO was ignored even though that caused elevated temperatures 2000-10! And what about before 1979 (before the TSI data began), F&R 2011 totally ignored that (while Lansner stops at 1950 where he runs out of data).
So, Lansner concludes boldly, The idea of CO2 induced warming over the last 30 years is “flat wrong”. And I think he might also have found a cure for cancer.
Undoubtedly involving the smoking of three packs of cigarettes a day …
Great, maybe he can start looking for a cure for stupid–plenty of guinea pigs to try it on, including him.
Coincidentally (and I think I’ve mentioned this before), my dad, before he died of lung cancer a few years ago, swore that smoking cigarettes killed cancer. Smart guy until it came to his addictions and Truths–then his brain was oatmeal.
From Tamino’s OP:
“These are compared to three factors which are known to affect climate: the el Nino southern oscillation, atmospheric aerosols (mostly from volcanic eruptions), and variations in the output of the sun.”
‘WUWT is definitive. Reality is frequently inaccurate’
(I suspect that the Earth’s orbit has now developed a slight eccentricity due to Douglas Adams spinning in his grave)
Smith et al 2011 find that the decline in global sulfate emissions reverses after about 2000 due to increased international shipping and coal burning in China, although they note that uncertainties are high.
Click to access acp-11-1101-2011.pdf
An interesting exercise (i.e. “someone else do it!”) would be to collate the forcings that went into the various AR4 models and compare them to actual/estimated forcings since those experiments were “frozen.” For example, methane was flat until 2007 as tamino has pointed out, CFCs are, of course, way down, sulfates are on the rise again, and the sun has been lazy. How close are the expectations to reality?
One interesting thing about this analysis is that you should now be able to estimate at what MEI value UAH temperature would hit or exceed its 1998 record. A long way lower would be my guess. It would be a fair guess that an El Nino is likely to develop some time in 2012, so results should be interesting.
I’m very happy to see your work on the global warming signal in the sphere of published material. (FR2011 sounds so much fancier than ‘this guy’s blog’ ;) )
A great big thank you goes to you and Rahmstorf. This is a study laypeople (like politicians) can understand. People like you two change the world for the better.
There is a reason why Tamino’s co-author on the paper is a climate scientist. Without a basis in physics, it’s difficult to determine what most likely constitutes climate signal, and what is noise.
Virtually all of what appears on sites like WUWT amounts to little more than what Tamino refers to as ‘mathturbation’. It’s just arbitrary curve fitting, with no basis in the underlying physics. Anything you can do to (seemingly!) make the anthropogenic contribution to the recent rise in temperatures disappear is met with uncritical praise there.
So, you have Tamino’s code and data. But all you can possibly determine from that, if you’re a competent statistician, is whether the data were handled correctly. Beyond that, what could you possibly contribute to the picture that is useful?
Tamino : I would also like to express my thanks for this work, and all the work you do on your blog and elsewhere. I also greatly appreciate your style when responding to the likes of Carthew, leading him to declare a mote in your eye while being unable to observe (apparently) the forest-load of beams on display at WUWT.
For you, I have respect. For Carthew I don’t. There, I’ve said it.
Chris Carthew, Care to weigh in on how an “oscillation” produces a warming trend unprecedented in at least 20 times its oscillation period? And when you are done there, maybe you’d care to tell us how it causes simultaneous warming of the troposphere and cooling of the stratosphere. Neat trick that.
Do you have a link that explains what drives the PDO, and how we can predict it into the future, say over the next 60 years?
Once I know what drives the PDO, and other ocean currents, I will be better placed to comment.
CO2 shows a yearly oscillation, but very few (certainly not me) would deny the co2 levels have been increasing.
You are assuming that if we don’t know everything, we know nothing. That is not how science works. In science, you start with the causes/forcings that you know are significant. If these forcings explain the vast majority of the variability, that’s pretty good evidence that the contributions of other factors are minor. Now of course, this must be confirmed by verifying the predictions of the model, but the simpler the model, the more likely it is to yield good predictions.
You are entreated by David B. Benson below to “change your writing style for this form of communication.” I would add that you be mindful of the message you wish to convey to the audience here.
CO2 oscillates annually and the sun rises daily. Proclaiming for no apparent reason your certain agreement that “the co2 levels have been increasing” does not give the slightest reassurance, because you would be some sort of total idiot not to agree. Thus making the statement raises questions rather than answering anything.
Likewise, too much interest in the PDO is usually an unhealthy sign of skepticism as such folk commonly employ it to support the nonsense they spout.
If you add this strange line in messages of yours to the level of haplessness/naivety you also seem to display, I would predict a rough time for you commenting on this website.
As for predicting PDO, the links below are from an obscure source called Wikipedia.
When we don’t know everything, we should take that into our decision-making process. The science can go on, but the politics should take a break.
CO2 oscillates, but goes up, so PDO can also oscillate and go up (or down). For PDO I don’t think we have enough data to decide that yet.
[Response: Do you even know what the PDO is? How it’s defined? How it’s calculated? I’d say clearly not, or you wouldn’t say things like it “can also oscillate and go up (or down).”
And that’s one of the problems. You presume to understand things you really don’t. Evidently you have a great deal to learn about climate science and about ocean oscillations before you can even get in the game. It doesn’t matter how smart you are or how good you are as a software developer — you’re a rank beginner in climate science so please don’t presume to tell us what can and can’t be happening. And don’t even think about telling the real experts how they might be wrong.
Another problem is that you clearly don’t know how science really works. You came here demanding that I release my code and data, claiming that “Science is only meaningful if other people can reproduce the results.” You don’t even know what reproducing results means! It doesn’t mean running my program and getting the same numbers I got. It means repeating the study independently, with your own program, to discover whether the method works.
Furthermore, your original comment was — there’s no polite way to put it. How come you demand the data when all the data are linked to and referenced in the paper? Did you even read it? Did you not notice the links to data in the post itself? It’s all out there, I even provided links to it, yet you demand I release it? As for the code, I released that two days before your “request.” But if you read the paper then you’ll find the method described sufficiently to write your own program. Or do you not have the mathematical savvy to do that? If not, then how dare you tell me that my result is “quite meaningless”?
The hubris you have shown, the staunch refusal to admit (to yourself) that you are simply not ready to criticize modern climate science, is appalling.
If you really want to learn about this stuff, step one is to admit that you are nowhere near knowledgeable enough yet. Step two is to read The Discovery of Global Warming by Spencer Weart. Step three is to find a decent book about climate science and digest that. Learn first, critique afterwards, cause right now you’ve got things ass-backwards. Otherwise you’ll waste your (and our) time with delusions of adequacy.]
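For readers who do want to reproduce the result independently with their own program, the method as described in the paper (multiple regression of temperature on a linear trend plus lagged ENSO, volcanic, and solar factors) can be sketched in a few lines. Everything below is synthetic: invented coefficients, a single fixed lag, and random series standing in for the real MEI, AOD, and TSI data linked from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the real inputs; all coefficients and the
# lag are invented for illustration, not the published values.
n = 384                                 # 32 years of monthly data
t = np.arange(n) / 12.0                 # time in years
mei = rng.normal(0.0, 1.0, n)           # stand-in for MEI
aod = np.abs(rng.normal(0.0, 0.02, n))  # stand-in for volcanic AOD
tsi = rng.normal(0.0, 0.5, n)           # stand-in for TSI anomaly
LAG = 4                                 # months of delayed response

def lagged(x, k):
    """Shift a series so its value acts k months later (pad the start)."""
    return np.concatenate([np.full(k, x[0]), x[:-k]]) if k else x

# "True" temperature: trend + delayed natural factors + weather noise.
temp = (0.017 * t + 0.08 * lagged(mei, LAG) - 2.0 * lagged(aod, LAG)
        + 0.10 * lagged(tsi, LAG) + rng.normal(0.0, 0.05, n))

# Multiple regression: intercept, linear trend, and the lagged factors.
# (The paper scans over candidate lags; here the lag is taken as known.)
X = np.column_stack([np.ones(n), t, lagged(mei, LAG),
                     lagged(aod, LAG), lagged(tsi, LAG)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(f"recovered trend: {coef[1]:.4f} C/yr (true value 0.0170)")

# "Adjusted" temperature: subtract the estimated natural contributions.
adjusted = temp - X[:, 2:] @ coef[2:]
```

The recovered trend matches the one built into the synthetic data, and the adjusted series scatters much less about the trend than the raw one, which is the whole point of the exercise. Doing this with the real data sets linked from the paper, with the lags estimated rather than assumed, is what independent replication would look like.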
Tamino, either Chris Carthew is what you have shown him to be (a rank amateur/beginner) or he is here to simply be an attention troll.
I suspect the latter.
“When we don’t know everything, we should take that into our decision making process. The science can go on, but the politics should take a break.”
Thank goodness, I guess I can “take a break” from nagging my brother to quit smoking, because doctors don’t know everything about the risks of him getting a heart attack or lung cancer.
Thanks a ton Chris, you’ve taken a load off!
Thank you for demonstrating the full depth of your immense wisdom. It is exceedingly encouraging to learn that the “PDO can also oscillate”. And I hope you are making good progress towards knowing “what drives the PDO and other ocean currents”. These days even disentangling the meaning of acronyms is beyond the abilities of some and being “better placed to comment” would undeniably help you greatly.
Do you have a link that specifically describes what causes the PDO, and so consequently how we can predict it over the next few decades at least?
There does seem to be some correlation of PDO with temperature records, so it would be great to eliminate that (or whatever actually drives the PDO) as a potential contributor over the time period you analyse.
Just because it is an oscillation doesn’t mean that over time it won’t have a net positive (or negative) influence. To claim that would imply CO2 levels must also be static.
Can you provide the link to what causes the PDO and how we can predict it over the next cycle (say 60 years), or doesn’t anyone actually know?
Your whole post makes no sense. First, in order for an oscillatory signal to result in a net positive trend, some of the couplings to the signal would have to be imaginary. Is that your contention?
Second, just because you have two signals that rise in synchrony says nothing about whether there is any causation or which way the causation goes. PDO produces a rather weak effect on temperature that is not global. It is not as well understood as ENSO, or even AMO, but that in no way detracts from attribution of the linear trend in global temperatures to CO2 because the effect of PDO is weak. If it were not, then we would have expected to see similar rapidly rising temperatures in the past.
All you are establishing, Chris, is that you don’t have the foggiest notion what you are talking about.
@ Chris Carthew
Seriously? You really must do better in trying to actually learn something about what you’re trying to analyze BEFORE the analyzing part.
Sigh. Here’s the NOAA page on it: http://www.ncdc.noaa.gov/teleconnections/pdo/
Now read this (all 3 levels): http://www.skepticalscience.com/Pacific-Decadal-Oscillation.htm
and this (includes the data, seeing as how ye’ll be asking for it soon): http://jisao.washington.edu/pdo/PDO.latest
Lastly, do not think Tamino to be unversed in the PDO: PDO: the Pacific Decadal Oscillation
Chris Carthew said on 26 December 2011, at 11:54 pm:
Inherent in the very definition, an oscillation will not have a net positive or negative effect. If the latter is occurring, it is occurring as a result of something other than the oscillation.
Your strawman about atmospheric CO2 concentration seems to be deliberately calculated to obfuscate the net consequence of an oscillation. Seriously, your ham-fisted paragraph reads “we know that CO2 levels are not static, therefore the PDO must be having a net positive (or negative) effect”.
Sonny Jim, if there is a net effect it is a consequence of the increase in CO2, and not of the PDO. You are conflating separate phenomena.
You are not speaking with honest intent. And/or with basic understanding, when it comes down to it…
I note you describe your education as “I attended a university ranked in the top 1% of universities in the world, and received distinctions, not just for every subject, but every examination. I studied general science, and I studied both CO2 induced global warming and ozone depletion at university.”
So armed as you are with a qualification in General Science (with distinction) from a top ranked university, perhaps you could explain succinctly what you meant by saying “Just because it (the PDO) is an oscillation doesn’t mean that over time it won’t have a net positive (or negative) influence. To do so, would imply CO2 levels must also be static.”
I’m sure the readers of this comment thread would be most interested in your answer.
Well, yes, Chris, we all know we shouldn’t fly on commercial airliners because “we don’t know everything”, and that the politics should’ve taken a break many decades ago because of that.
Yet, I still fly routinely on business.
And unless you refuse to step aboard a commercial airliner, you are a hypocrite. Pure and simple.
Boeing et al. don’t even *pretend* to “know everything” about reliable flight. They engineer to mitigate such ignorance.
Chris Carthew — You need to learn to change your writing style for this form of communication.
As for ENSO versus PDO, the former has a significant impact and the latter is too small to bother with as various statistical tests will show.
I do agree. My initial request was badly worded. In my defence, it was my first post, and I was greeted with:
[Response: Is this a joke?]
The Dunning-Kruger is strong in this one…
Which is not exactly a friendly welcome either.
Had tamino just said, “here is the link…” I think things would have been different. I don’t feel I am the only one that was a little tactless.
Nonetheless I have the code and data, and have posted a chart from 1950-2010 using sunspots not TSI. It does not change the essential characteristics from 1980 to 2010.
At the time, I didn’t know tamino was the author.
It is easy for me to play with the numbers now, and for that at least I appreciate tamino sharing with us.
You were greeted with exactly the kind of response you deserved.
As a trained climate scientist, clearly Chris knows this. In which case, his question is dumb.
Though if he’s really just a software developer, he might not know this.
I guess it all hinges on which of his two stories regarding his expertise, posted in this very thread, you choose to believe :)
So, I have a quick question, after reading through this blog a bit. Then there will be more as I try and recreate this bit of data processing, but first…
I’ve been hearing more about this Pacific Decadal Oscillation being a supposed cause for the recent warming (last 30 years, as in the paper) but when I look at that data, the PDO appears to be trending downward for the last 30 years, which if anything, should be a negative forcing.
Of course, that’s some Wikipedia data, so let me know if that’s errant in some way. Exactly why are people citing this as cause of the warming?
[Response: The real reason is that they desperately want an excuse to deny that man-made greenhouse gases are the cause of the warming. Seriously — that’s why they do it.]
No. 2: ENSO looks like it exhibits approximately the same behavior…
…And since both are essentially measures of Pacific temperature anomalies, does including the MEI data not already “solve for” the PDO also?
[Response: No, ENSO (el Nino Southern Oscillation, which we represented with MEI) and PDO are different “modes” of variability of Pacific climate patterns. And as you can see if you compare their data, they do indeed show different changes.
By the way, ENSO is not really a measure of Pacific temperature anomalies. It can be measured that way (the Nino3.4 index is a common way to do so), but it can also be represented entirely as a surface-pressure-difference index (the SOI), and the MEI actually represents it with a combination of six variables, including pressure, wind strength, temperature, and cloud fraction. At its heart, ENSO is a geographic *pattern* of climate rather than an absolute temperature index itself. So, ENSO isn’t really “temperature” in the usual sense — unlike AMO (Atlantic Multidecadal Oscillation), yet another phenomenon often purported to be responsible for modern warming, again by those who are desperate to deny that it’s due to man-made greenhouse gases.]
To the deniers, climate science is as simple as ABC –
Copy that. Thanks for the ENSO clarification. I’m a chemist, so it simply makes sense that adding a fairly potent IR absorber like CO2 to the atmosphere would increase temperatures, but the climatology and some of the data processing can still baffle me. And yes, so far with the research I’ve done, it seems that climate science is becoming the victim of possibly the biggest disinformation campaign in history. They almost had me at first, but those denying the anthropogenic causation use a lot of arguments that don’t stand up to much scrutiny. Anyway, I’m gonna give recreating your results a shot, just to enhance my understanding. Should be fun. ;) (Have I become a data junkie? :o)
As a chemist, just ask yourself this question: Would a legitimate scientific movement be able to accommodate people who deny that CO2 levels are increasing because of burning hydrocarbons?
You know, I think an unprecedented event has occurred. I’ve actually learned something from the efforts of Tony “Micro” Watts et al. The absurd claim that one must include PDO and all the other oscillatory phenomena to truly understand climate has a simple explanation. What all the “Fun-with-Fourier” mathturbators are doing is just approximating the rising temperature series of the past 40 years with a Fourier series! No need to assess how significantly each cycle affects temperature – just determine the Fourier coefficient for that frequency, add them all together and voila! You can approximate just about any finite series with a few critical terms, and all without resort to any of that messy physics stuff!
And there is plenty of literature finding frequencies of all magnitudes in some proxy or other, so as long as they don’t worry about physics, they can always come close if they use enough terms!
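The Fourier point is easy to demonstrate concretely: a handful of arbitrarily chosen long-period sinusoids will fit a plain rising line almost perfectly inside the observation window, then extrapolate absurdly outside it. A minimal sketch (the periods are invented, as is the 0.017-per-year ramp):

```python
import numpy as np

t = np.arange(0, 40, 1 / 12)          # 40 "years" of monthly data
ramp = 0.017 * t                      # a plain rising trend, no cycles

def fourier_design(times, periods):
    """Design matrix: intercept plus a sin/cos pair for each period."""
    cols = [np.ones_like(times)]
    for p in periods:
        cols += [np.sin(2 * np.pi * times / p),
                 np.cos(2 * np.pi * times / p)]
    return np.column_stack(cols)

periods = [80.0, 40.0, 20.0, 10.0]    # arbitrary "natural cycles"
X = fourier_design(t, periods)
coef, *_ = np.linalg.lstsq(X, ramp, rcond=None)

fit = X @ coef
r2 = 1 - np.var(ramp - fit) / np.var(ramp)
print(f"in-window R^2 = {r2:.5f}")    # near-perfect, zero physics

# The same "cycles" outside the window: every period divides 80 yr,
# so the fit is 80-yr periodic and must sink back to its start.
t2 = np.arange(40, 80, 1 / 12)
pred = fourier_design(t2, periods) @ coef
```

The in-window fit is essentially perfect without a shred of physics, and the extrapolation “predicts” an imminent downturn no matter what the data do next, because the chosen periods force the curve back to its starting value. That is approximation, not attribution.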
Of course what you say is true, but the same criticism can be levelled at this paper: “Presume a linear trend, and 3 variables with the right weightings should be enough to make it so.” I think the paper makes a set of reasonable assumptions but leaves out AMO and non-volcanic aerosols. There is a NASA data set which combines all aerosols, but this does not appear to have been used (is this right, Tamino?). AMO has also been excluded … not sure why. Depending on what coefficients were chosen, both would reduce the warming trend somewhat, since aerosols are down over the period in question and AMO was broadly rising over the period. I am sure we would still have a warming trend, though, even with both these two added in to the equation.
[Response: Why do critics keep asking me what aerosol data set was used? Did you not read the paper? Was the reference — or the link to the data — unclear?
As for AMO, unlike ENSO (or PDO for that matter) it IS temperature. Pure and simple, nothing more nothing less. Attributing temperature change to temperature change seems kinda stupid.]
Colin Aldridge, did you even bother to read the freakin paper? The forcings used here were used precisely because they are the most significant known short-term forcings. F&R2011 shows that you can account for the vast majority of the short-term variability with just those terms, which are already known to be significant contributors.
Are you really so dim you can’t see the difference?
The trend is there, with statistical significance, in the data. It’s not “presumed”. The paper in essence answers the question “how much of the noise around the trend disappears if we account for three major sources of natural variability?”
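For concreteness, here is a minimal sketch (my own, in Python/numpy) of the kind of lagged multiple regression being described. The "indices" below are synthetic stand-ins for MEI, AOD and TSI, not the real data, and the crude brute-force lag search is my simplification, not necessarily what F&R actually did.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 384                                   # monthly data, e.g. 1979-2010
t = np.arange(n) / 12.0                   # time in years

# Synthetic stand-ins for the exogenous factors (NOT the real indices)
mei = np.sin(2 * np.pi * t / 3.7) + 0.3 * rng.standard_normal(n)
aod = np.zeros(n)
aod[40:70] = np.exp(-np.arange(30) / 12.0)          # one decaying "eruption"
tsi = np.sin(2 * np.pi * t / 11.0)                  # an "11-year cycle"

# Synthetic "temperature": trend + scaled factors + noise
true_trend = 0.017                                  # deg C per year
temp = (true_trend * t + 0.10 * mei - 0.30 * aod + 0.05 * tsi
        + 0.08 * rng.standard_normal(n))

def fit_with_lags(temp, t, factors, max_lag):
    """Grid-search integer lags (months) for each factor, keep the best OLS fit."""
    m = temp.size
    best = None
    for lag_combo in np.ndindex(*([max_lag] * len(factors))):
        cols = [np.ones(m - max_lag), t[max_lag:]]
        for f, lag in zip(factors, lag_combo):
            cols.append(f[max_lag - lag : m - lag])
        X = np.column_stack(cols)
        y = temp[max_lag:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ coef) ** 2)
        if best is None or rss < best[0]:
            best = (rss, lag_combo, coef)
    return best

rss, lags, coef = fit_with_lags(temp, t, [mei, aod, tsi], max_lag=4)
trend_estimate = coef[1]                  # recovered warming rate, deg C / yr

# "Adjusted" series: subtract the estimated factor contributions
adjusted = temp[4:] - sum(c * f[4 - lag : n - lag]
                          for c, f, lag in zip(coef[2:], [mei, aod, tsi], lags))
```

On this synthetic data the regression recovers the injected trend and the signs of the factor coefficients; the actual paper handles lags, significance and autocorrelation far more carefully, but the logic is the same.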
It’s not just time series that you can fit a Fourier series to. Aged about 90, my (now late) grandfather realized he could recreate a well-known piece of art (my memory escapes me as to which, and I was just an undergrad at the time, so interested in other things) by overlaying different frequencies. So he toddled off, bought himself a Sinclair Spectrum (I think that was the make back then), and coded it up, in about a day, from scratch, learning the code as he went along.
Maybe there’s some money to be made in the curve fitting by fourier transform malarkey?
Yep. One of my music school classmates used to create analog synth patches, the sole purpose of which was to make odd animal shapes on an oscilloscope screen. Same principle, executed electro-acoustically–and conceived, AFAIK, entirely without any knowledge of von Neumann’s famous quip about 4 parameters to ‘fit an elephant.’
(BTW, the guy’s a successful sound engineer, last I heard.)
“….the sole purpose of which was to make odd animal shapes on an oscilloscope screen.” Maybe that’s the rhyme or reason for Stockhausen’s music.
An analysis of the music of Philip Glass would probably result in a repeating pattern like a centipede.
Maybe for Iannis Xenakis, who supposedly had geometrical shapes encoded deep in the structure of his works.
My comment about a linear trend and 3 variables was meant to be “what a sceptic would say”
1. Of course I read the paper, and Rind’s, and a whole load of other stuff including the AOD reference in the paper; hence my point about aerosols vs purely volcanic dust measures. The referenced graph is a volcanic dust measurement.
2. You might just as well say MEI is just temperature since it is in effect a proxy for ENSO.
[Response: Bullshit. ENSO ain’t temperature. It can be (and perhaps most often is) characterized by a pressure-difference only index (SOI) which ignores temperature altogether, and MEI itself is a combination of 6 variables including pressure, wind strength, and cloud cover. AMO is just temperature.]
3. Of course there is an underlying trend
4. AMO is a long-term sea surface temperature oscillation which was identified by Schlesinger and Ramankutty in 1994. Since the global warming signal is presumed to be linear in the paper, there should be no leakage from that signal into the AMO oscillation. Of course if the global warming is non-linear then there will be, as indeed there would be into MEI.
[Response: In the paper, the global warming signal is NOT presumed to be linear over the entire time span which was used to define AMO, which is why AMO as defined is contaminated by global warming. I’m hardly the first to say this, it’s in the published literature. And since ENSO (and MEI) are NOT temperature, no there won’t be leakage into those.]
5. Your comments are not exactly courteous!!
[Response: Your comments are not exactly smart.]
OK. Let’s assume AMO is “just a natural temperature oscillation” and ENSO/MEI is a set of variables that cause a natural temperature oscillation.
The point remains that both are natural oscillations which should be adjusted for when calculating the GW signal. MEI, TSI and AOD are the obvious three, but why not non-volcanic dust/pollutants and AMO too? I recognise that AMO has been rising, at least until recently, and that pollutants have very probably been declining, although opinion is divided, largely because of Chinese coal burning.
AMO and its detrended GW-trend component can be found, for example, in “Tracking the Atlantic Multidecadal Oscillation through the last 8,000 years” by Mads Faurschou Knudsen, Marit-Solveig Seidenkrantz, Bo Holm Jacobsen & Antoon Kuijpers, published in Nature.
My point is, why give skeptics unnecessary ammunition by leaving AMO and atmospheric pollutants out!
Colin, you are not thinking about this clearly. ENSO is a short-term phenomenon that has a well-characterized short-term effect on global temperatures and weather. Not so the AMO or PDO–the period over which we have data is affected by a variety of other influences, including CO2. We know that ENSO can dominate all other influences over a period of a year or so, while there is NO EVIDENCE that either PDO or AMO makes a significant contribution.
The goal in science is to explain as much of the variation as possible with the simplest model possible. This gives you the greatest predictive power going forward. Start throwing in a bunch of additional terms of questionable significance, and you may fit past data better, but you will generally not fare as well on predictions. What is so hard to understand about this?
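That trade-off is easy to demonstrate with a toy example (mine, in Python/numpy): fit the first two-thirds of a noisy linear series with a straight line and with a 9th-order polynomial, then score both on the held-out final third.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(60, dtype=float)
x = (t - 30.0) / 30.0                    # scaled time keeps the fit well-conditioned
y = 0.02 * t + 0.2 * rng.standard_normal(t.size)   # linear truth plus noise

train, test = slice(0, 40), slice(40, 60)   # fit on the past, score on the "future"

def poly_mse(degree):
    """Train/test mean squared error for a polynomial fit of the given degree."""
    coef = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coef, x)
    return (np.mean((y[train] - pred[train]) ** 2),
            np.mean((y[test] - pred[test]) ** 2))

train_mse_1, test_mse_1 = poly_mse(1)    # the simple, physically sensible model
train_mse_9, test_mse_9 = poly_mse(9)    # extra terms of questionable significance
```

The flexible model always wins on the training interval (adding terms can only shrink the residual) and, at least for data like this, loses badly on prediction.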
I had assumed that AOD was a measure of all aerosols, but that volcanic eruptions are the major cause of variation in this. Am I misunderstanding?
“… This study goes hand in hand with a journal paper by Markus Huber and Reto Knutti of ETH Zurich from last week’s edition of Nature Geoscience, which used modeling to tease out the relative contribution of natural climate variability from human drivers of climate. Their analysis found that it is “extremely likely” that 74 percent of the warming experienced since 1950 is anthropogenic in origin.
Two very interesting papers. Will they put the kibosh on refudiater predictions of imminent global cooling? ….”
Back on 18 December 2011, at 10:18 pm, Chris Carthew said:
This is an interesting strawman (amongst other things), because it reflects either Carthew’s ignorance of entry-level statistics, or his intent to mendaciously misuse statistics, or both. Contrary to his dismissive hand-waving, there is no “end of story on that one” – indeed, page 1 hasn’t even been completed yet…
Chris Carthew, here are some genuinely serious questions for you. Given the variability in the raw data, how many years of consistent underlying linear trend would be required for the signal to emerge with statistical significance from said variability (noise)*? How exactly do you calculate such a determination? What does the result say about your comment of no warming since 2002? What do “the charts above show… quite clearly” that proper statistical analysis does not?
Further, given the variability of the raw data, what minimum magnitude of underlying increase would be required for a statistically significant trend to be detected in the interval 2002 to date?
For bonus points, can you answer the same questions in relation to the FR2011 adjusted data?
Seriously, answer these questions, and in particular do so whilst simultaneously and validly defending your claim.
Only then might there be any basis for saying that the first chapter has been completed.
[* Hint: you might usefully spend some time checking on some of the statistical background to one of Phil Jones’ comments, and in particular to the concept of p values that relate to that comment.]
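For what it's worth, the arithmetic behind the first of those questions fits in a few lines. The sketch below is mine; the noise levels are round illustrative numbers, not anyone's published figures, and it assumes white monthly noise about a linear trend. Real temperature noise is autocorrelated, which only lengthens the required record.

```python
import numpy as np

def years_needed(trend_per_yr, noise_sd, months_per_yr=12, z=2.0):
    """Smallest record length (whole years) at which an OLS trend of the given
    size exceeds z standard errors, for white monthly noise of sd noise_sd."""
    for years in range(2, 100):
        n = years * months_per_yr
        t = np.arange(n) / months_per_yr
        se = noise_sd / np.sqrt(np.sum((t - t.mean()) ** 2))   # OLS slope s.e.
        if trend_per_yr / se >= z:
            return years
    return None

# Illustrative numbers: a ~0.017 C/yr trend; a noisy raw series vs a quieter
# adjusted series (the factor-of-three noise reduction is made up for effect)
yrs_raw = years_needed(0.017, 0.18)
yrs_adj = years_needed(0.017, 0.06)
```

In this idealised setting, cutting the noise level by three cuts the wait from about eight years to about four; autocorrelation pushes both numbers up, but the comparison is the point, and it shows why "no warming since 2002" is statistically empty.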
Ok, we’ve all had some fun at your expense. Now it is up to you. Do you really want to learn about Earth’s climate (not just AGW, but the theory that illuminates Earth’s climatic behavior over hundreds of millions of years, and just happens to predict that our fossil fuel emissions will warm the planet)?
Climate science is two centuries old. The concept of greenhouse gasses is nearly two centuries old, and the prediction that fossil fuel carbon would warm the planet is more than 115 years old. That current warming is predominantly due to our CO2 emissions is not in serious doubt. The amount by which a doubling of CO2 will warm the planet is not in serious doubt. The only controversy is manufactured by anti-science idjits and libertarian ideologues.
There are reasons why it takes a decade or more to become a publishing climate scientist. You can start finding out what they know about climate that you don’t by first reading Spencer Weart’s history. When you are done there, just ask for more references. Most of us started with little more expertise than you currently possess a few years ago.
Further hint for Chris about those questions
> Given the variability in the raw data, how many years of consistent
> underlying linear trend would be required for the signal to emerge
> with statistical significance from said variability (noise)*?
> How exactly do you calculate such a determination?
You should also know how to determine the variability — rather than being given it by someone else — for any particular set of data.
Those are not rhetorical questions; not trick questions; not general broad hypothetical questions.
Those are specific questions you can ask — and answer — given any particular set of raw data, or for any data you collect for yourself.
Answering those questions yields a specific answer using straightforward arithmetic.
Do you need a pointer to a basic explanation of how to do this? That’s available so you can do it for yourself and check how it works.
I note that Chris Carthew’s stream of unconsciousness dried up after he was asked to actually put some analysis behind his claims. Simple questions, to which he had no answers.
Isn’t it always the way?
> Isn’t it always the way?
If only… there’s a variety that just blusters on. You don’t want to tempt fate….
“. . . stream of unconsciousness. . .”
Very apt. And the purpose here is to raise consciousness, after all.
OK, I’ve been banging my head against this for a few months, and I’m stuck.
In your article ‘Fake Forcing’, you show that regressing forcing against temperature gives the wrong result, in particular requiring the overly sharp volcano contribution to be scaled down. Yet in F&R2011, you do exactly that (bar a time shift, which does nothing to spread and lower the volcanic peak). That is the initial origin of my confusion.
In particular in ‘Fake Forcing’ you show that introducing a response function to convert the forcing into a forced response corrects the problem. So I tried exactly that in your code (applying just the fast response exp(-t/2.0) to both TSI and AOD), and the volcano term does indeed increase by a factor of about 2. Since both the volcanoes occur during the declining portion of the solar cycle, the solar term decreases by a factor of 2.5 to compensate, and thus the solar decline since 2002 is also reduced, leaving comparatively slow warming over the last decade (which Kaufmann et al 2011 would otherwise account for with aerosols). The stats look similar.
‘Ah ha!’ I thought. ‘I’ve found a mistake in F&R2011, which when corrected brings it into line with Kaufmann’. But I was wrong. Your solar influence appears to be in line with Hansen’s response function calculation in his 2011 energy imbalance paper, and also with Lean and Rind 2008, which can’t be so easily biased by the volcanoes because the longer data frame contains volcanoes at all points of the solar cycle.
So I was wrong and you are right. But I still can’t explain it, or reconcile F&R2011 with ‘Fake Forcing’. Can you offer any suggestions?
[Response: In the “fake forcing” post temperature is regressed against total forcing, so there’s no way for fast and slow terms to have different coefficients. In this post, volcanic and other terms have separate regression coefficients by design, so the regression itself accomplishes the compensation needed for the brief duration of volcanic forcing. It’s worth noting that since this post performs straight regression, without attempting to account for the time constants of the climate system, it shouldn’t be taken as giving realistic estimates of climate sensitivity.
The point of the “fake forcing” post was to show that using a separate coefficient for volcanic forcing is a clumsy but effective way to account for the time constants of the climate system, but it most certainly is not evidence of failure on the part of GISS ModelE. Unfortunately, whenever fake skeptics encounter something they don’t understand they interpret is as proof of incompetence, gross error, and/or fraud by mainstream climate scientists. They’re so intent on yelling “Gotcha!” they don’t consider the possibility that their own understanding might leave something to be desired.]
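The compensation described in that response can be seen in a toy model (entirely invented: an exponential response kernel, one sinusoidal "solar" forcing, and two one-month "volcanic" spikes):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400                                     # months
t = np.arange(n)

solar = np.sin(2 * np.pi * t / 132.0)       # smooth ~11-year cycle
volc = np.zeros(n)
volc[100], volc[250] = -3.0, -2.0           # two brief "eruptions"

# "True" temperature: total forcing convolved with a fast exponential response
kernel = np.exp(-np.arange(60) / 24.0)
kernel /= kernel.sum()
def respond(f):
    return np.convolve(f, kernel)[:n]
temp = respond(solar + volc) + 0.05 * rng.standard_normal(n)

def fit(X, y):
    """OLS fit; return variance-explained and the coefficients."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1.0 - np.var(y - X @ coef) / np.var(y)
    return r2, coef

# (a) one coefficient on the raw *total* forcing: forced to compromise
r2_total, _ = fit(np.column_stack([np.ones(n), solar + volc]), temp)

# (b) separate coefficients: the regression shrinks the volcanic term on its
# own, compensating for how strongly the response smears out a brief spike
r2_sep, coef_sep = fit(np.column_stack([np.ones(n), solar, volc]), temp)
```

Model (b) always fits at least as well, and its volcanic coefficient comes out much smaller than its solar one even though both forcings enter the "truth" with identical sensitivity: the separate coefficient is doing the compensating, exactly as described, and implies no error anywhere.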
Thanks for the response. I should have been a bit more specific: My main contention is that first convolving the solar and volcanic terms with the fast response term should give a better predictor of temperature than using the raw forcings. That’s what my physical intuition tells me. And I’ve shown in my calcs that adjusting the coefficients can mop up a slowdown in warming over the last decade, and applying the response function changes the coefficients to restore the slowdown.
I’m getting better at R, so I’ve now tried adding either a quadratic term, or a ramp from 2000, to the regression. For GISTEMP, both terms show a speedup rather than a slowdown in warming using the raw forcings!
However if my thesis is correct, then adding in the response function should change that. But the quadratic still shows a speedup of warming. The ramp function shows a very slight slowdown over the last decade, but p~18%, so it is nowhere near significant.
So my conclusions are:
– I can produce no evidence that applying the response function improves the results, and doing so produces a solar term which is unrealistically small when compared to multiple other studies.
– There is little or no evidence of a slowdown, even when we specifically include a term to allow for it.
I’ve thrown everything I can think of at this, and it all says that you’ve got it right.
Someone else has spotted the possibility of lagged volcanic cooling causing an overestimate of the solar contribution:
Interestingly, the number he comes up with for the actual impact of the solar cycle (0.02K) is the same figure I got from my version of your two box model.
I’ve been posed the following question about Foster & Rahmstorf by climate sceptics Anthony Cox and Tim Curtin on “The Conversation”…
” One way of showing that Mr Curtin is wrong would be to explain why temperature, and solar, can be detrended by first differencing but CO2 requires a 2nd differencing. It would be great if you could do that in the context of atmospheric concentrations of CO2 increasing at a linear rate while temperature detrended for natural variation to leave a pure AGW signal, according to Foster and Rahmstorf, shows “no indication of any slowdown or acceleration of global warming,”. ”
Am I correct that 1st and 2nd differencing were not used in the analysis in Foster & Rahmstorf? Instead, was it a multiple regression fit to the unadjusted temperature data (accounting for MEI, AOD, TSI and a trend with time)?
Er…if CO2 needs to be differenced twice to detrend it, it isn’t increasing linearly. So the question basically amounts to ‘explain why X is true, in the context of X being false’. Incoherent word salad is so much fun.
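The point about differencing orders is elementary to check (a few lines in Python; nothing here is specific to CO2 or temperature data):

```python
import numpy as np

t = np.arange(50, dtype=float)
linear = 2.0 + 0.5 * t                      # grows at a constant rate
quadratic = 2.0 + 0.5 * t + 0.02 * t ** 2   # the growth rate itself increases

d1_lin = np.diff(linear)                    # first differences of the linear series
d1_quad = np.diff(quadratic)
d2_quad = np.diff(quadratic, n=2)           # second differences of the quadratic

lin_flat = np.allclose(d1_lin, d1_lin[0])             # one differencing flattens it
quad_still_trends = not np.allclose(d1_quad, d1_quad[0])
quad_flat = np.allclose(d2_quad, d2_quad[0])          # needs a second differencing
```

A series that needs two differencings to flatten is, by definition, accelerating rather than increasing linearly, which is exactly the contradiction in the question as posed.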
Thank you for the prompt response.
I should note that Anthony Cox asked the question on The Conversation, while Tim Curtin merely claimed (via email) that I did not know what the differencing method was.
The discussion that prompted my question is online http://theconversation.edu.au/we-do-need-drastic-action-on-climate-change-a-response-to-the-wall-street-journal-5059
From the same discussion thread, this time from Tim Curtin…
Michael: if I am allowed to say this, Foster & Rahmstorf actually needed to do first differencing in order to avoid the problems of autocorrelation. Had they performed the Durbin-Watson tests, this would have been confirmed.
…and I suspect I know your response to this.
[Response: It’s absurd. We didn’t do first (or any) differencing, you don’t have to do so to correct for autocorrelation (read the paper), and first-differencing doesn’t actually remove the effects of autocorrelation. If this is the level of nonsense we have to put up with …]
Thank you and I agree with your sentiment. Mr Curtin is affiliated with the Lavoisier climate sceptic group in Australia and is well known for his errors (http://scienceblogs.com/deltoid/2009/03/tim_curtin_thread.php).
Tamino, I, too, would like to hear your thoughts on the notion of Troyca’s that F&R 2011 lends support to Spencer & Braswell. While I like your paper’s eloquent separation of short-term noise from the real signal, AFAICS (which admittedly may not be very far) it’s true that your implied instantaneous surface temperature response of 0.57 C/W/m^2 to the TSI index, prima facie, does imply a relatively large influence of radiative noise.
I realise that there are multiple other issues with SB11 (no ocean, no El Niño, cart-before-the-horse errors with the cloud–ENSO relation, unfounded assumptions of clouds being a major forcing rather than a feedback, etc.) which render it poor anyway, to say the least, but your analysis does appear to contradict a core point in Dessler’s rebuttal of SB11.
I’m a bit sceptical about Troyca coining the term “cloud forcing efficacy”, since Dessler, too, explicitly notes that clouds are a feedback and not a forcing, but nevertheless, even if the cloud response coefficient is just one tenth of your solar response, it still leaves quite some room for cloud fluctuation having a non-negligible impact (6 times Dessler’s 0.005 C, if I am able to calculate correctly).
Has anyone estimated the transient climate response (TCR) from the Foster & Rahmstorf results?
I don’t post on the Conversation, but I think that it’s important for someone to call Tim Curtin on his egregiously nonsensical comment:
In no universe are polynomials a “better” fit to “pick up the highly non-linear role of ENSO and related oceanic decadal variations”. All that polynomials (especially higher-order ones) do is to follow the trajectories over the fitted time intervals, simply because a surfeit of parameters is used to allow the curves to snake every which-way. As soon as the fitted time period is passed in either direction, the polynomial almost always soon parts company with the real world – hardly “better”, certainly not informative about the nature of the data, and completely useless for projection.
Issue Curtin this challenge: pick any two-decade time interval, pick an order of polynomial with justification for doing so, and then fit a polynomial of that order to the temperature data for the interval. In almost no combination will the polynomial give a trend line that offers anything describing reality more than a decade beyond either side of the original input period*.
Curtin is neither a serious commenter, nor an academic one. The guy’s a mathematical/statistical clown who juggles numbers in the same way that the red-nosed version of the farceur juggles cream pies.
[* Oo, here’s an idea – if someone has a knack for applets, it should be relatively simple to construct a widget that allows people to do this with their dataset, their time period, and their polynomial order of choice, and with the whole record mandatorily superimposed. With a bit of fiddling that should give most people the opportunity to see how silly is the fitting of high-order polynomials to short intervals of data. Perhaps even WfT, if potential for misuse is tightly constrained?]
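Pending someone building that widget, the exercise itself is a few lines of Python/numpy. The "record" below is invented (a steady trend plus a quasi-periodic wiggle), but the behaviour is generic: a 5th-order polynomial fitted to a two-decade window hugs the window and then departs wildly one decade out.

```python
import numpy as np

years = np.arange(1960, 2011, dtype=float)
x = years - 1990.0                          # centred time keeps the fit well-conditioned
# invented record: steady trend plus a quasi-periodic wiggle
temps = 0.017 * (years - 1960) + 0.1 * np.sin(2 * np.pi * (years - 1960) / 9.5)

window = (years >= 1980) & (years < 2000)   # a two-decade fit interval
coef = np.polyfit(x[window], temps[window], 5)
pred = np.polyval(coef, x)

rmse_inside = np.sqrt(np.mean((temps[window] - pred[window]) ** 2))
departure = abs(pred[-1] - temps[-1])       # how wrong it is at 2010, a decade out
```

Inside the window the residual is a few hundredths of a degree; outside it the x^5 term takes over and the curve misses the "record" by more than a degree, an order of magnitude worse than the in-window fit ever was.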
I had a PI in grad school who liked us to fit our data to high-order polynomials to show our sponsors. We never could figure out exactly what the hell he was telling those people, as the models were physically nonsensical, but yes, they would fit the curves well.
After reading this:
” In a serious academic discussion I think mentions of claimed statistical trends should be more careful than is evident here or in the paper by those idjits Foster & Rahmstorf 2011 which displays their ignorance of the 2nd derivative.”
you can only conclude that Curtin is trying to make up for foolishly walking into the public humiliation he received here:
on the topic of the second derivative. Who could forget it?
Any attempt to repair his credibility with idiotic outbursts like the one quoted serves only to sink it deeper into the septic tank. As transparent as it is pathetic.
And as for Anthony Cox, he talks like a lawyer who bought Statistics for Really Big Dummies a few hours before walking into court. Perhaps he has a pet “expert” witness tucked away somewhere (I have a theory about his identity…), but if so there’s something being lost in translation either between Cox and his ‘witness’, or between the ‘witness’ and reality.
Curtin is a world-class loon.
He posted a whole load of nonsense about fitting 5th order polynomials (in a quest for ever higher R^2 values) at the epic Deltoid thread a year or two ago as well, starting somewhere around #285 and #362 (http://scienceblogs.com/deltoid/2010/04/tim_curtin_thread_now_a_live_s.php#comment-2589900). I provided some feedback on the physical plausibility, and asked “Tim, for bonus points, figure out what happens with the 5th order polynomial as you run backwards in time.” He responded “Actually it picks up ENSO with perfection. Prove me wrong, and show me your graph doing so.”
So I did: (#405, http://scienceblogs.com/deltoid/2010/04/tim_curtin_thread_now_a_live_s.php#comment-2596641).
You can probably predict that this had no visible impact on his level of understanding.
“Second differences” made an appearance, certainly by #429 (http://scienceblogs.com/deltoid/2010/04/tim_curtin_thread_now_a_live_s.php#comment-2598762)
And IIRC that was the thread where he said numerous “interesting” things, including arguing that it’s a fallacy to say that rising CO2 is making the oceans more acidic because they are not currently acidic; that if there was enough CO2 it would cause such an ocean pH reduction that we could then use seawater for irrigation and drinking (#71, based on extrapolating a relatively simple interpolation model far beyond its original parameter space scope). Also: screw the sharks if they can’t cope with the new pH levels. And he’s a “we’re CO2 starved and more is unmitigated agricultural goodness” loon (which Burt Rutan appears to be as well, come to think of it).
The thread makes fascinating reading if you have a couple of hours spare, have taken the precaution of padding your head, desk and both palms, and have no familial history of chronic concussion.
“we’re CO2 starved and more is unmitigated agricultural goodness”
And the average American is fat- and carbohydrate-starved, too.
The “CO2 is plant-food” meme is one of the most mind-numbingly thoughtless ones out there, IMO, in that every single last one of us knows by now (even if we don’t act appropriately on the knowledge) that “more food” is not always a Good Thing. And with regard to nitrogen and phosphorus–just as “natural” and more immediately beneficial to plants–isn’t it a little more than ironic that they are the nutrients responsible for creating the anoxic “dead zones” in the Gulf of Mexico and elsewhere as a result of fertilizer run-off?
Yeah–too much plant food.
The gist of this is that natural factors are now well understood, so any departure from their net effect must be down to man – a figure quantified above.
[Response: No. The effect of these natural factors on global temperature is reasonably well quantified. But that doesn’t mean we know how to predict el Nino, or the solar cycle, or volcanic eruptions — we just know about how much they affect temperature.]
Which presumably means there is no longer any reason that reliable predictions of surface temperatures cannot now be made, say for 5–10 years’ time?
OK, I can see this is a step in the right direction. But while we can’t yet predict eg el Nino, how well can we be said to understand it ? Like, what is behind it, and what other effects of these as yet unknown causes might there be…? The possibility thus exists that all or some of what is above attributed to man/CO2, could be something else/natural.
[Response: No it doesn’t. El Nino doesn’t show long-term trend like global temperature has recently. El Nino has been in place as long as we’ve been measuring it, and according to proxy evidence for a lot longer. Rapid temperature change such as is presently observed has not. And when the impact of el Nino (and other known factors) is accounted for, what’s left is not from el Nino — that’s the man-made global warming part.
The idea that some or all of modern global warming can be attributed to el Nino is a pipe-dream of those who don’t want man-made global warming to be true.]
Erica, think of it this way. The combined phenomenon of El Nino and La Nina is covered under the El Nino Southern Oscillation–oscillation meaning it goes up and down. That is not going to make temperatures go up and up and up.
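A quick check with made-up numbers shows why an oscillation can't masquerade as a trend over any reasonably long record:

```python
import numpy as np

t = np.linspace(0.0, 32.0, 385)             # 32 years of monthly data
enso_like = np.sin(2 * np.pi * t / 3.7)     # a pure oscillation, ENSO-ish period
warming = 0.017 * t                         # a steady trend, deg C per year

def ols_trend(t, y):
    """Ordinary least-squares slope of y against t."""
    X = np.column_stack([np.ones_like(t), t])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

trend_osc = ols_trend(t, enso_like)             # tiny: the ups and downs average out
trend_sum = ols_trend(t, warming + enso_like)   # essentially the injected 0.017
```

The oscillation contributes a slope that shrinks toward zero as the record lengthens, so the up-and-up-and-up part has to come from somewhere else.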
Attribution is tricky, and there are many things we do not understand about climate. Greenhouse gasses, though, have some very distinctive characteristics that make them easy to spot in the paleoclimate record and in the current climate. As a result, they are among the forcings we understand best that are acting on the climate. The existence of the greenhouse effect has been known since the 1850s, and anthropogenic warming due to fossil fuel consumption was predicted back in 1896! What we do not know does not negate what we do know.
If you’d like to read more about the history of the greenhouse effect, check out Spencer Weart’s on-line history linked on the sidebar.
Yes indeed, the idea that anyone actually believes that some or all of modern global warming can be attributed to el Nino, is but a pipedream of some people who urgently want man-made global warming to be true.
But really my main point above is that while factoring out the effects of e.g. el Nino is a sound idea, not knowing the cause/s, and how to predict it, does suggest the book is still very far from being closed.
[Response: Of course. And it’s only one of myriad phenomena that aren’t completely understood. Some are reasonably well fathomed but details remain, for some we have but scratched the surface, for some the whole process is still a mystery. We are a very long way from complete understanding of earth’s climate. Mainstream climate scientists are aware of these limitations, in fact for most of them it’s what made them want to be climate scientists, so they could explore these unknowns and advance our knowledge, even if but a small step.
Claims that climate scientists believe they have all the answers and “the book is closed”, are but propaganda from those who wish to discredit them. And — be especially wary the oft-claimed fallacy that what we don’t know invalidates what we *do* know.]
“the idea that anyone actually believes that some or all of modern global warming can be attributed to el Nino, is but a pipedream of some people who urgently want man-made global warming to be true.”
Bullshit. That idea has been tried often and pushed so hard that it required a rebuttal in the peer-reviewed literature
I think Erica meant to say “who urgently want man-made global warming to be false.” If you look at the preceding comments that is likely what makes the most sense, but good link in either case!
Perhaps it’s worth noting that ENSO does emerge in many model studies, and that modeling and statistical methods of all sorts are being employed all the time to nibble away at the ENSO problem. A search of Google Scholar found 18,000+ hits for “ENSO + model”:
Lots of interesting stuff to browse in there, no doubt.
Here’s a recent paper from that search, just as an example, of work on predicting ENSO over a 6- to 16-month period. Says that ENSO response in their work at least was ‘linear’–ie., not chaotic–over that span:
I doubt anybody expects to be able to model ENSO 10 years ahead any time soon, if ever, though. For such spans, its behavior is probably no longer linear.
Very interesting and, I think, important article. But I would have liked to have seen the raw data graph in the same style as the adjusted data graph, with the lines overlaid and the same vertical scale; it would have been easier to compare the differences between them, and it also would not leave you open to accusations that some are bound to make.
[Response: What accusations?]
I’ve taken a shot at building a student-friendly analog to the Foster & Rahmstorf model, thinking this could make an interesting and substantively important example for a chapter on time series. Any criticisms or suggestions are welcome.
Click to access temperature_model_notes1.pdf