Sharper Focus

The two major satellite-based estimates of lower-troposphere temperature, from RSS and UAH, have published their December values to complete the year 2010. In both cases the annual average for 2010 ended up a close 2nd to 1998. Many are eagerly anticipating the imminent GISS surface-temperature value for December, which will complete 2010, since many expect the GISS annual average for 2010 to set a new record high. But we don’t need December’s value from GISS to continue our comparison between different temperature data sets.


In our previous comparison of 3 land-based temperature estimates (GISS, HadCRUT3v, and NCDC) and 2 satellite estimates (RSS and UAH) we discovered that by and large the agreement between data sets is impressive. We also discovered that the satellite data sets differed from the ground-based data in notable ways — in particular, they show greater response to el Nino and to volcanic eruptions. We even noted that different data sets show a different residual annual cycle. And we noted that all five main global temperature data sources show about the same rate of global warming, except UAH, which warms slightly more slowly.

A reader asked whether or not it’s possible to remove the el Nino influence from temperature data, since the el Nino response is one of the key differences between satellite and land-based temperature estimates. Of course we can’t remove the el Nino influence perfectly, but we can do so approximately, and the same is true for volcanic eruptions and the residual annual cycle. When we remove the influence of these exogenous factors, we hope to eliminate some of the variance which obscures what remains, bringing it into sharper focus: the global warming signal, and other natural variations. Let’s do just that for GISS, RSS, and UAH, then compare and contrast. Of course our approximation of the influence of exogenous factors is imperfect, but rather than let the “perfect” be the enemy of the “good,” we’ll remove what influence we can and see what’s left over.

For el Nino we’ll use MEI, the multivariate el Nino index, which looks like this:

For volcanic forcing we’ll use Ammann et al. 2003, which is a gridded data set so it must be area-weighted and averaged to give a global estimate. To extend the volcanic data to the present, I assumed zero volcanic forcing after the end of the Ammann et al. data (December 1999). The volcanic signal therefore looks like this:
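
The area-weighting step amounts to a cosine-of-latitude weighted average. Here is a minimal sketch in Python; the array layout and names are assumptions for illustration, not the actual Ammann et al. file format.

```python
import numpy as np

def global_mean(forcing, lat_centers):
    """Area-weighted global mean of a (time, latitude) gridded field.

    Each latitude band is weighted by the cosine of its center latitude,
    which is proportional to the band's surface area on the sphere.
    """
    w = np.cos(np.deg2rad(lat_centers))   # area weight for each band
    w = w / w.sum()                       # normalize so the weights sum to 1
    return forcing @ w                    # weighted average over latitude

# Toy example: 3 months, 4 latitude bands (made-up numbers)
lats = np.array([-60.0, -20.0, 20.0, 60.0])
field = np.array([[0.1, 0.3, 0.3, 0.1],
                  [0.0, 0.2, 0.2, 0.0],
                  [0.0, 0.0, 0.1, 0.0]])
print(global_mean(field, lats))
```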

We can approximate the residual annual cycle with a 2nd-order Fourier series, and the global warming signal (since 1975 for GISS, since the beginning of data for RSS and UAH) as a linear trend.

Hence for each temperature data set, we’ll do a multiple regression of the data since 1975 (or whatever we’ve got) as a function of MEI, volcanic forcing, a 2nd-order Fourier series, and a linear time trend. We’ll allow for a time lag in the influence of MEI and volcanic forcing. Then we’ll take the original data and remove the estimated part due to MEI, volcanic forcing, and annual cycle. Finally we’ll put them all on a common baseline, using 1980.0 to 2010.0. This gives us an “adjusted” data set (a name which may give some people fits), one compensated for el Nino, volcanoes, and the residual annual cycle.
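
To make the recipe concrete, here is a minimal sketch of that regression in Python. It is not the exact code used for this post: the variable names are placeholders, the lags are passed in rather than optimized (lag selection is discussed in the comments), and the lagged series are simply padded at the start.

```python
import numpy as np

def design_matrix(t, mei, volc, mei_lag, volc_lag):
    """Columns: intercept, linear trend, lagged MEI, lagged volcanic forcing,
    and a 2nd-order Fourier series for the residual annual cycle.
    t is decimal year; mei and volc are monthly series aligned with t."""
    def lagged(x, k):
        return np.r_[np.full(k, x[0]), x[:-k]] if k > 0 else x
    return np.column_stack([
        np.ones_like(t),                        # intercept
        t - t.mean(),                           # linear time trend
        lagged(mei, mei_lag),                   # el Nino index, lagged
        lagged(volc, volc_lag),                 # volcanic forcing, lagged
        np.sin(2*np.pi*t), np.cos(2*np.pi*t),   # 1st harmonic of annual cycle
        np.sin(4*np.pi*t), np.cos(4*np.pi*t),   # 2nd harmonic
    ])

def adjusted(temp, t, mei, volc, mei_lag, volc_lag):
    """Fit the full model, then return temp with the MEI, volcanic, and
    annual-cycle contributions removed (trend and intercept stay in)."""
    X = design_matrix(t, mei, volc, mei_lag, volc_lag)
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
    exogenous = X[:, 2:] @ beta[2:]             # everything except intercept and trend
    return temp - exogenous
```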

For example, here’s the original data for GISS, together with the model fit:

If we remove the el Nino, volcanic, and annual-cycle signals we have what we’ve called the adjusted GISS data:

With much of the variation due to exogenous variables gone, the warming trend is even more evident.

We can likewise compute a model, and adjusted data, for RSS and UAH temperatures, which produces very similar results. We can see just how similar they are by plotting them all on the same graph (click the graph for a larger, clearer view; it’s worth a close look):

In spite of their differences, the most impressive thing is their agreement. It’s also enlightening to compute annual averages for each of the adjusted data sets (the GISS annual average for 2010 is the average of only 11 months, not 12):

The agreement between the different sources is indeed impressive. Even more impressive is the inexorable increasing trend in all three temperature series. This is what global warming really looks like, when we clear our view of as much of the clutter as we can. Any talk of a recent “levelling off” or even “cooling” is nonsense.

It may deserve notice that for 2010, the GISS adjusted temperature is lower than either RSS or UAH — so during 2010 it was the satellites that ran hot. With a month yet to report, it’s unlikely that the GISS December value will change that situation.

The models we used for the exogenous factors confirm that the satellite data respond more strongly to both el Nino and volcanoes. The satellite response to el Nino is nearly twice as large as the GISS response, while the response to volcanic forcing is about 50% larger for the satellites than for GISS. Removing these influences has brought the data sets much closer together.

The remaining global warming trend (+/- 2-sigma, in °C per year) is 0.0166 +/- 0.0026 for GISS, 0.0155 +/- 0.003 for RSS, and 0.0133 +/- 0.003 for UAH.

And for those in love with hottest years, all three adjusted data sets rank 2010 as #1, and both GISS and UAH place 2009 in the #2 slot.

80 responses to “Sharper Focus”

  1. You don’t know what you’re talking about; clearly there has been a strong cooling trend, as evidenced by the cold weather we have now. If we take your data and do a 180-degree in-page rotation about a central horizontal line, AKA WUWT-style “analysis”, we get a clear cooling trend!

  2. Your final graph is certainly very impressive.
    One thing you don’t mention is polar, especially Arctic, coverage, which is frequently put forward as the reason that HADCRUT3 shows slower warming than GISS, since GISS has better coverage of the Arctic.
    Is there much difference in Arctic coverage between GISS, UAH and RSS, and what difference would adjusting for it make to your final graph?
    Have you done the same adjustment for HADCRUT3? And does it, as would be expected, still show a slower warming?

    [Response: Satellites don’t cover the area north of 82.5N latitude.

    I haven’t done HadCRU or NCDC, but I’ll probably run them when all the data sources have reported end-of-year for 2010.]

  3. Was there really no volcanic forcing after 1999? Wouldn’t that affect the recent data?

    [Response: As far as I know, there’s no significant volcanic forcing. If there were, it would make the model cooler in the last decade, and therefore make the adjusted data even warmer.]

  4. Thanks very much, really interesting!

  5. @Tamino,

    I think you’ll find these interesting…

    My question is…what if you apply Chen’s work to Thompson’s?

    Thompson, D. W., J. M. Wallace, P. D. Jones, and J. J. Kennedy (2009). “Identifying Signatures of Natural Climate Variability in Time Series of Global-Mean Surface Temperature: Methodology and Insights”, Journal of Climate, 22, 6120–6141, DOI: 10.1175/2009JCLI3089.1. Available at:
    http://www.atmos.colostate.edu/ao/ThompsonPapers/ThompsonWallaceJonesKennedy_JClimate2009.pdf [Accessed 01 January 2011]

    Chen, J., A. D. Del Genio, B. E. Carlson, and M. G. Bosilovich (2008a). “The Spatiotemporal Structure of Twentieth-Century Climate Variations in Observations and Reanalyses. Part I: Long-Term Trend”, Journal of Climate, 21(11), DOI: 10.1175/2007JCLI2011.1. Available at:
    http://gmao.gsfc.nasa.gov/pubs/docs/Chen343.pdf [Accessed 01 January 2011]

    Chen, J., A. D. Del Genio, B. E. Carlson, and M. G. Bosilovich (2008b). “The Spatiotemporal Structure of Twentieth-Century Climate Variations in Observations and Reanalyses. Part II: Pacific Pan-Decadal Variability”, Journal of Climate, 21(11), 2634–2650, DOI: 10.1175/2007JCLI2012.1. Available at:
    http://journals.ametsoc.org/doi/pdf/10.1175/2007JCLI2012.1 [Accessed 01 January 2011]

  6. Do you have the appropriate data to easily add in more exogenous datasets? E.g., the Judith Lean solar forcing dataset, or (for the time period it covers) the Susan Solomon stratospheric water vapor trend data? (Though I’m unclear as to whether the latter has been determined to be natural or anthropogenic in nature.)

    -M

    [Response: I think I tried that in the past, but the solar forcing response turned out not to be statistically significant, and had no real impact on the final result.]

  7. According to WUWT, record lows outpaced record highs by 19 to 1! In the US and Canada. In one week in December

    [Response: US and Canada — not the globe? One week in December? Wow.]

  8. Rattus Norvegicus

    Greg, you don’t even have to be as radical as that, just rotate the graphs by about 15 degrees clockwise and voila! no warming!

  9. Rattus Norvegicus

    I forgot to supply a cite for the legitimacy of this type of correction, so here it is:

    http://denialdepot.blogspot.com/2009/09/arctic-sea-ice-staggering-growth.html

  10. Very interesting. What is the residual annual cycle?

    [Response: When you compute anomalies relative to some baseline period, you also remove from the data the average annual cycle during the baseline. If the annual cycle changes, then the difference between its momentary form and the baseline average form will remain, as a “residual” annual cycle.]
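
    A toy numerical illustration of that point (entirely synthetic numbers, not any of the real series): if the annual cycle’s amplitude changes over the record, ordinary anomalies still contain a residual cycle outside the baseline period.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    years = 40
    t = np.arange(years * 12)                  # month index
    # synthetic series: an annual cycle whose amplitude slowly grows, plus a trend
    cycle = (1.0 + 0.01 * (t / 12.0)) * np.sin(2 * np.pi * t / 12.0)
    temp = cycle + 0.017 * (t / 12.0) + 0.1 * rng.standard_normal(t.size)

    baseline = temp[:30 * 12].reshape(30, 12)      # first 30 years as the baseline
    climatology = baseline.mean(axis=0)            # average annual cycle in baseline
    anom = temp - np.tile(climatology, years)      # standard anomaly calculation

    # Mean annual cycle of the anomalies *after* the baseline: not zero, because
    # the cycle's shape has drifted away from the baseline average.
    residual_cycle = anom[30 * 12:].reshape(10, 12).mean(axis=0)
    print(np.round(residual_cycle - residual_cycle.mean(), 3))
    ```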

  11. Tamino – have you thought about adding solar cycle as an additional regression component? Your adjusted data does appear to have a roughly decadal oscillation…

  12. LOL @ Rattus Norvegicus and link, you made my day; I was just about to post something similar

  13. Do you have any plans to try to reduce the variability in the adjusted GISS data by including other variables? It seems to me there are at least two quite large candidates, namely the oil crisis of the 70s and the decline in industrial activity in Russia during the early 90s, that might do that. The remaining variability might just be occasional persistent weather phenomena in highly productive ecological areas that affect the amounts of bound carbon, such as droughts and reversals of other oceanic currents. Of course this assumes CO2 is the sole predictor of global temperature, which is definitely not true; there are solar variations, which would be most effective when the frequencies that chlorophyll uses are at their highest or lowest. The Earth system has so many other variables too, such as methane ppb; I got confused and didn’t try those out. Thanks for spelling out that all the dominant data sets are showing the same thing.

  14. For the solar cycle, you could assume the 0.1 degree peak to trough estimate to be correct and see what it looks like (not sure of the lag). The more recent years would be even warmer given the low solar activity.

  15. I like this post, but it has always seemed to me that correcting for ENSO could be the wrong thing to do. There are two reasons for my suspicion:

    First, ENSO is (at least sometimes) defined by sea surface temperatures or things related to sea surface temperatures, so subtracting out effects on temperature of something defined by temperature seems wrong.

    Second, we don’t really know what causes ENSO, so if the frequency and strength of it changes over time, it could be related to the trend we’re trying to measure.

    Can these complaints be briefly rebutted?

  16. Wasn’t the UAH dataset revised twice this year, in addition to “problems” with the data collection? They have posted “Channel data problem beginning mid-Dec 2010, processing suspended until resolved”.
    Curiously, the result for 2010 came in just a tiny fraction below 1998. How surprising.
    Imagine the uproar among the denier crowd if it was the GISS dataset that had been tweaked repeatedly (with higher anomalies as a result) and then had “problems” during the last month, before posting a record value.
    The thing to keep in mind is the following:
    According to 2008 “skeptic” predictions, UAH data should have cooled rapidly after 2008 (due to Global Cooling), but instead UAH very nearly set a record despite being tweaked for lower anomalies, in addition to:
    The El Nino was mediocre and short lived
    The La Nina is very strong (as ironically admitted by deniers when explaining the natural cause of the Australian flooding)
    Maximum cooling effect from the deepest solar minimum in a century
    Last, but not least, the almost incomprehensible cooling power (according to “skeptics”) of the negative PDO.

    • The people at UAH decided to do all manner of things as soon as the El Niño warming started to kick in: changing the length of the average, switching to a new version, changing the baseline. Every one of those things visually softened what the high trend was showing.

      But I’ll forgive them for it, because I enjoy keeping an eye on the ch05 graph on their Discover website.

      [Response: Changing the baseline doesn’t alter the visual appearance of the graph — it only shifts the numbers along the y-axis.

      The new UAH baseline is the only 30-year period covered by the data which consists of entire *decades* (where “decade” is defined the old way, i.e., from the start of ’01 to the end of ’10). It seems to me to be a logical choice.]

      • Sure it is a logical choice. I just find the timing of all the changes peculiar.

      • I know the deniers give us plenty to be paranoid about, but let’s keep it real. When else would they change the baseline other than at the end of the post-baseline decade?

  17. Very nice… Really it is much more impressive than i thought it would be. Like Slioch, I also had the pre-conceived notion that difference in polar coverage was the major reason for the trend difference.

    Unrelated:
    I assume that the multiple-regression fitting uses least squares. If there is a very strong trend in the predictand but not in any of the predictors, then the end of the series will always have huge squared residuals, and will therefore weight the misfit more heavily there. For that reason it might be good to use robust fitting (e.g. minimize absolute deviations). I imagine this could be very significant for longer series (also because there is larger noise in the beginning).

  18. Regarding volcanic forcing: it might be useful to look at the related Anthropogenic sulfur dioxide emissions: 1850–2005. While this doesn’t have the sharp spikes of huge volcanoes, if people look at p.16146 (p.36 of PDF), the SO2 emissions do jiggle around on a multi-year basis, including that trend reversal at the end. I don’t have data handy post-2005, and obviously, human SO2 emissions tend to have more regional effects.

  19. If one considers major regular volcanic eruptions to be the norm, as it appears to be from 1963 through 1998, would not the absence of such eruptions since 1998 lead to warmer air? It is the sunlight warming the ocean waters that stores heat, so 12 years of a clear atmosphere would result in an accumulation of heat in the oceans and a warmer air temp as it is the oceans that warm the air. No?

    • B Buckner,
      Twelve years is a relatively short time, and for the most part the effects of volcanic eruptions manifest mainly on short timescales. What matters for the long-term is the mean rate of occurrence over climatic periods (~30 years).

  20. Thanks for that John Mashey. One can look at that figure and imagine the impact on temperatures. A statistical analysis would be better of course.

    Regarding regional impacts, I’ve never seen anything regional mentioned with respect to either UAH or RSS. Am I missing out?

    • Apparently you’re not reading a wide variety of stuff. What’s up with that, Steven?

      GISS Arctic Trends Disagree with Satellite Data

      • It’s true — I don’t read very broadly. I have only enough time for high quality blogs like this one.

    • My use of “regional effects” refers to the SO2 plumes. The right sort of large volcano sends matter to the stratosphere, but smaller ones and industry emissions are more local.

      I grew up North of Pittsburgh, PA, across the transition from being a steel-making center to not.
      During the 1950s, businessmen took an extra white shirt to work and changed at lunch-time. (That’s not SO2, but it gives you the idea. In downtown Pittsburgh, seeing a clear sky was rare. Twenty miles North was better.)

  21. Eyeballing the data, I get the impression that an oscillation with a roughly two-year period is apparent in the residuals from a linear trend.

    Is there another ENSO-like oscillation that could explain this pattern? Or is this an artifact of an incomplete ENSO correction?

    [Response: The spectrum of the residuals looks like a red-noise spectrum, although there is a suggestive peak at period 3.6 years, which I’ve heard associated with el Nino.]
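
    For anyone who wants to look for such peaks themselves, a minimal sketch of the spectrum calculation (assuming a monthly residual series is already in hand):

    ```python
    import numpy as np
    from scipy.signal import periodogram

    def residual_spectrum(residuals, samples_per_year=12):
        """Periodogram of monthly residuals, returned as (period in years, power).
        A red-noise spectrum rises smoothly toward long periods; an ENSO-like
        signal would appear as a local peak near 3-4 years."""
        freq, power = periodogram(residuals, fs=samples_per_year, detrend='linear')
        freq, power = freq[1:], power[1:]     # drop the zero-frequency bin
        return 1.0 / freq, power
    ```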

  22. Aslak’s question about whether robust or resistant regression techniques might give results different from least squares is something I’ve occasionally wondered about, but been too lazy (until just now) to check out. So I tried something simple, not directly addressing Aslak’s point but regressing monthly GISTEMP Jan 1975-Nov 2010 on year using three different methods. Here’s what they said.

    ordinary least squares: gistemp = -35.1 + 0.018 × year
    median regression: gistemp = -35.9 + 0.018 × year
    robust (IRLS) regression: gistemp = -35.3 + 0.018 × year

    Yet another demonstration that the trend is robust. There’s only slightly more variation between methods (slopes of 0.0161, 0.0143, 0.0165) if, in a contrarian spirit, we limit the analysis to the past decade. It would be easy to make similar replications of other simple or multiple regression models.
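
    For reference, the same three-way comparison can be sketched with statsmodels, assuming a monthly anomaly array gistemp and decimal years year are already loaded (the names are placeholders):

    ```python
    import statsmodels.api as sm

    def compare_trend_estimators(year, gistemp):
        """Fit the same straight-line trend three ways; return slopes in deg/yr."""
        X = sm.add_constant(year)                          # intercept + year
        ols = sm.OLS(gistemp, X).fit()                     # ordinary least squares
        lad = sm.QuantReg(gistemp, X).fit(q=0.5)           # median (LAD) regression
        rob = sm.RLM(gistemp, X, M=sm.robust.norms.HuberT()).fit()  # robust IRLS
        return {"ols": ols.params[1],
                "median": lad.params[1],
                "irls": rob.params[1]}
    ```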

  23. Tamino,

    thanks for the post. Could you provide us with the regression coefficients and lags? In other words, if MEI rises 1 point, what does this mean to the different datasets?

    Thanks

  24. And eh, do you have a graph of the natural effects you discounted from the annual means for the three datasets?

  25. David B. Benson

    As best as I can determine, the peak in the spectrum is quite close to ~3.75 years, although some analyses state 4, 3.8 and now 3.6. This is due to a persistent Rossby/Kelvin wave in the North Pacific (Rossby) and the equator + west coast of the Americas (Kelvin). It does appear to be related to ENSO, but for that there are also longer period components, about 5–8 years but the spectrum for that is quite spread out.

  26. Not that I have any criticisms of this post, because I think it is excellent, but I think there has been a plethora of papers which have come out suggesting that the Atlantic Multidecadal Oscillation does affect global temperatures (not only a redistribution of heat) and has contributed to the warming over the past 3 decades. I believe one paper termed it IMP (ftp://www.iges.org/pub/delsole/dir_ipcc/dts_science_2010_main.pdf) and another (Semenov et al. 2010) said the AMO contributed to the recent warming, as does Keenlyside’s work (http://meetingorganizer.copernicus.org/EGU2010/EGU2010-11829.pdf)

    I’m certainly not disputing the method you used, more or less just asking is there some way to be able to identify this contribution in the data?

  27. On behalf of those who are afraid to ask: Why is volcanic forcing shown as positive?

    [Response: Just an arbitrary choice of sign.]

  28. Thanks for this Tamino. It’s been irritating to continue to hear ‘cooling since 1998’ arguments, knowing that it only works at all because a strong known natural variation (ENSO) is conveniently left out. Using a too-short period, with a hot spike as both cherry-picked start point and ‘baseline’, the argument embodies clear intellectual dishonesty that is topped off by failing to give consideration to know natural variation.
    I suppose the question of whether there is any trend in ENSO itself arises.

  29. Oops, typo. Should have been “fail to give consideration to known natural variation.”

  30. David B. Benson

    Steve L | January 7, 2011 at 6:26 am — ENSO is always changing:
    Variability of El Niño/Southern Oscillation activity at millennial timescales during the Holocene epoch
    http://www.nature.com/nature/journal/v420/n6912/full/nature01194.html
    and there are thoughts about what causes ENSO:
    http://oceanworld.tamu.edu/resources/ocng_textbook/chapter14/chapter14_02.htm

    ENSO causes global variations in the weather, so when looking for something else, Tamino had the fine idea of removing this source of variation, and applied it again here. It’s such a good idea I’m using it myself in a little (unfinished) project.

    • Thank you DBB for the Moy et al abstract and the chapter link. I’m glad I checked back. I think I can get what I need from the abstract and will look at the chapter when I get time. It’s good to minimize extraneous noise and investigate signal, and my reading will focus on whether ENSO is extraneous. Will report back later.

  31. Tamino, you did a graph with the calendar-year adjusted temperature anomalies.

    But Nature doesn’t care about the human calendar. It would be great to see a 12-month running average, as you did a year ago with GISTEMP data.

  32. Would things change much if you used ONI instead of MEI like Lucia did in a recent post?

  33. Any chance of linking your data series in future (ie Time, Values) rather than just images? Wouldn’t mind running a few what ifs for 2011 given that there is about a 40% chance of an El Nino in 2011 based on the history of past La Nina events.

  34. Happy New Year Tamino.

    Nice analysis.

    BTW, don’t want to sound picky but it’s the multivariate ENSO index.

  35. I was wondering about this for the last couple of days and can’t quite get my head around the way to answer it.

    My concern is about the likelihood of the procedure to create artifacts. You are fitting the real data against the two forcings (ENSO and volcanic) and allowing both an amplitude and a time shift. Then you are assuming a form for the remainder: 2nd order Fourier plus linear trend. So there’s a possibility you’ll get a linear trend no matter what data goes in.

    So suppose you tried this: replace the ENSO or volcanic forcing with some randomly generated data of a similar form. How different is the result? What happens if you try random input data with or without a linear trend?

    I am not enough of a statistician to know how to assess all of it, but it seems some tests would improve the strength of the analysis.

    [Response: Your concerns are unnecessary. If you remove the el Nino and volcanic signals *without* also fitting a linear trend, then the residuals show a very strong linear trend. The trend is in the *data*, not the methodology. The same is true for the residual annual cycle, which I only included because it’s demonstrably there. I only included them all in a single fit, in order to achieve the highest precision.]
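
    That check is easy to state in code. A sketch with placeholder inputs, where predictors would hold the lagged MEI, volcanic, and Fourier columns:

    ```python
    import numpy as np

    def trend_left_in_residuals(t, temp, predictors):
        """Regress temp on the exogenous predictors only (no trend term),
        then measure how much linear trend remains in the residuals."""
        X = np.column_stack([np.ones_like(t), predictors])  # deliberately no trend column
        beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
        resid = temp - X @ beta
        slope = np.polyfit(t, resid, 1)[0]    # slope of residuals vs. time (deg/yr)
        return slope
    ```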

  36. I notice that there are big “dips” in the adjusted temperatures immediately after a major volcanic event, especially for 1982. How long was your volcanic time lag? It looks like it’s about a year (though it’s hard to estimate by eyeballing).

    Is there a physical reason to believe that the impact of volcanic event should be lagged, or why it shouldn’t also have a non-lagged impact?

    There’s also the problem of having only two real events to calibrate your volcanic parameters. But there’s nothing you can do about that for the UAH and RSS data. Guess it’s time for another Pinatubo-sized event.

    [Response: Lags were chosen as those which give the best fit. For GISS and RSS the volcanic lag was 9 months, for UAH it was 8 months.]

    • Ernst K, the physical explanation would seem to be that it takes time for aerosols to be evenly spread around the atmosphere, so for instance, regarding Pinatubo:

      The eruption plume of Mount Pinatubo’s various gases and ash reached high into the atmosphere within two hours of the eruption, attaining an altitude of 34 km (21 miles) high and over 400 km (250 miles) wide. This eruption was the largest disturbance of the stratosphere since the eruption of Krakatau in 1883 (but ten times larger than Mount St. Helens in 1980). The aerosol cloud spread around the earth in two weeks and covered the planet within a year. During 1992 and 1993, the Ozone hole over Antarctica reached an unprecedented size.

      I would interpret “spread around the earth” to mean detectable amounts were showing up worldwide, and “covered the planet within the year” to mean that within the year there was more or less an even distribution.

      So one would expect it to take time for cooling to fully set in …

      • I’m probably just nit-picking here, but I’m not really satisfied with the simple lag approach. If you look at the data tamino used, it already accounts for the time it takes to spread around the world (it’s monthly data distributed across 64 latitude bands). Nevertheless, the big drop in temperature after Pinatubo was a full year after the major eruption.

        Not that I suspect that this would have much effect on the overall results. But I think some of the noise could be reduced if a more physically realistic transformation of the volcanic data was used. But of course, it has to be simple enough that it doesn’t add a bunch of extra calibration parameters.

        Personally, I think it would be better to use the average volcanic forcing over the previous n months. I’d be tempted to use a similar approach for the MEI, but I don’t have access to even Excel on the computer I’m writing this up on. Hopefully, I’ll be able to find time tomorrow.
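
        A trailing average like that is just a one-sided moving mean; here is a sketch of one way to compute it (an interpretation of the suggestion, not the commenter’s code):

        ```python
        import numpy as np

        def trailing_mean(forcing, n_months):
            """Mean of the forcing over the most recent n_months (current month
            included), as an alternative to a single fixed lag."""
            kernel = np.ones(n_months) / n_months
            padded = np.r_[np.full(n_months - 1, forcing[0]), forcing]  # pad the start
            return np.convolve(padded, kernel, mode='valid')
        ```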

  37. TAMINO

    Where is my post?

    Has it gone into your “HIDE” file?

    Why hide?

    [Response: No. Your post went into the “idiot” file. I wonder whether or not you can figure out why.]

  38. Tamino,
    How did you implement a time lag in your regression? And also, aren’t the MEI values bimonthly? How should they be plotted then?

    Cheers

    [Response: Test all possible lags from 0 to 24 months, for both MEI and Volcanic, and use the lags which give the best fit.

    As for plotting MEI values, I suggest plotting the value at the midpoint of the time interval it covers.]
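
    A brute-force version of that lag search, as a sketch (the design-matrix details mirror the sketch in the post above and are assumptions, not the exact code used):

    ```python
    import numpy as np
    from itertools import product

    def best_lags(t, temp, mei, volc, max_lag=24):
        """Try every (MEI lag, volcanic lag) pair from 0 to max_lag months and keep
        the pair whose least-squares fit has the smallest residual sum of squares."""
        def lagged(x, k):
            return np.r_[np.full(k, x[0]), x[:-k]] if k > 0 else x

        best_pair, best_rss = None, np.inf
        for k_mei, k_volc in product(range(max_lag + 1), repeat=2):
            X = np.column_stack([
                np.ones_like(t), t - t.mean(),             # intercept and trend
                lagged(mei, k_mei), lagged(volc, k_volc),  # lagged exogenous factors
                np.sin(2*np.pi*t), np.cos(2*np.pi*t),      # annual cycle, 1st harmonic
                np.sin(4*np.pi*t), np.cos(4*np.pi*t),      # 2nd harmonic
            ])
            beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
            rss = float(np.sum((temp - X @ beta) ** 2))
            if rss < best_rss:
                best_pair, best_rss = (k_mei, k_volc), rss
        return best_pair
    ```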

  39. Thanks Tamino! Minor note: Bob Tisdale just posted his umm “version” of this type of analysis…

    Can Most Of The Rise In The Satellite-Era Surface Temperatures Be Explained Without Anthropogenic Greenhouse Gases?

    Talk about butchering a methodology!

    • Why does he show a satellite graph ending in 2005? Why does he exclude the poles? And that’s before I’ve read beyond the first sentence!

      What does “the linear effects of ENSO” even mean?

      Oh wow! If you limit the globe to the tropics – the part that is most affected by ENSO – then adjusting for ENSO causes artifacts corresponding to changes in the ENSO index. Wow! Someone give the guy a cigar!

      And then the rest of the post seems to be a long, drawn-out, repetitive affair intended to distract from the obvious fact that subtracting one temperature index from another will remove the global warming trend that they both have in common.

      DUH!

      I would post this at the WTF place, but it would be a waste of time. Anyone who is deceived by this isn’t looking for the truth.

    • I like the way Bob keeps subtracting off smaller and smaller regional areas until he gets the “zero” trend he wants. IIRC, he also looks only at 60S-60N. Rather leaves off a lot of area.

  40. Has anyone calculated what the overall 2010 UAH anomaly would have been if the dataset had not been adjusted (version 5.3) in the spring of 2010?

  41. Esop | January 11, 2011 at 4:44 pm:
    “Has anyone calculated what the overall 2010 UAH anomaly would have been if the dataset had not been adjusted (version 5.3) in the spring of 2010?”

    According to what Spencer said at the time, they changed the distribution of the monthly anomalies but the annual anomaly remained the same; of course, the method and data weren’t documented!

  42. I’m curious to see this type of analysis applied to the stratospheric data from the MSU/AMSU and SSU instruments. I’ve seen it said on more than one occasion that stratospheric cooling has “stopped” and that the cooling before that was “step changes” after volcanic eruptions. I did some half-assed figuring with the TLS channel, and when you remove the volcanic eruptions and smooth what’s left (11 years, to remove the solar cycle), temperatures drop like a rock until the mid ’90s and then cool much more gradually after that. The MSU/AMSU instruments measure the lower stratosphere, whose cooling has a larger CFC component, but the SSU data go to higher altitudes, and presumably have a larger GHG component, but also a much larger solar component. It would be interesting to remove these influences and see what remains.

    TLS is available from numerous sources, but I think Keith Shine is the go-to guy on SSU:
    http://www.met.reading.ac.uk/~radiation/newref.html

    • cce,
      I believe both ozone and CFCs are more important for the lower stratosphere. The ghg signature dominates at higher altitudes. I think Chris Colose might have looked at this stuff at one point.

      • That’s my understanding as well, although I’m not sure if GHGs dominate over CFCs at high altitudes (just that their influence is greater than at lower altitudes). Somewhere I heard it was about 50:50 at 50 km. In any case, my point is that volcanic and solar influences obfuscate the cooling signal, and it would be interesting to see the data from various altitudes “cleaned up.”

  43. I believe the challenge for using a solar forcing variable is that the solar forcing is quite correlated with the volcanic forcing for this short record.

  44. Very interesting exercise. The trend becomes much clearer.

    The low peak around 1982 caught my attention. Is there any other forcing that could cause it? Or is it just some spurious effect of the math?

  45. Savante,

    El Chichon erupted in 1982.

    • Hi BPL,

      I think your answer was to me instead of Savante… if so, shouldn’t such volcanic eruptions have been mathematically excluded from that graph? That’s why I speculate if that wasn’t just a glitch of the method used, maybe underrating El Chichon’s effect when filtering it out…

      • It might also be related to the estimated lags for ENSO and large volcanic eruptions (or might have some unrelated source)

        One of the largest El Ninos (the largest by some indexes) of the last century began soon after the El Chichon eruption.

        The two acted to “cancel” one another somewhat in the global temperature.

        If there is an error in the estimate of the lags for either of the two forcings, that would presumably impact the residual — ie, lead to less cancellation of the two effects.

        A paper by Lean et al gives lags of 4 and 6 months for ENSO and volcanic eruptions, respectively (for CRU temperature data set)

        The 6-month volcanic lag used by Lean is slightly less than the 9 (8) months used by Tamino, and I’m not sure what Tamino used for an ENSO lag (Lean used the CRU data set, which might impact the lags). Lean also considers the solar effect on long-term warming, which she found to be “negligible” over the past quarter century and about 10% of the total for the last century. But the overall findings are very similar:

        None of the natural processes can account for the overall warming trend in global surface temperatures. In the 100 years from 1905 to 2005, the temperature trends produced by all three natural influences are at least an order of magnitude smaller than the observed surface temperature trend reported by IPCC [2007]. According to this analysis, solar forcing contributed negligible long-term warming in the past 25 years and 10% of the warming in the past 100 years, not 69% as claimed by Scafetta and West [2008] (who assumed larger solar irradiance changes and enhanced climate response on longer time scales).

        Above is from “How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006”, by Judith L. Lean and David H. Rind. Received 2 June 2008; revised 1 August 2008; accepted 8 August 2008; published 16 September 2008.

  46. Tamino,

    You note the imperfection of this method of subtracting the impact of El Nino from the temperature, and suggest that there is no problem with continuing with the analysis rather than waiting for perfection. I accept that basic premise, but what if this sort of analysis isn’t just imperfect but based on a completely false assumption?

    Your analysis seems to rely on a fairly simple relationship between the ENSO index and the effect of El Nino and La Nina on global temperature. But there seems to be some published science suggesting that such a simple relationship doesn’t exist.

    Click to access 2008GRL-2008GL035287.pdf

    The paper above suggests the effects of El Nino are different depending on the phase of the PDO; there are more papers cited in the introduction that seem to come to the same conclusion. Given that the PDO has recently moved from its positive to its negative phase, the specific conclusions you come to here could be highly influenced by this insight.

    I realise other scientists do similar analyses to what you have done here, and I accept the idea that we have to work with the best available tools, but we also need to avoid analyses that lead us down the wrong avenue. Have you got any thoughts on how well the ENSO index captures the full effect of ENSO on the climate?

    [Response: Of course no single index can capture the full effect of ENSO on climate. But the reality of the simple relationship used here isn’t assumed, it’s established.

    It’s fine to seek a deeper understanding of the relationship between different modes of variability of earth’s climate. But it’s folly to suggest that just because there’s more complexity than exists in a simple model, the simple model is somehow “the wrong avenue.” After all, there’s a hell of a lot we don’t understand about the mechanism and details of the impact of tobacco on human health — but to use that as an excuse to suggest that maybe smoking doesn’t cause cancer (which is a common tobacco-denialist tactic), is both foolish and reprehensible.]

    • The ENSO index is a tool for scientists to use. A tool that can be used for many different applications. It seems like a good tool for making short term predictions of weather. The question is just how good that tool is when applied to the methodology used here. I don’t see that it’s folly to contemplate that.

      The Wang paper suggests that the ENSO index is too simple to capture the effects of ENSO on some aspects of the climate of East Asia. The introduction also mentions other studies that suggest a similar relationship in North America and Australia.

      It strikes me that your methodology relies on the relationship between the ENSO index and climate being linear. These studies seem to cast doubt on that. As I pointed out, given that the PDO has been in the process of turning to a negative phase in the past few years, it would seem particularly important in a study such as yours to get this right.

      This Wang paper doesn’t strike me as folly, and a scientist contemplating the limitations of his work doesn’t strike me as folly either. Sorry I’ve repeated many of the points I made in the first post, but you seem to have preferred to talk about tobacco rather than deal with the specifics raised by these publications. BTW, I’m not denying that ENSO has an effect on global temperature; I’m just curious about the strength of the simple linear relationship. It would be interesting to see just how the relationship has been established.

      Thanks

      [Response: Nobody here (or elsewhere that I can tell) claims that the effect of el Nino is linear. The purpose here is to remove that part of its influence which *is* linear — and nobody in his right mind will claim that the linear approximation is useless.

      It seems to me that either you’re making the classic mistake to let the “perfect” be the enemy of the “good,” or you’re just trying to insert doubt about the global warming signal — which frankly is undeniable, with or without accounting for any part of the el Nino influence on short-term variation.]

      • OK, I’ll simplify this. Your analysis relies on the fact that an el Nino in the positive phase of PDO is similar to an el Nino event in the negative phase if its features in an ENSO index are similar. Or at least similar enough to make little difference to the results. The Wang paper suggests that at the regional level this may not be true. In fact their words are “The contrast in ENSO’s influence between the two phases of the PDO is quite remarkable”. “Quite remarkable” strikes me as strong words.

        [edit]

        [Response: OK I’ll simplify this for you.

        No, it is not true that my analysis “relies on the fact that an el Nino in the positive phase of PDO is similar to an el Nino event in the negative phase,” any more than linear regression relies on the existence of a trend — it tests for its presence. That the linear approximation of the impact of el Nino is a valid approximation is not an opinion or an assumption, it’s a result of the regression itself — this analysis estimates what the linear relationship is (and it turns out to be overwhelmingly significant) and subtracts what it finds. Nor am I the first to have used this approximation (by a long shot). The possible “quite remarkable” difference between different phases of PDO is an interesting topic, but in no way invalidates the usefulness of this approximation. Your repeated implication that the result is in doubt is utter nonsense, and your repeated assertion that I’ve made some kind of assumption, or that an imperfect approximation is an invalid analysis, is also nonsense.]

      • HR,
        Again, you are looking at modeling as a tool to get numbers rather than a tool to elucidate trends. Yes, I might have trepidation about applying a simple model to predict what would happen in a given locality. That is not what is being done here.
        Instead, we are looking at the global effect of ENSO as it affects GLOBAL temperature. If the model is adequate, it ought to make the true trend clearer. That is, it ought to make the overall warming trend more consistent if greenhouse gasses are a significant forcing, or it ought to decrease the trend if they are not significant. If the model is not adequate, what will likely happen is that the series will simply become noisier, since the various series are independent. When you add independent trends, you are very unlikely to get a spurious trend. This is especially true for a relatively simple model. The more complicated you make the model, the more likely you are to obscure the real science.

        What this indicates is 1) the model is likely good enough for global purposes; and 2) the trend is much more likely to be the result of greenhouse forcing than it is to be due to spurious effects of ENSO and volcanic trends.

        Note that we do things that are similar to this in failure analysis when you consider a system subjected to multiple stresses.

  47. HR,
    First, there are good epistemological reasons for keeping the model as simple as possible. A simple model tends to either succeed if it is on the right track or fail miserably if it is not. Moreover, in a simple model, it is fairly easy to identify the factors that are the most important and in what combination.

    More philosophically, I think it is important to understand that the purpose of a model is not to get answers but rather to gain insight into the phenomena. If I could drive home one single point about the use of models this would be it.

    I think that Tamino’s simple model succeeds admirably in that it shows that as you add other natural forcings, the greenhouse trend becomes clearer rather than more obscured.

    • David B. Benson

      I agree with Ray Ladbury.

      “The purpose of computing is insight, not numbers.”

    • Ray

      Simple is good.

      I don’t deny ENSO has a place in climate science; it seems to do a good job in making short-term, probabilistic climate predictions. But as I stated above, how good is it as a tool for removing the long-term effects of ENSO? I’m interested in the answer to that.

      [edit]

      [Response: What long term effects? I’m not claiming there aren’t any, but I won’t brook claims that there are (or what their nature might be) until I see some evidence. Your whole line of argument is a red herring.]

  48. cce, strat cooling is driven by atmospheric carbon contamination AND ozone depletion

  49. Haven’t I said that twice now?

  50. Interestingly, applying Tamino’s correction re-establishes the 1965-1996 linear trend. Global surface temperatures have been well above this trend for the past decade.

  51. Having read this article:
    http://sites.google.com/site/refsdefred/warming-actors
    I was very interested to see your take on it.

    But I came away feeling slightly unsatisfied. It would be nice to include solar activity and use a longer period. But a longer period means dealing with the fact that the linear trend starts in 1975. So then you have to include an aerosol term and/or (to satisfy the skeptics) a PDO term. Messy.

    But the big problem is how do we test if we’ve done something worthwhile? ‘It looks straighter’ isn’t very scientific. This looks like a job for cross-validation to me. Like this:

    1. Leave out 10 years of data. Fit all your weights, lags, and trend to the rest of the data (e.g. 1975-2000). Then use those parameters, along with the known indices for 2000-2010, to reconstruct the rest of the temperature series. Calculate an error term (e.g. rmsd).

    2. Repeat omitting different 10-year slices, and combine the residuals.

    3. Now do the same thing for different sets of terms:
    Try just the linear trend. If the rmsd *based on the omitted data only* is lower with your model than with the trend only, then your model has skill. (Although if that skill comes from including temperature data of a sort, it’s still cheating.)
    Try including different combinations of time series in your model. The cross-validated residual gives you an impartial measure of how much you are gaining, but at the same time it will penalise you if you overfit.

    I’m going to give this a try, but I don’t get much play time, so don’t hold your breath.
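
    As a starting point, here is a minimal sketch of the leave-a-decade-out scheme described above; it takes a ready-made design matrix and does not re-optimize the lags within each fold, which a full version should.

    ```python
    import numpy as np

    def cv_rmse(t, temp, X, block_years=10):
        """Leave-one-block-out cross-validation: drop block_years of data at a time,
        fit the regression to the remainder, predict the held-out block, and pool
        the out-of-sample errors into a single RMSE.  X is the design matrix
        (e.g. trend only, or trend + MEI + volcanic + annual cycle)."""
        errors = []
        for start in np.arange(t.min(), t.max(), block_years):
            held = (t >= start) & (t < start + block_years)     # the omitted block
            beta, *_ = np.linalg.lstsq(X[~held], temp[~held], rcond=None)
            errors.append(temp[held] - X[held] @ beta)          # out-of-sample residuals
        errors = np.concatenate(errors)
        return np.sqrt(np.mean(errors ** 2))

    # A model "has skill" if its cross-validated RMSE beats the trend-only model:
    #   cv_rmse(t, temp, X_full) < cv_rmse(t, temp, X_trend_only)
    ```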