On Thin Ice

When it comes to sea ice, especially in the arctic, the stooges have been very busy.

It wasn’t very long ago Shemp and Moe told us that Arctic sea ice was “about to hit normal.” Curly got in on the act too. Meanwhile Shemp desperately clings to the belief that the disappearance of arctic sea ice is “more of a marketing event than a climatological event,” and seems to think he can estimate sea ice volume better than the pros. I’m skeptical.


The pros say this:

I suspect they got it right.

The PIOMAS website, in addition to providing this lovely graph, also gives access to some interesting data sets, including sea ice draft as measured by submarines. Draft is the thickness of the ice below the waterline, which is about 93% of the total ice thickness. These particular data only go up to the year 2000, but they still tell an interesting story about arctic sea ice. They were used in
Rothrock et al. 2008 (The decline in arctic sea-ice thickness: separating the spatial, annual, and interannual variability in a quarter century of submarine data, J. Geophys. Res., VOL. 113, C05003, doi:10.1029/2007JC004252) to study how arctic sea ice has varied over both geographic location and time.

Each data record is the average sea ice draft over a short track (“short” meaning usually about 50 km) as measured by submarine cruises. The geographical locations of the data values are here (the north pole is at the center of the concentric circles, the prime meridian points toward the right, the concentric circles are 5 degrees latitude apart):

I studied the data prior to reading Rothrock et al.’s analysis, and my first instinct was to model the ice thickness as a function of time and of distance from the pole. I modeled the time dependence as a cubic polynomial and the polar-distance dependence as a quartic polynomial, and computed a periodogram allowing for those influences in order to determine the nature of the annual cycle. Although I knew this was only a rough analysis, I expected a reasonable approximation. It indicated that ice tended to be thicker at the pole than far from it; that over the time period covered by the data, average ice draft had decreased by about 1 meter; and that the annual cycle had an amplitude of about 1 meter as well, with no sign of higher harmonics in a Fourier analysis.
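That first-pass model can be sketched in a few lines. Here is a minimal version on synthetic stand-in data (the numbers are my own invention, not the actual PIOMAS submarine files), using first-order Fourier terms in place of a full periodogram:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the submarine draft records (NOT the real data):
# decimal year of each track, and distance from the pole in degrees.
n = 2000
t = rng.uniform(1975, 2000, n)
d = rng.uniform(0, 18, n)
draft = (3.5 - 0.05 * (t - 1975)         # slow decline over time
         - 0.02 * d                      # thicker near the pole
         + 0.5 * np.sin(2 * np.pi * t)   # annual cycle, ~0.5 m amplitude
         + rng.normal(0, 0.3, n))        # measurement noise

# Design matrix: cubic in time, quartic in pole distance,
# first-order Fourier terms for the annual cycle.
tc = t - t.mean()
X = np.column_stack([np.ones(n),
                     tc, tc**2, tc**3,
                     d, d**2, d**3, d**4,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
beta = np.linalg.lstsq(X, draft, rcond=None)[0]

# Amplitude of the fitted annual cycle (recovers the ~0.5 m built in above).
amp = np.hypot(beta[8], beta[9])
```

The real analysis has complications this sketch ignores (tracks are not independent points, and the spatial field isn’t radially symmetric), which is one reason Rothrock et al. used a full x-y polynomial rather than distance from the pole.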

Rothrock et al. did a similar analysis. To model the spatial variation, they defined an x-y grid for the geographic area and used a 5th-degree (quintic) polynomial in x and y. For the time dependence they used a cubic polynomial and a 1st-order Fourier series for the annual cycle. They concluded that the annual cycle had an amplitude of about 1.06 m, and the average thickness declined from 1980 to 2000 by about 1.13 m. Of course this is ice draft, so the thickness is a bit larger — the annual cycle in thickness is about 1.12 m and the trend decline from 1980 to 2000 about 1.25 m.

I reproduced their results, with an interesting thought in mind. The model enables us to model the spatial and temporal variations separately, and even allows us to separate the time trend from the seasonal pattern. So I took the model for the spatial variation + seasonal pattern only, subtracted that from the data, to generate residuals which would still contain the time trend. This enabled me to compare those residuals to the time-trend part of the Rothrock model. Not only would this test the match between the actual time-trend variations (without the confounding influence of spatial variations or seasonal changes) and the model, it might reveal departures of the actual time-trend pattern from the modeled pattern.
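In code, the trick amounts to evaluating only the fitted spatial and seasonal columns and subtracting those, so the residuals keep the time trend. A sketch on synthetic numbers (again my own invention, not the actual data or the Rothrock et al. code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1500
t = rng.uniform(1975, 2000, n)
x = rng.normal(0, 1, n)   # x-y position on a polar grid (pole at 0, 0)
y = rng.normal(0, 1, n)
draft = (3.0 - 0.05 * (t - 1987.5)        # the trend we want to isolate
         - 0.1 * x**2 - 0.1 * y**2        # spatial variation
         + 0.5 * np.sin(2 * np.pi * t)    # seasonal cycle
         + rng.normal(0, 0.2, n))

# Fit everything at once: cubic trend + spatial + seasonal terms.
tc = t - t.mean()
spatial = np.column_stack([x, y, x**2, x * y, y**2])
seasonal = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
X = np.column_stack([np.ones(n), tc, tc**2, tc**3, spatial, seasonal])
beta = np.linalg.lstsq(X, draft, rcond=None)[0]

# Subtract ONLY the fitted spatial + seasonal parts; the residuals
# still carry the time trend and can be compared to the trend model.
residual = draft - spatial @ beta[4:9] - seasonal @ beta[9:11]
slope = np.polyfit(t, residual, 1)[0]   # recovers roughly -0.05 m/yr
```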

Here’s the result, i.e., the data with the spatial and seasonal patterns removed. I’ve also superimposed 1-year averages (in blue) and the cubic time-trend model (in red):

Just for a clearer view, here’s a close-up on the 1-year averages and cubic model:

These values should reflect the ice thickness at position x = y = 0 (the pole), at their average values throughout the year, so the remaining variation should be mainly the time trend. As you can see, the cubic polynomial model (red line) gives a reasonable approximation to these variations but doesn’t tell the whole story. For one thing, there’s more variation during the 1975-1985 period than the model indicates. For another thing, the model is “leveling off” in 2000 but the data aren’t — sea ice thickness is still declining.

Another way to eliminate much of the spatial variation is to look at only the data very near the pole. Here’s the result of removing the seasonal pattern from the data at locations north of latitude 85 deg.N:

Here is a close-up on the 1-year averages and the cubic trend model:

Essentially we have the same behavior. Ice draft (and therefore thickness) has declined quite a bit, there’s more variation in the 1975-1985 period than the cubic trend model indicates, and the model indicates a leveling off about 2000 but the data don’t; sea ice thickness is still declining apace.

I can almost hear the stooges claiming that those data only go to 2000, and sea ice has probably thickened since then. Lamentably, there’s very little submarine draft measurement data available after 2000. But there is laser altimetry data from IceSat which enables study of arctic sea ice thickness during more recent years. That’s the basis of Kwok and Rothrock 2009, Decline in Arctic sea ice thickness from submarine and ICESat records: 1958–2008, GRL, VOL. 36, L15501, doi:10.1029/2009GL039035. They conclude that arctic sea ice thickness has continued to decline this past decade.

If you combine all the available information about ice thickness and ice area, you could even come up with an estimate of arctic sea ice volume. That’s exactly what the folks at the University of Washington’s Polar Science Center have done; their results are shown in the first graph of this post.

188 responses to “On Thin Ice”

  1. carrot eater

    but..but.. Goddard sat there and counted pixels!

    I must say, the end issues with those higher order polynomial fits are rather less than satisfying. Good that you noted that, though.

    • DeNihilist

      Mr. Eater of Carrots, are my eyes deceiving me, or are you and PDA actually, almost having a polite discussion/debate with some regulars @ wuwt?

      They may not believe what you’re saying, but it almost appears that some of them are actually trying to be logical.

      Is the end really that close?

      :)

  2. DeNihilist

    Thanx Tamino.

    Yes it is all about volume! This reminds me of the test you can run on young children with two glasses: one tall and thin, one short and stout. Fill a beaker to a marked line and pour it into one glass; repeat with the other. A substantial percentage of children will answer the question, “which glass has more water?” with “the tall glass.”

    Seems some don’t get beyond this level of understanding.

    • Horatio Algeranon

      You are probably aware of this (are you a teacher, by chance?), but for any who might not be, that experiment is due to the great child experimental psychologist Jean Piaget:

      Piaget noted that children younger than about 7 years had problems with this even when they were shown at the outset that the volumes of liquid were the same: first being shown two equal volumes of liquid in two identical containers, and next being shown the liquid in each container poured into the two different containers.

      The central issue here is “conservation” and children who “failed” the above test also had problems with conservation of other things as well: number, matter, length, area.

      …and, as we all know (or at least most of us know), conservation is a key concept for all of science.

      It is actually very telling that the thing that makes children (who are still in what Piaget called the “Preoperational stage” of cognitive development, usually ages 2-7) fail that “conservation of liquid” test is that they tend to key in on only one thing at a time (e.g., the height of the container).

      In fact, they appear to be incapable of considering multiple factors simultaneously and integrating all the information.

      Essentially, laser-like focus on one single factor to the exclusion of all others.

      Does that ring any bells?

      • Don’t tell me you’ve been following the “Tim Curtin thread now a live show” thread at Deltoid…

      • DeNihilist

        “Essentially, laser-like focus on one single factor to the exclusion of all others.”

        Uh, Horatio, sounds a bit like engineers don’t it?

  3. We like logistic sigmoid curves better than polynomials, for the time fit.
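For anyone who wants to try the sigmoid alternative: the two nonlinear parameters (rate and midpoint) can be grid-searched while the floor and amplitude are solved linearly. A minimal sketch on invented thickness numbers, not any real series:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(1975, 2010, 71)
true = 1.0 + 2.5 / (1 + np.exp(0.3 * (t - 1998)))  # falls from ~3.5 to ~1.0
thickness = true + rng.normal(0, 0.1, t.size)

# Grid-search the nonlinear parameters (rate, midpoint); for each pair
# the floor and amplitude enter linearly, so solve them by least squares.
best_rss, best_fit = np.inf, None
for rate in np.linspace(0.05, 1.0, 40):
    for t0 in np.linspace(1980, 2005, 51):
        s = 1 / (1 + np.exp(rate * (t - t0)))
        A = np.column_stack([np.ones_like(t), s])
        coef = np.linalg.lstsq(A, thickness, rcond=None)[0]
        rss = np.sum((A @ coef - thickness) ** 2)
        if rss < best_rss:
            best_rss, best_fit = rss, (coef[0], coef[1], rate, t0)

floor, amp, rate, t0 = best_fit  # should land near (1.0, 2.5, 0.3, 1998)
```

Unlike a polynomial, the fitted curve flattens toward its asymptotes at both ends instead of shooting off, which is exactly the end-point behaviour complained about above.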

  4. When you subtracted the seasonal and geographic signal, what effect did this have on the mean? That is, what is the meaning of a draft of (say) 3 metres, on your chart? 3 metres where, and at what time of year?

    [Response: At the pole (at coordinates x = 0 = y), the mean of the annual cycle.]

  5. What I’m finding really frustrating is that, while we have some very interesting stuff happening with Arctic ice, many blogs seem to be talking about what other people are talking about. This is probably the nature of the blogosphere. But what is really happening, and where is the discussion of what is really happening? (Do I need to find another interest?)

    This is just a general observation, maybe I’m just looking in the wrong places or expecting too much of the blogosphere. Pardon me if I’m slightly off topic. Nice analysis of the submarine tracks tho!

  6. The equivalent of the glasses trips up everyone.

    Take two identical right triangles with one leg ~3 to 4 times longer than the other. Place both so that the right angle is at the origin, one with the longer leg up along the y axis, and the other with the longer leg along the x axis. Ask anyone whether there is more area covered in the initial “pulse” or in the long tail.

    There are some real world examples of where this has fooled some very sophisticated people.

  7. Why is there a relative reduction in publicly available submarine sea ice thickness data after 2000?

  8. carrot eater

    Goddard had outdone himself, but to his readers’ credit, somebody quickly caught the silliness, and to his credit, he acknowledged it and retracted the post.

    Arctic Ice Graphing Lesson Increasing By 50,000 km2 Per Year

    So he is spared snark on this occasion.

    • We consider his initial post stupid enough to earn him a spanking despite the withdrawal. We hope this demonstration will enlighten even those who still don’t quite understand why he rescinded it.

      And with respect to volume

    • TrueSceptic

      Even a non-stats person like me could see straight away that Goddard had initially made a very silly error (I only had to imagine a single full sine wave cycle starting at zero, like this or this) but can someone tell me if the following is correct, and if not, why not?

      Assumption: we can remove a periodic oscillation by applying a moving average over that same period (12 months for the (ant)arctic ice figures) and then fit our trend.

      So, we start with this, similar to Goddard’s folly, and then apply 12-month averaging like this. Am I being simple-minded with this or completely wrong? Does it matter if the averaging is centred, leading, or trailing?
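The assumption is correct: averaging over exactly one period annihilates any oscillation at that period (or any whole fraction of it), because each full cycle sums to zero. Centred vs. leading vs. trailing changes only where the smoothed value is plotted (and hence how it lines up with the trend), not what gets removed. A quick numerical check with made-up monthly numbers:

```python
import numpy as np

months = np.arange(240)                    # 20 years of monthly data
trend = 0.01 * months                      # slow linear trend
cycle = np.sin(2 * np.pi * months / 12)    # pure 12-month oscillation
series = trend + cycle

# Trailing 12-month moving average:
smooth = np.convolve(series, np.ones(12) / 12, mode='valid')

# The oscillation vanishes; what's left is the trend, offset by half a
# window because the average is trailing rather than centred.
expected_trend = 0.01 * (np.arange(smooth.size) + 5.5)
max_resid = np.max(np.abs(smooth - expected_trend))
```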

  9. Watch the statistical skill of Steven Goddard demonstrate that arctic sea ice is INCREASING by 50,000 km² per year!

    Arctic Ice Graphing Lesson Increasing By 50,000 km2 Per Year

    Things don’t get more bizarre than this!

  10. Anthony throws Steven under the bus:

    Arctic Ice Graphing Lesson Increasing By 50,000 km2 Per Year

    Arctic Ice Increasing By 50,000 km2 Per Year
    Posted on July 2, 2010 by charles the moderator

    By Steven Goddard

    [see important addendum added to end of article ~ ctm]

    [Note: The title and conclusion are wrong due to bias in the start/end point of the graph, the mistake was noted by Steven immediately after publication, and listed below as an addendum. I had never seen the article until after the correction was applied due to time difference in AU. My apologies to readers. I’ll leave it up as an example of what not to do when graphing trends, to illustrate that trends are very often slaves to endpoints. – Anthony]

    BTW, “the mistake was noted by Steven immediately after publication,” is Watts-speak for “Ian H pointed it out in comment #3 within maybe 15 minutes after the posting.”

    • Didactylos

      “Unskilled and Unaware of It” could have been written for “Steven Goddard”.

      And we are still left to wonder what his real name is. I feel very sorry for the dozens of people who are really called Steven Goddard. What an embarrassment!

      Looking at the article, I am amazed (but not surprised) that the sycophants applaud “Steven” for correcting the post, while making silly and unfounded comments about real climate scientists failing to do so….. when reality, as we all well know, paints the opposite view. The kicker? Despite “Steven’s” entire theory being thoroughly broken, he still sticks to his conclusion! I think he was dropped on his head as a baby.

      • Here’s another interesting one:
        Watts just put up a new article titled premature chill in the arctic

        Premature chill in the Arctic?

        If you go to the link for the map ( http://ocean.dmi.dk/arctic/satellite/index.uk.php ) and then click anomalies instead of absolute temperatures, suddenly the “chill” becomes warmth.

        [Response: This is a common practice for Watts — especially regarding the arctic.]

      • suddenly the “chill” becomes warmth.

        I think you are misreading the graph and what Watts posted. He is referring to the area immediately surrounding the ice pack. And that area *is* below normal.

      • KenM, how about that being the effect of meltwater… it’s fresh and freezes more easily in winter too. Would HFITBOAW not know that?

      • To add, KenM, this is what’s meant by Robert… the anomaly!

        http://polar.ncep.noaa.gov/sst/ophi/color_anomaly_NPS_ophi0.png

      • Sekerob, First of all, I condemn you for forcing me to keep reading those posts. Secondly, I’m afraid I don’t know who/what HFITBOAW is, but Watts does say the temperature dip is due to melt:
        “Much of that has to do with meltwater”

      • Wow. That’s really his point? That the narrow strip of ocean next to a melting ice pack shows a negative surface temperature anomaly? Looking at the map, I’d hazard a guess that the majority of the temperature anomalies in the arctic ocean are positive, but I’ll wait until Goddard’s crack team of pixel counters gets on the case before making a definitive statement.

      • I don’t know what his point is.
        I agree – if ice melt is faster-than-normal, I’d expect negative temp. anomalies in the water immediately surrounding the ice pack. If you look at the anomaly map from ten days ago compared to today, you can see the anomaly turning more negative.

        OTOH, if ice melt is faster than normal, I’d also expect the air temperature to be positively anomalous as that energy is released. This does not appear to be the case today, so that’s a little odd I guess.

    • Don’t worry, he’ll be back. He’s done these embarrassing things before (well, actually, just about anything he posts at WUWT is embarrassing to himself). He always comes back.

  11. > Why is there a relative reduction in publicly
    > available submarine sea ice thickness data after
    > 2000?

    The various nuclear navies prefer to take a while before disclosing where their submarines have been, to avoid helping others who may have records they could correlate to improve detection; likely they’ve also been making fewer cruises under the ice in recent years.

    Remember where this started:
    http://www.nsf.gov/news/news_images.jsp?cntn_id=102863&org=NSF

  12. Dear Tamino
    Thanks for having this blog. I am learning a lot about statistical pitfalls by reading it.
    Do you have older data for ice thickness (like the 1950’s and 1960’s)?
    How do they fit into the graphs?

    • MS,

      I have some images here that show sea ice thickness during the 50s and 60s compared to recent years.

      From:

      Kwok, R., & Rothrock, D.A. (2009). Decline in arctic sea ice thickness from submarine and ICESat records: 1958 – 2008, Geophys. Res. Lett., 36, L15501, doi:10.1029/2009GL039035.

      Rothrock, D.A., Yu, Y., and Maykut, G.A. (1999). Thinning of the arctic sea-ice cover. Geophysical Research Letters, 26(23): 3469-3472.

  13. AndrewAdams

    Maybe it’s just me but I can’t help thinking that the amount of attention devoted to the subject of arctic sea ice levels is rather out of proportion to its actual significance.
    I mean I understand that the decrease in sea ice is a physical indication of increasing temperatures which counters the line from the “skeptics” that warming has stopped, but it is only one of many signs of increasing temperatures, and it won’t have the same dire physical consequences as, say, the melting of the Greenland and Antarctic ice sheets.
    Is it because the “skeptics” insist on pointing out the supposed post-2007 recovery in sea ice levels, or is there something I am missing?

    [Response: Perhaps it’s because the decline of arctic sea ice is such an unambiguous sign, that denialists have made it their focus in a (vain) attempt to discredit what any idiot can see. (Well, maybe not *any* idiot).]

    • AndrewAdams,

      The melting Arctic also has huge consequences: permafrost melting (adding more methane and CO2 in a feedback loop), changing weather in the Northern hemisphere (probably more moisture in winter), and possible changes in thermohaline circulation. And a warmer Arctic accelerates Greenland melt.

      • Even at the loss levels we are experiencing now, there is warming feedback from albedo and water vapor effects–this plays into some of the points fred just made.

    • It’s fun to watch, like a horse race in extremely slow motion. Will the NW passage and NE passage both be open once again this year? Etc etc.

      As far as the attention being paid specifically at WUWT, Goddard and Watts both proclaimed full recovery this last spring when the extent level came close to the 1979-2000 average. Of course the rapid melt and the fact that the final minimum will be one of the lowest on record leaves them with egg on their face, and Goddard in particular has been left grasping at straws in his efforts to convince the loyal readership of WUWT that they were actually right a few months ago.

  14. andrew adams

    Thanks for the replies – obviously I underestimated the possible consequences of the melting sea-ice.
    It does seem though, (and I think this is what I was trying to get at) that the issue has become a kind of totem for the “skeptics” – as if somehow proving that the ice is not diminishing will make the whole argument for AGW fall apart.
    I guess that as Tamino suggests it’s because the extent of the melting and its implications for their arguments are so plain they have to try to obfuscate.

    • The others have covered the main impacts, but rather undersold the direct impact of sea ice loss on NH climate. The freeze-up of large areas of ocean in autumn and early winter affects NH weather patterns in two ways. The first is the release of large amounts of heat to the atmosphere, which winds then move onto the adjacent land. This “pulse” of heat also affects the patterns of weather in the whole NH via teleconnections, and may be responsible for the cold spells and snow of recent NH winters.
      The second effect is the release of extra moisture to the atmosphere, increasing precipitation around the Arctic. That means snow…

  15. The NSIDC (Uni Colorado) monthlies are in… looks like June was lowest on both the Extent and Area front.

    Bordering snow, particularly north of Canada (the Ellesmere/Baffin isles), is also missing substantial acreage of cover… not helping a recovery expectation, I’d say. Looking at the NP webcam: extensive melt pooling and interconnection… it seems to have rained too.

    It ain’t true they say!

  16. PS, for snow cover visit Rutgers daily anomaly chart.

  17. Mike Allen

    Here is some interesting ENVISAT related information about the Greenland and Antarctic ice sheets http://www.esa.int/SPECIALS/Living_Planet_Symposium_2010/SEMGEUOZVAG_1.html#subhead1

  18. Quote of the day from that thread:

    “The people who visit WUWT know more and better science than the average blog reader, and that includes readers of blogs like RealClimate, climate progress, tamino, and the rest of the alarmist echo chambers — which cater primarily to the relative handful of true believers in the debunked CO2=CAGW agenda.”

    Oh my…

  19. Philippe Chantreau

    Summary of the quote: our syence is more better.

  20. Excellent.

    Do we get a badge or something like the Scouts?

    I know we should be ashamed of being ignorant unlike those other people, but still. I like the idea.

  21. Timothy Chase

    Wishing you a happy 4th, Tamino.

    Not looking that good here in Seattle, at least not for tonight’s show. Started raining about mid-afternoon, and that means it’s fairly likely that we will be enjoying rain well into the evening.

    Since it is already raining, it is quite likely to keep raining after sunset, when the fireworks are scheduled to go off. We will probably still have fireworks, but will have to be especially careful with the digital cameras.

    Hopefully you will have better weather.

  22. Timothy Chase

    More related to the Arctic sea ice, have you taken a look at the Atlantic Ocean heat content anomaly?

    North Atlantic basin, 0-700 meters, January through March, 1955-2009 appears to be strongly non-linear — with an R-squared of 0.93 for a quadratic trendline, and the quadratic trendline for the Atlantic Ocean as a whole is only slightly noisier at 0.91.

    Please see:

    … the graph –
    http://docs.google.com/leaf?id=0B-57vongYoiAMTE5M2FlNGYtMjg3Zi00MmNjLWFhMGMtNTE5OWM2MTFjYzU4&sort=name&layout=list&num=50

    … and the data –

    NOAA / National Oceanographic Data Center / Global Ocean Heat Content / Basin Time Series
    http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/basin_data.html

    I was wondering if you can see anything more interesting in the data.
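Reproducing that kind of R-squared takes only a few lines of numpy; here on invented heat-content-like values rather than the actual NODC series:

```python
import numpy as np

rng = np.random.default_rng(3)
year = np.arange(1955, 2010)
# Invented anomaly series with a built-in quadratic rise plus noise:
ohc = 0.004 * (year - 1955) ** 2 - 2.0 + rng.normal(0, 1.0, year.size)

coef = np.polyfit(year, ohc, 2)      # quadratic trendline
fit = np.polyval(coef, year)
ss_res = np.sum((ohc - fit) ** 2)
ss_tot = np.sum((ohc - ohc.mean()) ** 2)
r2 = 1 - ss_res / ss_tot             # high R^2 for this synthetic input
```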

    • Mike Allen

      Have you tried a fit using a natural log curve?

      • Timothy Chase

        It may be that I don’t quite understand; I tend to be rather good at that sometimes. (“Zathras is very good at doings, not so good at understandings.”) However, a log fit would start off moving upward somewhat quickly, then slow down further and further as time progressed, although there would be no upper limit to how far it climbed.

        Something which might make more sense would be k*(1-e^(-gt)). That is what you would get from the ocean acquiring heat from a warmer object where the heat difference fell off in accordance with a law of exponential decay, and there would be an upper limit: the heat content at equilibrium. Although with business as usual we really don’t have an equilibrium, do we?
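A quick sketch of that k*(1-e^(-gt)) shape (parameter values invented) makes the contrast concrete: its rate of rise starts at k*g and only decelerates, so it cannot mimic an accelerating anomaly:

```python
import numpy as np

def relaxation(t, k, g):
    """Heat anomaly approaching the equilibrium value k with rate g."""
    return k * (1.0 - np.exp(-g * t))

t = np.linspace(0, 50, 201)
y = relaxation(t, 10.0, 0.05)

# The rise is fastest at the start (slope ~ k*g = 0.5) and decelerates
# thereafter, never exceeding the equilibrium value k.
early_rate = (y[1] - y[0]) / (t[1] - t[0])
late_rate = (y[-1] - y[-2]) / (t[-1] - t[-2])
```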

        In both these cases the rate at which the anomaly increases decreases with time: it would rise, but at a rate that decelerates. Here, by contrast, the heat anomaly is accelerating over time. There could of course be third and fourth terms, but those aren’t showing up yet, so the coefficients on those terms are negligible at this point.
        *
        In time I would actually expect a more leisurely rise. More or less linear, I would imagine — in step with the linear rise in global temperature. Of course in time the rise in global temperature won’t be quite so leisurely assuming business as usual — but that’s another matter, I think.

        At present perhaps less and less heat content is being lost to latent heat of melting as there is less and less ice to melt. Or what ice is left (after the thin layer from winter melts off) is further and further north and thus has less of a cooling, moderating effect that way.

        Regardless, there is a strong nonlinear component, similar to the global glacial mass balance and ice loss in both Greenland and Antarctica. All of these processes are speeding up over time.
        *
        Something else that at least to me appears interesting: both the amplitude and the “period” of oscillations about the quadratic trendline seem to be increasing over time. But that may just be me, or something that is due to certain climate oscillations affecting the heat content or what have you. Can’t really say because I haven’t the tools, knowledge or expertise. But I am curious.

  23. Whilst Pielke (Jr.?) was crowing a good month ago about the acute heat loss directly to space (how else, he explained, could the surface temps have dropped 1°C?), the SOI as of July 2 shows that half of that must have been beamed back from space again. It’s close to a neutral state.

  24. I have discovered David Barber. His talk at the recent Oslo Science Conference on the weakness of Arctic ice is here:

    http://video.hint.no/mmt201v10/osc/?vid=55

    Entertaining and scary.

  25. David B. Benson

    Regarding North Atlantic heat content in the upper 700 meters, do consider the Atlantic Multidecadal Oscillation (AMO) and its connection to the MOC.

    • Timothy Chase

      Sheesh! You’re right. And the AMO itself (first identified in 2001) would be defined in terms of the surface temperature once the linear trend that is assumed to be due to global warming is removed. Which basically assumes that the trend in surface temperatures in the AMO region due to global warming is itself linear.

      And then it gets complicated. AMO and Atlantic Ocean heat content aren’t the same thing but they would be strongly correlated. And at this point it would probably be good to bring in a climatologist.

  26. Tim,

    My (amateurish) research so far indicates that causality runs from the AMO to dT. It may be a sort of index of how fast heat flows from the oceans to the atmosphere every year.

    • Timothy Chase

      Understood — and I think that the emphasis that you put on the ocean and its role in global warming is well-placed and often largely absent from popular understanding. So I hope you don’t mind if I go into it in a little more depth…
      *
      The vast majority of the additional heat gets stored by the oceans: more than 20 times as much as is stored by land and the atmosphere. There is a graph for this at:

      Earth’s Total Heat Content Anomaly

      Is global warming still happening?
      http://www.skepticalscience.com/global-cooling.htm

      Moreover, when we speak of “global temperature” what we actually mean is the temperature of a thin slice of the climate system as a whole. On land this would be roughly a meter to two meters above the surface and in the ocean the first few centimeters.

      So what fluctuations we see in global temperature (as opposed to the overall trend) are largely the result of heat that is upwelling or downwelling from other layers, and primarily that which is upwelling from the oceans. And this fluctuates as the result of slow-moving currents and oscillations, such as the El Nino-Southern Oscillation. And what heat is transferred from the oceans to the atmosphere will take place primarily through the interface between the two: at the ocean’s surface.
      *
      The heat that gets transferred from the ocean to the atmosphere via land would be negligible, as would that which gets transferred via thermal radiation directly from lower layers of the ocean to the atmosphere without absorption by the first few centimeters of ocean. And the good majority of the heat that gets transferred from the ocean to the atmosphere will be through evaporation.

      So heat at the ocean’s surface is largely what drives global temperatures; if temperatures over land were neglected, this would be true essentially as a matter of identity. But since land temperature (or rather, the temperature of the atmosphere near the surface) and ocean surface temperature get weighted evenly, the heat content over land gets disproportionately represented. Yet the heat content over land comes largely from the ocean via evaporation, so even then the ocean is still at the heart of it.

      Ultimately what drives global warming is the radiation imbalance at the top of the atmosphere, since energy is neither created nor destroyed and the net rate at which heat enters the climate system is the net rate at which it is stored. But for us in the thin layer of atmosphere and ocean that gets counted in the average global temperature, what drives global warming is primarily the ocean.
      *
      However, the Atlantic Multidecadal Oscillation is measured by means of an index. In a quasistable climate system the AMO Index would be essentially a variation in temperature about a mean, being positive when the surface temperature is above the mean and negative when it is below that mean. But in a climate system that is undergoing a process of global warming the mean is no longer fixed, so we need to look at things a little more closely.

      We can get the monthly (raw) numbers for the index here:

      Atlantic multidecadal Oscillation Long Version (unsmoothed, based on Kaplan Sea Surface Temperature)
      http://www.esrl.noaa.gov/psd/data/correlation/amon.us.data

      … from:

      Climate Indices: Monthly Atmospheric and Ocean Time Series
      http://www.esrl.noaa.gov/psd/data/climateindices/list/

      … and we can get a smoothed graph of it here:

      Projecting the Risk of Future Climate Regime Shifts
      AMO Figure
      http://www.aoml.noaa.gov/phod/d2m_shift/amo_fig.php

      Now if we look more closely at the description below the graph, we see that in a changing climate the index for the Atlantic Multidecadal Oscillation is detrended:

      Upper panel: AMO index: the ten-year running mean of detrended Atlantic sea surface temperature anomaly (SSTA, °C) north of the equator. Lower panel: Correlation of the AMO index with gridded SSTA over the world ocean (all seasons). The thick contour is zero and thin contours denote the 95% significance level.

      … and it is my understanding that by “detrended” they mean removing the trend due to global warming — that is, the trend in the surface temperature of the ocean itself that is ultimately due to the radiation imbalance at the top of the atmosphere — where in the definition of the index this trend is assumed to be linear.

      And yet we know that over the course of this century the rate of global warming (as measured in terms of the thin layer of atmosphere and ocean that we are fixated on) is supposed to increase, perhaps doubling or even tripling over what it is currently. And the same is very likely to apply to the surface of the ocean. So in this sense the trend will not be linear.

      This actually constitutes a problem of sorts in how the AMO Index is defined, inasmuch as you can’t neatly separate the variation in surface temperature that is due to the oscillation from the variation that is due to the process of global warming. Here the process of “global warming” is understood as the process whereby heat is accumulated by the climate system due to the long-run radiation imbalance at the top of the atmosphere, rather than as the rise in temperature in the thin near-surface layer that gets incorporated into our definition of the average global temperature (both land and ocean) anomaly, which is, I believe, how you were reading me.
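Mechanically, the linear detrending described above is trivial; the catch is the assumption that what gets removed really is the warming signal. A sketch with invented numbers (a linear "warming" term plus a ~65-year oscillation):

```python
import numpy as np

rng = np.random.default_rng(4)
year = np.arange(1900, 2010)
warming = 0.008 * (year - 1900)                  # assumed secular trend
amo_true = 0.2 * np.sin(2 * np.pi * (year - 1900) / 65.0)
sst = warming + amo_true + rng.normal(0, 0.05, year.size)

# The standard AMO index: remove a straight-line fit from the SST series.
slope, intercept = np.polyfit(year, sst, 1)
amo_index = sst - (slope * year + intercept)
```

Here the removed trend really is linear, so the index tracks the true oscillation closely; if the warming term were accelerating instead, part of that acceleration would be left behind in the "oscillation".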

  27. Fielding Mellish

    Well, the polar bears and I can rest easier after having read about Joe Bastardi’s promise that “… you will see NEXT SUMMER has the highest amount of sea ice since the early part of last decade.”

    http://climateprogress.org/2010/07/06/joe-bastardi-worst-long-range-forecaster-accuweather-global-warming/#more-29232

    Whether he means by “amount of sea ice” area, volume, or calved icebergs, his promises have high potential to be a laughingstock in the coming years. The wild-eyed statement doesn’t even bear an asterisk footnoted as “adjusted for the El Nino cycle.” What kind of operation employs a Bastardi like that, or more appropriately, who listens to a goofy Bastardi like him for weather forecasts? Maybe it’s sweeps week for Web site unique-hit counts; I shan’t contribute my hit to his hogwash page count. As for the broad agreement with his promise in the weatherman biz, it’ll be interesting to see whether large commercial interests adjust their plans based on his claim.

    • The problem is nobody ever takes them up on their failure to deliver. Why aren’t people, or even the press, demanding to know what happened to the “recovery” over at WUWT?

    • What kind of operation employs a Bastardi like that?

      This kind.

      • Yeah, lharris, one of the worst bills introduced into the Senate ever. Sunk like a lead fishing weight. Accuweather’s evil. I hear their chief forecaster’s a real bastard…

  28. David B. Benson

    Timothy Chase // July 6, 2010 at 5:32 pm — Here is an application of (the linearly detrended) AMO:
    http://www.realclimate.org/index.php/archives/2010/03/unforced-variations-3/comment-page-12/#comment-168530

    Attempting to replace the linear detrending with a lnCO2 detrending actually doesn’t work as well as the ordinary AMO in the above link.

  29. Timothy Chase

    David B. Benson wrote:

    Timothy Chase // July 6, 2010 at 5:32 pm — Here is an application of (the linearly detrended) AMO:
    http://www.realclimate.org/index.php/archives/2010/03/unforced-variations-3/comment-page-12/#comment-168530

    It is an interesting approach for estimating the anomaly for the current decade. I haven’t had the time or concentration to look at it as closely as I would like yet, unfortunately — we’ve been having a heat wave of sorts for the past few days that is only now breaking.

    Out of curiosity, though, are you taking into account the role of aerosols as well as carbon dioxide and deep ocean internal variability (via the Atlantic Multidecadal Oscillation or AMO)? I know that over the whole length of the 20th century all forcings other than carbon dioxide tend to cancel one another out, but this isn’t exactly true of certain parts of the 20th century, is it? Particularly 1940–75, when, due to aerosols, the northern hemisphere experienced cooling or flat trends. Also, why did you pick the Atlantic Multidecadal Oscillation as a proxy for internal variability rather than the Pacific Decadal Oscillation? Despite the difference in names, the characteristic time scales are roughly the same, aren’t they?

    Also, if I remember right, the biggest effect upon global warming that deep oceans have isn’t so much the internal variability as the thermal inertia. This is what implies that even if we were to entirely eliminate our emissions today we would see virtually no effect upon the trend in global temperature for roughly 40 years. And if this is the case, then the warming that we see over a given decade shouldn’t really be a function of the “forcing” as calculated from the log of CO2 concentration at the beginning and end of the previous decade (which I believe is what you are saying — as an approximation), except perhaps insofar as the growth rate of carbon dioxide remains roughly constant from one decade to the next.
    *
    David B. Benson wrote:

    Attempting to replace the linear detrending with a lnCO2 detrending actually doesn’t work as well as the ordinary AMO in the above link.

    That was something worth trying. However, what I am actually thinking is that under Business As Usual the rate of global warming over the length of this century is supposed to increase — as I said, by roughly a factor of two to three. It’s something that falls out of the models — although it differs from model to model — and is not something that falls directly out of Arrhenius’ formula.

    But in any case, what I personally would expect is that in terms of detrending the AMO so as to remove the warming signal you wouldn’t be concerned with surface temperatures but with deeper ocean temperatures since what drives the AMO given the temporal scales involved appears to be deep-ocean related. And if this is the case, then as with boreholes, it takes time for the signal that begins at the surface to reach the relevant depths, and the effects would be averaged or “smeared out over time” just as the resolution of the temperature record provided by boreholes becomes poorer the deeper down and thus farther back in time one drills.

  30. Fielding Mellish

    I’m not sure where/if this 2006 Ballantyne et al paper fits in this particular view of the Arctic, or if it’s old enough to have been discussed heavily, but here it is.

    From: http://www.newscientist.com/article/dn19155-soaring-arctic-temperatures–a-warning-from-history.html :

    “With carbon dioxide levels close to our own, the Arctic of the Pliocene epoch may have warmed much more than previously thought – and the modern Arctic could go the same way.

    Ashley Ballantyne at the University of Colorado, Boulder, and colleagues analysed 4-million-year-old Pliocene peat samples from Ellesmere Island in the Arctic archipelago to find out what the climate was like when the peat formed.”

    Paper here: http://spot.colorado.edu/~ballanta/section2/Ballantyne_PlioceneTempConstraints_Palaeo3_2006.pdf

  31. Timothy Chase

    Looking at the relationship between global temperature and both the trend due to increased levels of carbon dioxide and the natural variability, I believe there are two different types of confusion that result from the failure to distinguish between three different levels of description.

    One form of confusion is the mistaken view that simply because two trends are strongly correlated there exists a causal relationship between the two. And by “causal relationship” I mean either one of linear causation with classical cause and effect or reciprocal causation due to positive feedback.

    If the two variables are highly correlated this may very well suggest such a causal relationship — particularly if there exists complex variation over time — such as with the paleoclimate record between variation in temperature and variation in levels of carbon dioxide. The two curves are complicated but well defined. Typically there appears to exist a lag between the two where carbon dioxide follows temperature although both 55 million years ago and 251 million years ago temperature likely followed carbon dioxide.

    However in this century what has existed is more or less a fairly simple curve where the two have been linearly correlated with time. I submit that even though the R^2 is close to 1 between time and ln CO2 this correlation is far less significant and far less suggestive of a causal relationship.

    A second form of confusion consists of the reification of what is simply an aspect or part of a whole into a separate entity. Consider: at one level the variations in surface temperature that we refer to as the Atlantic Multidecadal Oscillation and the Pacific Decadal Oscillation are simply variations in surface temperature over broad areas of their respective oceans. We naturally believe that they are essentially an epiphenomenon of some underlying chaotic behavior in the deep ocean that may nevertheless be sensitive to its environment due to criticality. However, I believe it is a mistake to treat the variation in global temperature as something that is due in part to the AMO or PDO as causal factors.

    The reason? The variation in global temperature is simply the variation in temperature for the region of the globe that encompasses the entire globe. So if one were to state that the variation in global temperature is due to variation in global temperature for large regions of the globe, this itself has no import whatsoever in terms of the underlying physical principles that are involved. The description of global variation in temperature as either variation in terms of the whole or variation in terms of the sum of its constitutive parts is merely a difference in terms of description. It implies nothing in terms of the actual causal processes that are involved.

    The third level of description is the actual underlying physics itself: the absorption and emission of radiation by greenhouse gases, or the AMO and PDO understood not simply in terms of the variation in ocean surface temperature but as the actual deep-ocean chaotic behavior that drives the variation in ocean surface temperature and thus, in part, the variation in global temperature.

  32. Timothy Chase

    Correction to my last comment…

    In the fourth paragraph I state:

    However in this century what has existed is more or less a fairly simple curve where the two have been linearly correlated with time. I submit that even though the R^2 is close to 1 between time and ln CO2 this correlation is far less significant and far less suggestive of a causal relationship.

    In the last sentence where I say between “time and ln CO2” time should have been temperature.

  33. I have to ask, since people here are mathematically inclined. How does one calculate anomalies based upon meteorological or proxy data if the many datasets span different time periods? This is a problem I am having because I can’t use a reference period, since not enough stations have data over the same period. Is there a standardization formula? And what error would I be introducing by just using the mean of the entire data series for each individual station for anomaly calculation, despite the different lengths of series?

    • carrot eater

      The reference station method of GISS (http://pubs.giss.nasa.gov/abstracts/1987/Hansen_Lebedeff.html) was devised for this sort of reason. But there still has to be pretty good overlap. You start with the longest record, and add the next longest by comparing the period of overlap, and then the next, and so on.

      Many bloggers, including Tamino, have tinkered with the method so that you don’t need a longest record to start with, but can merge them all at the same time. You can see that here, if you search back. This method would probably let you get away with more fragmentation/less overlap.

      But you have to be careful when you have really poor overlap; you can end up getting the long term trends (or lack thereof) wrong. This comes up with trying to stitch together data from different satellites, for example.

      I’m pretty sure the dendrochronology guys also have this sort of problem (a bunch of tree ring samples, none of which span the whole period, but can be stitched together using periods of overlap), so you might look into their methods as well. I’m not familiar.

  34. Timothy Chase

    Earlier I had stated:

    … I believe it is a mistake to treat the variation in global temperature as something that is due in part to the AMO or PDO as causal factors.

    The reason? The variation in global temperature is simply the variation in temperature for the region of the globe that encompasses the entire globe. So if one were to state that the variation in global temperature is due to variation in global temperature for large regions of the globe this itself has no import whatsoever in terms of the underlying physical principles that are involved.

    Lest anyone think this was original to me, Atmoz expressed essentially the same idea here:

    This implies that the mode of variability known as the PDO has the same spatial and temporal characteristics as the mean global surface temperature anomaly. The PDO doesn’t cause global warming, the PDO is global warming. (Insert all the caveats of PCA; statistical relationship not causal, linear, etc.)

    On the Relationship between the Pacific Decadal Oscillation (PDO) and the Global Average Mean Temperature
    Atmoz, 3 Aug 2008
    http://atmoz.org/blog/2008/08/03/on-the-relationship-between-the-pacific-decadal-oscillation-pdo-and-the-global-average-mean-temperature/

    (Wasn’t sure I could find it — had forgotten who wrote the piece and I was having difficulty bringing it up in my custom search engine.)

  35. Gavin's Pussycat

    Robert,

    another approach would be to introduce a vector of unknown offsets, one for each station, and solve these values iteratively. Start by setting them all to zero, and average the station temperatures together year-by-year, producing an all-station average time series.

    Then you compute, station-by-station, the mean offset over all available years from the average time series (from the previous step), and set the offset vector element for that station equal to minus this value.

    Repeat, lather, rinse until convergence.
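That iteration can be sketched in a few lines of Python (my own illustration of the idea, not anyone's production code; stations are columns of a 2-D array and NaN marks missing years):

```python
import numpy as np

def iterative_offsets(data, n_iter=50):
    """data: rows = years, columns = stations, NaN = missing.
    Returns (offsets, average) such that the offset-corrected
    stations all overlay on the common average series."""
    n_years, n_stations = data.shape
    offsets = np.zeros(n_stations)
    for _ in range(n_iter):
        corrected = data + offsets                 # apply current offsets
        average = np.nanmean(corrected, axis=1)    # all-station mean, year by year
        # New offset for each station: minus its mean difference from
        # the average series, taken over that station's available years.
        offsets += np.array([np.nanmean(average - corrected[:, j])
                             for j in range(n_stations)])
    return offsets, np.nanmean(data + offsets, axis=1)
```

In practice one would iterate until the offsets stop changing rather than for a fixed count, but a fixed count keeps the sketch short.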

    • carrot eater

      That’s roughly along the lines of what Tamino did. I’d link to that, but I’m having trouble browsing back to the posts from Jan/Feb. I’m not very good at the internet, apparently.

    • I think the method described by Gavin’s Pussycat (hilarious name, by the way) would work if not for the inclusion of some very cold stations in my analysis from the 1990s to the 2000s. Wouldn’t their inclusion in the corresponding mean series affect the mean so much that it would create large offsets for the remainder of the series? Perhaps my statistics isn’t quite up to it, but I think that the inclusion of multiple sites (3) with MAATs near -7 would affect my total composite (on average MAAT near -1 and 15 sites per year) to the point where it would affect the offsets. Would it perhaps make more sense just to calculate each series with a standardization formula and then combine them?

      • carrot eater

        I don’t follow. Each station gets its own offset (or rather, one for each month at each station). They’ll be whatever they need to be, to get all the stations to overlay on each other. Doesn’t matter if one station is on the sun. That station will just end up being assigned a large offset. So what?

      • How does one deal with the situation whereby a station does not overlap with the reference station in terms of offset calculation?

      • Gavin's Pussycat

        Robert,

        the station will always overlap with the average time series computed from all stations. You compute the offset for that station as (minus) the average difference over all months in that overlap.

      • Gavin,
        I understand completely what you mean. One final question: should I do as Tamino did and combine the raw stations one at a time (i.e., take the longest record, compute the offset with the second longest, combine the two, then compute the offset between the average of those two and the third series, and so on)? Or would it be better to average all the series at the start, then compute the offset from each series, combining the end result of all the offset calculations?

      • carrot eater

        What you’re describing sounds like what GISS does (read the 1987 paper, or at least the two relevant pages). There, you start with the longest station, then stick on the second longest, and so on, to create a growing combined set that would hopefully overlap with the shorter records when you get to the end. Though they require 20 years of overlap for any combination; you’ll get a noisy mess if you slap things on with too little overlap.

        Tamino computed all the offsets simultaneously.
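The sequential, longest-record-first merge might look roughly like this in Python (a sketch of the idea only, not GISS's actual code; I've omitted their 20-year overlap requirement and any distance weighting, and the function name is my own):

```python
import numpy as np

def merge_stations(stations):
    """stations: list of 1-D arrays on a common year axis, NaN = missing,
    sorted longest record first. Each new station is shifted by its mean
    difference from the growing combined series over the overlap, then
    averaged in against the count of stations already merged."""
    combined = stations[0].astype(float)
    count = np.where(np.isnan(combined), 0, 1)    # stations behind each year
    for st in stations[1:]:
        st = st.astype(float)
        overlap = ~np.isnan(combined) & ~np.isnan(st)
        st = st + np.mean(combined[overlap] - st[overlap])   # align on overlap
        total = np.where(np.isnan(combined), 0.0, combined * count) + np.nan_to_num(st)
        new_count = count + ~np.isnan(st)
        combined = np.where(new_count > 0, total / np.maximum(new_count, 1), np.nan)
        count = new_count
    return combined
```

With too little overlap the alignment step gets noisy, which is exactly the failure mode carrot eater warns about.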

  36. Ahh okay, thank you Timothy Chase,
    Those posts clarify things for me big time. I guess my only option is to calculate offsets, apply them, and combine the raw temperature data first, because not all of my stations have at least 10 years covering a standard anomaly period. However, Gavin’s idea of making an all-station average, computing offsets from the average, then subtracting from each series and combining makes sense to me too. I do know that I’m not a programmer, so I am unlikely to be able to put something together that calculates the offsets giving the minimum sum of squares between the stations. Any ideas on what would be preferable for me?

    • Timothy Chase

      While I may be a number of things — some of which are no doubt rather uncomplimentary — I am not a statistician.

    • Timothy Chase

      Anyway, if you would like, taking the mathematics as a given, I could probably write code to perform the calculations. In a variety of languages, actually. It isn’t principal component analysis — and I could probably handle the coding even if it were. It isn’t a climate model, so I don’t think the calculations would be all that intensive, either.

    • Gavin's Pussycat

      Robert, I don’t think you have to do the least-squares thing. Just iterate: after you have obtained your first set of offsets, re-average the offset-corrected station values, and use the recomputed average to generate improved offsets for the stations. And so on, until convergence. Which I expect will be fairly fast.

      For curiosity I used a similar technique on Baltic tide gauge data back in 1988 :-)

      • Hello again,

        What I ended up doing was I took a reference station’s raw MAAT data (the one with the most data), then calculated the offset between that one and the second longest, and applied the offset to the second longest record. Then I averaged the reference and the new offset-adjusted station data. I then used the new averaged reference data for the next station and continued that process. I think that method is appropriate, although I’m not 100% sure. Does this fit in with what was previously said, or am I way off?

      • Correction: I then used the new averaged reference data for the offset calculation of the 3rd longest record and continued like that.

      • Hey, long time no comment with respect to my attempt at this temperature stuff. I went through and tried to use your method (“after you have obtained your first set of offsets, re-average the offset-corrected station values, and use the recomputed average to generate improved offsets for the stations. And so on, until convergence. Which I expect will be fairly fast.”)

        but forgot to iterate. I was wondering: how do I know when to stop iterating?

    • Gavin's Pussycat

      Looks like a valid technique Robert. The only “problem” I see with it is that the sum of offsets will not be zero… but then, the temp anomalies have no absolute level anyway.

      BTW did you remember to use weighted averaging? E.g., if you average a monthly value from the third longest station with the existing average of values from the longest and the second longest stations, you should weight 1:2. You need to keep track of how many station values went into every intermediate average in the calculation.

      I believe your method is close to what GISS uses.

      • I was actually wondering whether I had to do something to compensate for the number of stations being included. It appears I will have to redo my analysis. Lucky me. I’m not quite sure how to do a proper weighted average in Excel, actually. I figure I can just count the average twice (e.g. (1.04 + 1.04 + 1.08)/3), but that might get annoying by the time I’m getting up in station numbers.

        Also, how do I deal with the issue of the sum of offsets not being zero, then? Is there a better way to do this (staying away from least squares or using an anomaly period initially)?

      • Another thing about using weighted averaging: I will have to keep track of how many values were used per month/year too, which might be difficult.

      • Another thing to consider: if I have a couple of months/years with gaps, then I have to compute the weighted mean differently for that period.

      • Gavin's Pussycat

        Robert, so you’re using Excel… not what I would choose or recommend. But anyway. I suppose you have somewhere on your worksheet an array (column?) of average temperatures by month (row index). You should create a second column containing integers n, telling how many individual values the average for that month is based on. When updating the average, the corresponding count should be incremented.

        So where you now compute a plain average, you should instead update it as av = (n*av + st)/(n + 1); n = n + 1.

        About the sum of offsets being nonzero, ignore it for now. At the very end you can reference your result to 1961–1990, for example.
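The same update in code form (my own sketch; `av` is the running average and `n` the number of station values already folded in):

```python
def update_average(av, n, st):
    """Fold one new station value st into a running average av
    that currently represents n values; returns (new_av, new_n)."""
    new_av = (n * av + st) / (n + 1)
    return new_av, n + 1

# Folding values in one at a time reproduces the plain mean:
av, n = 0.0, 0
for value in [1.0, 2.0, 6.0]:
    av, n = update_average(av, n, value)
# av is now 3.0, the mean of the three values
```

Keeping `n` alongside `av` is what makes the 1:2 weighting described above come out automatically.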

      • Hello Gavin,
        Yes, unfortunately I am using Excel. I am not a scripter by nature, so until I pick up some more Matlab knowledge this will have to do for me, I guess. Imagine, the one GIS guy who doesn’t know code…

        Anyways.

        What I currently have is a matrix with the station number at top, going from 1 to 34 (34 stations in the analysis), and the years on the side, going from 1880 to 2009. Filling the columns are mean annual air temperatures. But if I figure out how to implement this well, then I will move on to monthly data.

        I could combine all the data into one series which would be the MAAT for every year and just have a column representing the total number of stations which were used for constructing that portion. That’s obviously no problem.

        I do have to ask two things, though. Av refers to the temperature average, correct? If so, st would be the new station series adjusted by the offset between the two, I’m assuming?

      • Gavin's Pussycat

        OK Robert, I think you’re getting this about right. Yes, my av is your MAAT, an extra column by the side of your table, one value per year. And then you need a row, one value for each station, containing the offset values. These are computed from both that station’s data column (which I called “st”) and the MAAT column. Then you re-compute MAAT by re-averaging over stations, taking the offsets into account.

        I see no reason why this shouldn’t work. I think your stepwise update approach would be only slightly more complicated.

      • Hello Gavin,
        Thank you for all your help.
        I think I must have done something wrong there. I took my MAAT and subtracted station 1’s data over the overlapping period, then averaged the result to get the average offset. Then I adjusted the original station 1 data by adding the offset to each value. The problem is, I thought the averages over the overlapping period of the MAAT and the new adjusted station 1 data were supposed to be the same, but they’re off by 0.15 in one attempt and about 0.2 in another.

      • Gavin's Pussycat

        Yep, you must be doing something wrong. Did you check the obvious? In computing the average offset, are you dividing by the number of overlapping years? Etc. Split the computation into parts to see where you go astray. Do it manually if needed.

        Excel is a bitch to debug :-(

      • Apparently it wasn’t me doing something wrong, just Excel being an absolute pain. I think things went fine, but my result is not exactly what I was expecting. I was expecting at least somewhat of a global warming trend, and there is one, but with warming of under 0.5 degrees over the last 120 years, which isn’t exactly much for a high-latitude region. Regardless: do you think there is any difference between doing it the current way (comparing offsets to the MAAT average) and Tamino’s method of combining the reference with the next longest and so on? Thanks for the help.

      • Robert wrote (July 19, 2010 at 1:12 am):

        Apparently it wasn’t me doing something wrong, just Excel being an absolute pain. I think things went fine, but my result is not exactly what I was expecting. I was expecting at least somewhat of a global warming trend, and there is one, but with warming of under 0.5 degrees over the last 120 years, which isn’t exactly much for a high-latitude region.

        Not that surprising assuming you are speaking Celsius.

        Looking back (July 16, 2010 at 4:19 pm) I see that you are using 34 stations, all in the United States I presume:

        What I currently have is a matrix with the Station Number at top going from 1 to 34 (34 stations in the analysis) and the Years On the side going from 1880 to 2009. Filling the columns is Mean Annual Air Temperatures. But if I figure out how to implement this well then I will move on to monthly data.

        The United States constitutes only 1.5% of the world’s area, and for some reason (Jim Hansen suggested it might have something to do with the Atlantic Multidecadal Oscillation at one point if I remember correctly) it is showing a much more subdued warming trend relative to the rest of the globe. For example 1934 is tied with 1998 and 2005 for warmest temperature in the contiguous states. And while I haven’t calculated the trend, the five year mean goes from about -0.25 to less than +0.1 as of the early 1990s.

        See:

        Figure 4: (a) Global Temperature (Land-Ocean Index); (b) US Temperature [contiguous 48 states]

        … of:

        Global Temperature Trends: 2007 Summation
        http://data.giss.nasa.gov/gistemp/2007/

        Of course, even for the United States things have warmed up a bit from the early 1990s to 2009. But then you also mentioned that the small population of stations you are working with includes “some very cold stations.”

        July 12, 2010 at 2:22 pm

        I think the method described by Gavin’s Pussycat (hilarious name by the way) would work if not for the inclusion of some very cold stations into my analysis from the 1990s to the 2000s. Wouldn’t their inclusion in the corresponding mean series affect the mean so much that it would create large offsets for the remainder of series?

        The solution? Include more stations, I presume.

        Here is a post by Tamino that may be of interest:

        Hit you where you live
        Tamino, January 11, 2008

        It shows the warming (land, 1880-2007) by latitude (EQU-24N, 24N-44N, 44N-64N, 64N-90N) — but does not limit itself to the continental United States.

        Incidentally, I have been telling people how globally we have been breaking 12-month temperature records March, April and May, but I have been getting some strange looks. I live in Seattle, and for the most part we have been unseasonably cool even though the rest of the country (less than 1.5% of the globe) has been broiling, although we had a heat wave that lasted for about a week just recently.

      • After considering my composite and so on, I am left wondering whether the aforementioned method is the superior one. My problem is that this method seems quite easy, which is a good thing, but if it were that easy, then why are the reference station method and the simple anomaly method used instead of calculating the offset from the MAAT of all the stations and then correcting for the offset? It seems to me that method would alleviate the problems of a common reference period, yet it is not used by any of the global reconstructions that I saw.

  37. Nick Dearth

    So many gems, from Mr. Goddard, to choose from:

    “stevengoddard says:
    July 13, 2010 at 11:52 am
    Jeff P

    I am talking about rates of sea level rise. If the rate of ice loss has doubled, then sea level rise would also have to double.”

    • One thing about “Goddard” has always impressed me: his ability to squeeze multiple errors into one short sentence. It’s like a two-for-one deal on insanity!

    • I’m actually getting sick of engaging him over there with respect to glaciology. He actually has no clue, and it’s frustrating.

      • carrot eater

        Walt Meier is stepping up to the plate.

      • and hits a home run… by the way, Mosher’s analysis is interesting too

      • ….and hilarity ensues in the comments, with Goddard in his usual mode of being a hyper-aggressive asshole who completely misses the point.

      • Well, for what its worth, you have my respect engaging with him. But just remember, he has no regard for facts or scientific truth, he is playing to the audience, and usually “wins” through fatigue.

        It’s just a business to him.

      • What are his credentials anyways?

      • carrot eater

        Does it matter? He’s judged by what he produces.

      • No, it doesn’t matter what his credentials are; I just don’t know who the guy is. He seems like a programmer, but he’s pretty good with all that visual arts stuff.

      • carrot eater

        If I had to guess, I’d say older engineer of some sort, who doesn’t know what he does and does not understand, and gets very upset when somebody points that out.

      • A few weeks back Goddard mentioned his degree was in geology, but it has been years since he was in a university and he’d made his living in the mining industry.

        None of this seems surprising, does it? I’m sorry I can’t find his exact post amid the very long and numerous threads. Please don’t make me read them again. (Though your comments on glaciation in the real world were little islands of sanity in a sea of don’t ask. Thanks for taking the trouble, Robert. )

      • t_p_hamilton

        I think this saying has some truth: Never argue with an idiot – he will drag you down to his level and beat you with experience.

    • I’ve now decided the best way to engage is to post through Skeptical Science, where at least I have the time to show the figures I like. Tomorrow I should have up a nice refutation of Goddard’s so-called “prowess” in glaciology.

  38. Well, certainly I would love to have a program which could alleviate my problems. It would save me a lot of work, as this thread has made me realise I now have to re-do my entire manuscript on the subject. I am not a coder or a statistician, but the principles of the method don’t seem terribly difficult from the way I see it. If you did decide to do something like this I would be very grateful, but with my limited coding experience I’d say that would limit the language to some sort of Excel macro in VBA, or Matlab, or something. Anyways, let me know what you think, and thanks for all the help previously.

    • Timothy Chase

      An Excel macro certainly works for me. Actually, that is how I got started programming. Typically, though, what I do is use Excel VBA code to control spreadsheet calculation. Took one automated calculation at WSDOT from 40 minutes when I got there down to 35 seconds by the time I left, and another at Boeing from 4 minutes to 2 seconds. It’s neck-and-neck with compiled VB6 — as, oddly enough, is VBA.

      Anyway, I wouldn’t charge — at least as long as the project isn’t too hairy. If you are comfortable with spreadsheet formulas, I could likely write it using a very generic, extensible approach, where it wouldn’t be necessary to ever touch the actual code if you were to include more calculations.

      Email address is: timothy chase at g mail dottish com, no spaces.

  39. OT, sorry, but important. Monckton has posted at WUWT asking for people to flood John Abraham’s university with calls for disciplinary action. As a consequence, I have posted this:

    We the undersigned offer unreserved support for John Abraham and St. Thomas University in the matter of complaints made to them by Christopher Monckton. Professor Abraham provided an important public service by showing in detail Monckton’s misrepresentation of the science of climate, and we applaud him for that effort, and St. Thomas University for making his presentation available to the world.

    If you support Abraham, please visit Hot Topic and leave a comment in support.

    http://hot-topic.co.nz/support-john-abraham/

    • Don’t worry about things like Monckton’s organized stalking. Universities have a small round filing cabinet on the floor where they store such correspondence.

  40. John Mashey

    In light of the recent but now-disappeared discussion of the horrors inflicted upon Watts, note that the Viscount’s message at WUWT was not a mere thread post, but A GUEST POST:

    Abraham climbs down

  41. Fielding Mellish

    The University’s apt and concise response to Discount Monkey:

    http://rabett.blogspot.com/2010/07/gold-amongst-dross.html

  42. David B. Benson

    Timothy Chase // July 10, 2010 at 2:47 pm — I use decadal averages for everything, not begin and end points.

    The AMO is known (or at least suspected) to be related to MOC, hence ocean heat uptake. Its QPO “period” is about right for the global temperature record unexplained by lnCO2. The PDO has too short a period and is anyway too small to explain much on the long time scales I am using.

    As I stated in the notes, the assumption of the study is that all other forcings cancel out, as IPCC AR4 WG1 states. This is not an attribution study, but rather something fairly simple that many can grasp (after some study, it seems).

    Granger causality, according to PBL, agrees with the known physics. (1) CO2 causes global warming; (2) ocean heat uptake, being variable, modulates that.

  43. Gavin,
    I was just thinking about the first approach you offered.

    And if I follow correctly, I would calculate an average time series for all stations across all years (like MAATs per year; I’m using yearly right now). Then I would calculate the offsets between each station and the MAAT average. Then I lose you at the step where you set the offset vector element and subtract it for the station.

    Sorry about all the questions. I know it can be annoying. I’ve just been working on a temperature composite for about 4 months and must have tried 20 different methods now to get things right, so I’m getting a bit frustrated.
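    A minimal sketch of the offset approach described above (average all stations per year, compute each station's mean offset from that average, then subtract that offset from the station) might look like this in Python. This is my own reading of the method, not Gavin's actual procedure, and the function and variable names are invented:

```python
# Sketch of a common-anchor offset method (my reading of the approach
# described above, not anyone's actual code).

def combine_stations(stations):
    """stations: dict of station name -> dict of year -> temperature."""
    years = sorted({y for s in stations.values() for y in s})
    # Step 1: a simple all-station average for each year
    avg = {}
    for y in years:
        vals = [s[y] for s in stations.values() if y in s]
        avg[y] = sum(vals) / len(vals)
    # Step 2: each station's mean offset from the average series
    offsets = {}
    for name, series in stations.items():
        diffs = [series[y] - avg[y] for y in series]
        offsets[name] = sum(diffs) / len(diffs)
    # Step 3: align each station by subtracting its offset
    aligned = {name: {y: t - offsets[name] for y, t in series.items()}
               for name, series in stations.items()}
    return avg, offsets, aligned
```

    With two stations that differ by a constant offset, the aligned series coincide exactly; with real data the offsets only reduce, rather than remove, the differences.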

  44. Tony O'Brien

    As of today the sea ice extent does not look too bad. However, when one looks at the satellite pictures one sees fractured and chopped-up ice through most of the pack.

  45. Oceans Apart? Part I of II

    David, I’m sorry I didn’t respond sooner to your comment of July 15th – I didn’t see the comment until earlier today.
    *
    David B. Benson wrote (July 15, 2010 at 6:47 pm):

    I use decadal averages for everything, not begin and end points.

    I understand that, but presumably you are using the previous decade for calculating the current one. Am I right?

    In any case, your explanation (which you give elsewhere) isn’t terribly clear about what it is that you are doing. At one point you are reduced to giving a numerical example of a calculation without stating what the actual calculation is. Likewise, you also have a real fondness for acronyms, but little or no concern for spelling out what those acronyms mean.

    This is a bad habit. I have seen it among those who use acronyms to cover the vacuous nature of their arguments — to intimidate those who might not know what the acronyms mean and who might avoid looking them up, partly as a result of the anxiety produced by that intimidation. Since I doubt this is your intent, I would recommend spelling out at least in your comment what an acronym means — preferably when you first use it — if you are going to use it at all. If not for me (I will just go ahead and look things up anyway), then at least for readers who might not otherwise follow along.

    *

    You wrote (July 15, 2010 at 6:47 pm):

    The AMO is known (or at least suspected) to be related to MOC, hence ocean heat uptake.

    Related? I would most certainly agree: as a dialectician I believe that truth is a unity because reality is a unity, and therefore everything that is true and everything that exists is related in one way or another. The real question is: related in what way? For example, would the Atlantic Multidecadal Oscillation be the cause or the effect of the Meridional Overturning Circulation? Effect, it would seem — since the Atlantic Multidecadal Oscillation is defined in terms of surface temperatures in the North Atlantic. Then again, there is both an Atlantic and a Pacific Meridional Overturning Circulation, and both are simply parts (albeit important parts) of the Thermohaline Circulation. Which brings us back to the question of why you chose to use the Atlantic Multidecadal Oscillation rather than the Pacific Decadal Oscillation.

    *

    David B. Benson states regarding the Atlantic Multidecadal Oscillation (July 15, 2010 at 6:47 pm):

    Its QPO “period” is about right for the global temperature record unexplained by lnCO2. The PDO has too short a period and is anyway too small to explain much on the long time scales I am using.

    Its Quasi-Periodic Oscillation “period” is about right? I agree. However, as I stated (in the comment July 10, 2010 at 2:47 pm that you were responding to):

    Also, why did you pick the Atlantic Multidecadal Oscillation as a proxy for internal variability rather than the Pacific Decadal Oscillation? Despite the difference in names, the characteristic time scales are roughly the same, aren’t they?

    For the Atlantic Multidecadal Oscillation:

    The Atlantic Multi-decadal Oscillation (AMO) is a mode of natural variability occurring in the North Atlantic Ocean and which has its principle expression in the sea surface temperature (SST) field. The AMO is identified as a coherent pattern of variability in basin-wide North Atlantic SSTs with a period of 60-80 years.

    Atlantic Multi-decadal Oscillation
    http://www.cgd.ucar.edu/cas/catalog/climind/AMO.html

    Elsewhere I have seen 50-100 years. And here is yet another set of figures, albeit from a bit more informal a source than the first:

    Michael E. Mann, associate professor of meteorology and geosciences, Penn State, and Kerry A. Emanuel, professor of atmospheric sciences, MIT, looked at the record of global sea surface temperatures, hurricane frequency, aerosol impacts and the so-called Atlantic Multidecadal Oscillation (AMO) — an ocean cycle similar, but weaker and less frequent than the El Nino/La Nina cycle. Although others have suggested that the AMO, a cycle of from 50 to 70 years, is the significant contributing factor to the increase in number and strength of hurricanes, their statistical analysis and modeling indicate that it is only the tropical Atlantic sea surface temperature that is responsible, tempered by the cooling effects of some lower atmospheric pollutants.

    Climate change responsible for increased hurricanes
    Tuesday, May 30, 2006
    http://live.psu.edu/story/18074

    (emphasis added)

    There are two different characteristic time scales (or “periodicities”) associated with the Pacific Decadal Oscillation. The shorter one is (apparently) where its name comes from, but the longer one is of roughly the same length as the periodicity of the Atlantic Multidecadal Oscillation.

    From the article I referenced later (in my comment July 12, 2010 at 5:39 am):

    Sometimes, it’s said that the PDO has a characteristic time scale, hence the word decadal in the acronym. The UW website states that “Shoshiro Minobe has shown that 20th century PDO fluctuations were most energetic in two general periodicities, one from 15-to-25 years, and the other from 50-to-70 years.” To evaluate this, we can look at a wavelet analysis of the PDO with trend derived in the first part of this post.

    On the Relationship between the Pacific Decadal Oscillation (PDO) and the Global Average Mean Temperature
    Atmoz, 3 Aug 2008
    http://atmoz.org/blog/2008/08/03/on-the-relationship-between-the-pacific-decadal-oscillation-pdo-and-the-global-average-mean-temperature/

    (emphasis added)

    The longer of the two periodicities associated with the Pacific Decadal Oscillation is roughly of the same length as that of the Atlantic Multidecadal Oscillation: 50 to 70 years. So now that we have put some numbers on it (more or less), why the Atlantic Multidecadal Oscillation rather than the Pacific Decadal Oscillation?

    And insofar as the short term variability of global temperatures is largely a function of ENSO, it would seem that I could make a fairly strong argument for the decadal variability in average global temperature being more a function of the Pacific Decadal Oscillation than of the Atlantic Multidecadal Oscillation.

    From NOAA:

    It is generally agreed that a “cold” phase of more frequent La Niñas and fewer strong El Niños occurred from 1890 to 1924 and again from 1946 to 1976. “Warm” phase of more frequent, longer, and stronger El Niños occurred from about 1925 to 1946 and again from 1976 to 1998. During the 1976 to 1998 episode, cool conditions were observed in only 98 of 266 months. During this same warm phase of the PDO, both the equatorial and northern North Pacific oceans experienced two very large El Niño events (1983–1984 and 1997–1998).

    Glossary, Section: The Pacific Decadal Oscillation (PDO)
    http://www.pacificstormsclimatology.org/index.php?page=glossary

  46. Oceans Apart? Part II of II

    However, I am not making that argument. My argument is that insofar as the Atlantic Multidecadal Oscillation and Pacific Decadal Oscillation are nothing more nor less than variations in temperature over large parts of the globe their warm or cool phases are not what cause short-term variability in global temperature – they are short-term variability in global temperature.

    As I stated later (July 11, 2010 at 6:33 pm):

    However, I believe it is a mistake to treat the variation in global temperature as something that is due in part to the AMO or PDO as causal factors…. The description of global variation in temperature as either variation in terms of the whole or variation in terms of the sum of its constitutive parts is merely a difference in terms of description. It implies nothing in terms of the actual causal processes that are involved.

    And as I pointed out still later (July 12, 2010 at 5:39 am), Atmoz made essentially the same argument roughly two years ago, albeit only with respect to the Pacific Decadal Oscillation:

    This implies that the mode of variability known as the PDO has the same spatial and temporal characteristics as the mean global surface temperature anomaly. The PDO doesn’t cause global warming, the PDO is global warming. (Insert all the caveats of PCA; statistical relationship not causal, linear, etc.)

    On the Relationship between the Pacific Decadal Oscillation (PDO) and the Global Average Mean Temperature
    Atmoz, 3 Aug 2008
    http://atmoz.org/blog/2008/08/03/on-the-relationship-between-the-pacific-decadal-oscillation-pdo-and-the-global-average-mean-temperature/

    *

    David B. Benson wrote (July 15, 2010 at 6:47 pm):

    As I stated in the notes, the assumption of the study is that all other forcings cancel out, as IPCC AR4 WG1 states. This is not an attribution study, but rather something fairly simple that many can grasp (after some study, it seems).

    “… not an attribution study, …”

    I made it clear in the comment you are responding to that I knew that you weren’t speaking of attribution, but pointed out that at least with respect to the mid-century cooling/leveling-off, aerosols obviously did play a very big role in terms of the deviation from the more or less linear trend in global warming.

    I stated (July 10, 2010 at 2:47 pm):

    Out of curiosity, though, are you taking into account the role of aerosols as well as carbon dioxide and deep ocean internal variability (via the Atlantic Multidecadal Oscillation or AMO)? I know that over the whole length of the 20th century all forcings other than carbon dioxide tend to cancel one another out, but this isn’t exactly true of certain parts of the 20th century, is it? Particularly 1940-75, when due to aerosols the northern hemisphere experienced cooling/flat trends.

    Assuming you are attempting to explain short-term variability (in contrast to simply the linear trend) then aerosols matter — and you can’t simply argue that all forcings other than anthropogenic carbon dioxide cancel out and then attribute the short-term variation (that we actually know is due to aerosols) to the Atlantic Multidecadal Oscillation.

    *

    David B. Benson wrote (July 15, 2010 at 6:47 pm):

    Granger causality, according to PBL, agrees with the known physics. (1) CO2 causes global warming; (2) ocean heat uptake, being variable, modulates that.

    Granger causality doesn’t concern itself with the underlying physics:

    Clearly, the notion of Granger causality does not imply true causality. It only implies forecasting ability.

    Eric Zivot and Jiahui Wang (2006) Modeling financial time series with S-PLUS, Volume 13, pg. 407

    … and as I pointed out, the trend in ln CO2 has been roughly linear, and with the exception of the mid-century speed bump caused by aerosols, the trend in warming has been roughly linear.

    I stated (July 11, 2010 at 6:33 pm):

    However in this century what has existed is more or less a fairly simple curve where the two have been linearly correlated with time. I submit that even though the R^2 is close to 1 between temperature and ln CO2 this correlation is far less significant and far less suggestive of a causal relationship.

    … and the correlation becomes far less significant when you limit your analysis to the decadal level — and take into account the presence of red noise.

    Furthermore, as the lag in global temperature relative to the spike in the equatorial Pacific Ocean during an El Nino event might suggest, ocean surface temperature isn’t so much symptomatic of ocean heat uptake as of the release of heat into the atmosphere by the ocean.
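    The “forecasting ability” sense of Granger causality can be illustrated with a toy calculation. This is a sketch of my own in Python with synthetic data, not anything from the linked study: x is said to Granger-cause y when adding lagged values of x to a forecast of y from its own lags reduces the prediction error.

```python
import random

def solve(A, b):
    """Gauss-Jordan elimination for a small linear system A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def sse(rows, y):
    """Sum of squared errors of a least-squares fit of y on the regressors."""
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    beta = solve(A, b)
    return sum((yi - sum(bb * ri for bb, ri in zip(beta, r))) ** 2
               for r, yi in zip(rows, y))

random.seed(1)
n = 400
x = [random.gauss(0, 1) for _ in range(n)]
# y is driven by the *previous* value of x plus noise
y = [0.0] + [0.5 * x[t - 1] + random.gauss(0, 0.3) for t in range(1, n)]

target = y[1:]
own_lag_only = [[1.0, y[t - 1]] for t in range(1, n)]            # restricted
with_x_lag = [[1.0, y[t - 1], x[t - 1]] for t in range(1, n)]    # unrestricted
sse_restricted = sse(own_lag_only, target)
sse_full = sse(with_x_lag, target)
# x "Granger-causes" y when the fuller model forecasts noticeably better
```

    In this constructed example, adding the lagged x cuts the squared forecast error by well over half, so x Granger-causes y; none of this says anything about the physical mechanism, which is exactly the point of the Zivot and Wang quote above.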

    *

    To leave this on a somewhat more positive note, earlier I had stated (July 11, 2010 at 6:33 pm):

    Consider: at one level the variations in surface temperature that we refer to as the Atlantic Multidecadal Oscillation and the Pacific Decadal Oscillation are simply variations in surface temperature over broad areas of their respective oceans. We naturally believe that they are essentially an epiphenomenon of some underlying chaotic behavior in the deep ocean that may nevertheless be sensitive to its environment due to criticality.

    … and you likewise have brought up the Meridional Overturning Circulation on more than one occasion. More broadly we may speak of the thermohaline circulation — that varies over time, e.g., becoming weaker or stronger. Its behavior would appear to be largely responsible for both the Atlantic Multidecadal Oscillation and the Pacific Decadal Oscillation, and given their quasi-periodic behavior its behavior would appear at least in part to be chaotic. Furthermore, if important elements in this circulation are close to criticality it should be sensitive to its environment.

    Given this, I would like to end with a quote I have brought up on more than one occasion but that now seems especially pertinent:

    A crucial question in the global-warming debate concerns the extent to which recent climate change is caused by anthropogenic forcing or is a manifestation of natural climate variability. It is commonly thought that the climate response to anthropogenic forcing should be distinct from the patterns of natural climate variability. But, on the basis of studies of nonlinear chaotic models with preferred states or ‘regimes’, it has been argued, that the spatial patterns of the response to anthropogenic forcing may in fact project principally onto modes of natural climate variability. Here we use atmospheric circulation data from the Northern Hemisphere to show that recent climate change can be interpreted in terms of changes in the frequency of occurrence of natural atmospheric circulation regimes. We conclude that recent Northern Hemisphere warming may be more directly related to the thermal structure of these circulation regimes than to any anthropogenic forcing pattern itself. Conversely, the fact that observed climate change projects onto natural patterns cannot be used as evidence of no anthropogenic effect on climate. These results may help explain possible differences between trends in surface temperature and satellite-based temperature in the free atmosphere.

    Signature of recent climate change in frequencies of natural atmospheric circulation regimes
    S. Corti, F. Molteni, and T. N. Palmer
    Nature 398, 799-802 (29 April 1999)
    http://www.nature.com/nature/journal/v398/n6730/abs/398799a0.html

  47. Christopher Monckton and other deniers get far more press coverage than they deserve. Journalistic false balance has caused the public to be confused on climate change – the greatest threat to humanity this century. Worse, these deniers have used mainstream media to attack climate science and the scientists who pursue the truth. Let us now turn the tables.

    Monckton has been exposed by Dr. John Abraham and instead of hiding his tail and whimpering away, Monckton has gone on the offensive by attacking Dr. Abraham and asking his followers to essentially “email bomb” Dr. Abraham’s university president. We need to alert the media to this story.

    I have assembled a list of 57 media contacts in the hopes that my readers will follow my lead and send letters asking for an investigation of Monckton and his attack on Abraham. I have placed mailto links that will make it easy to send letters to several contacts at once with a single click.

    In the thread comments, please suggest other contacts in the US and from abroad. This blog thread can then be used in the future to alert the media to denialist activity.

    Turn the Tables on Monckton

  48. David B. Benson

    Timothy Chase // July 17, 2010 at 9:25 pm — In
    http://www.realclimate.org/index.php/archives/2010/03/unforced-variations-3/comment-page-12/#comment-168530
    all acronyms are introduced. Please correct me if I’m wrong. Yes, I just apply one decade’s lnCO2 to the following decade’s temperature; as the notes mention, a longer delay would probably be a bit better, but on direct checking a two-decade delay doesn’t work quite as well as one.

    While correlation is not causation, it is strongly suggestive of it, especially if there is a one-way Granger causality. In any case, lnCO2 is based on the physics.

    AMO explains the mid-century cooling as due to a change in MOC rate. Indeed, looking more carefully at CO2 concentrations you will discover these went down; look more carefully at the linked study to see how far from “roughly linear” lnCO2 actually is over the 13 decades in question.

    To be blunt about it, the PDO is unlikely to be a good index of internal variability based on the known locations and magnitudes of deepwater formation. Since lnCO2+AMO explains all but random noise, there is no need to use what would offer only a minor contribution, be it PDO or aerosols.

    But don’t take my word for it. The linked study should explain enough for you to replicate it, including whatever variations you might desire; replace the AMO by the PDO and see what you find.

    By the way, MOC is the preferred term for thermohaline circulation these days; it is exactly the same thing.

    S. Corti, F. Molteni, and T. N. Palmer is rather more sophisticated than you seem to think; the atmospheric physics of CO2 is well understood and explains most of the variance of the past 13 decades; AMO explains almost all the rest. The linked study is not an attribution study because I use results found in IPCC AR4 WG1 and elsewhere, as the notes indicate.

    Finally, you seem to have some complaint (that I don’t understand) about the way the linked study is written. Since I am currently revising my copy to include a little bit about a persistent linear trend, this would be a good time to consider other revisions. However, this study is not intended to be in textbook style, despite comments from others that it is “8th grade” and “(college) freshman”. It is neither, since I obviously assume knowledge of RMS and R^2, not to mention autocorrelation.

  49. Dialectic, Part I of III: Thesis

    David B. Benson wrote (July 20, 2010 at 12:20 am):

    In http://www.realclimate.org/index.php/archives/2010/03/unforced-variations-3/comment-page-12/#comment-168530 all acronyms are introduced.

    Yes, you have done a fairly good job of spelling out what your acronyms mean over there. But consider: if someone is trying to follow what you are speaking about over here, and they are going off of one of your more recent comments, they will not have the link to the material over there — unless they go digging. As a courtesy to your readers you should spell out what the acronyms mean in each comment, unless the comments in which they are used form a nearly continuous run of text.

    Secondly, looking over there, your explanation of at least one term could be the source of considerable confusion: you speak of the GISTEMP global temperature anomaly product (GTA) when what you mean is GISTEMP’s global average temperature anomaly. Yes, it is a product — a product of a calculation based upon data — but then again any quantity that is calculated from data could in this way be referred to as a “product.” But generally they aren’t, as this would be superfluous.
    *
    David B. Benson wrote (July 20, 2010 at 12:20 am):

    Yes, I just apply one decade’s lnCO2 to the following decade’s temperature; as the notes mention, a longer delay would probably be a bit better, but on direct checking a two-decade delay doesn’t work quite as well as one.

    While correlation is not causation, it is strongly suggestive of it, especially if there is a one-way Granger causality.
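    The decadal procedure being quoted might, as I read it, be sketched as follows. This is a reconstruction from the description, not the linked study's actual code, and the annual values below are made up purely for illustration:

```python
import math

def decadal_means(series):
    """Collapse a year -> value mapping into consecutive decadal means."""
    years = sorted(series)
    out = []
    for start in range(years[0], years[-1] + 1, 10):
        decade = [series[y] for y in range(start, start + 10) if y in series]
        if len(decade) == 10:          # keep only complete decades
            out.append(sum(decade) / 10.0)
    return out

# Made-up annual series for 1880-2009, i.e. 13 complete decades
ln_co2 = {y: math.log(290.0 + 0.4 * (y - 1880)) for y in range(1880, 2010)}
temp = {y: 0.006 * (y - 1880) for y in range(1880, 2010)}

c = decadal_means(ln_co2)
t = decadal_means(temp)
# One-decade lag: decade d's lnCO2 is paired with decade d+1's temperature
pairs = list(zip(c[:-1], t[1:]))
```

    Note how little is left to fit: 130 annual values collapse to 13 decadal means and, after the lag, only 12 pairs.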

    When Barton Paul Levenson (or “BLP” as you referred to him) looked for the correlation between temperature and the log of CO2 concentration (ln CO2), he was at least looking at individual years. And looking at individual years he had quite a scatter in his scatter plot. Linear? Sure. But the temperature was more or less a linear function of time, and the log of CO2 concentration was more or less a linear function of time.

    As I have pointed out at least twice so far, such a correlation isn’t especially significant: the R^2 of two straight lines will be 1, but this doesn’t imply any causal relationship between the two. Each could be a function of time and the correlation simply spurious.

    Now as Barton calculated things he came up with an R^2 of 0.764. I get the same thing. But what happens if we remove the linear trend in both temperature and the log of CO2 concentration, leaving us with only the residuals? Calculated over the same period of time, the R^2 for the residuals at the yearly level is only 0.328. Now I have heard that if you get an R^2 of 0.5 that may very well be a good correlation. Maybe the same is true of 0.328. But it is certainly far less significant than 0.764. Don’t you think?

    And once you limit your analysis to the decadal level you go from having roughly 130 data points to having only 13 data points. So if you manage to get a high R^2 even if you were to calculate R^2 not in terms of the variables of temperature and ln CO2 but in terms of the residuals, will this mean the same thing as a high R^2 for 130 points?

    Obviously not — because by this logic an R^2 of 1 using just two points would be just as significant as an R^2 of 1 with 130 points. But any two points will necessarily give you an R^2 of 1.

    However, your calculation at the decadal level makes things much worse: by looking at the decadal level you are left with far fewer points, and with roughly linear growth in both quantities when averaged over the length of a decade, the correlation will be far less significant.
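    The point about shared trends can be made concrete with a toy calculation (a sketch of my own in Python; the numbers are synthetic, not GISTEMP or CO2 data): two series that are nothing but independent noise around the same linear trend show a high R^2 in the raw data and almost none in the detrended residuals.

```python
import random

def r_squared(x, y):
    """Squared Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def detrend(y):
    """Residuals after removing the least-squares linear trend in time."""
    n = len(y)
    t = list(range(n))
    mt, my = sum(t) / n, sum(y) / n
    slope = (sum((a - mt) * (b - my) for a, b in zip(t, y))
             / sum((a - mt) ** 2 for a in t))
    return [b - (my + slope * (a - mt)) for a, b in zip(t, y)]

random.seed(0)
n = 130  # about one point per year over 13 decades
# Two series that share a linear trend but have *independent* noise
x = [0.01 * i + random.gauss(0, 0.1) for i in range(n)]  # stand-in for ln CO2
y = [0.01 * i + random.gauss(0, 0.1) for i in range(n)]  # stand-in for temperature

raw = r_squared(x, y)                    # inflated by the shared trend
res = r_squared(detrend(x), detrend(y))  # correlation of the residuals only
```

    Here the raw R^2 is large purely because of the shared trend, while the residual R^2 is near zero, even though by construction neither series has any influence on the other.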

  50. Dialectic, Part II of III: Antithesis

    David B. Benson wrote (July 20, 2010 at 12:20 am):

    While correlation is not causation, it is strongly suggestive of it, especially if there is a one-way Granger causality. In any case, lnCO2 is based on the physics.

    Now yes, we know as a matter of physics that there is a relationship between temperature and the log of CO2 concentration, but the effect of increased carbon dioxide will not be instantaneous and there is no reason to expect it to be linear on the decadal scale — particularly if the rate ln CO2 increased were not a linear function of time.

    Third, where do you get OGTR from? You call it an Observed GISTEMP Response, but what is estimated as the response to a doubling of carbon dioxide is more in the neighborhood of 3.0 C, not 2.28 C. The answer presumably is that we aren’t seeing a third of the heating because of the effects of reflective aerosols. But that isn’t something that is actually given in your explanation, is it?
    *
    David B. Benson wrote (July 20, 2010 at 12:20 am):

    In any case, lnCO2 is based on the physics.

    If it is based upon the physics, then why did Barton have to go through the elaborate process of trying to determine what R^2 is for ln CO2 vs. temperature? He could have simply said that we know as a result of the physics that the two are closely related, and been done with it.

    Of course we know that the warming that is the direct result of forcing due to an increase in the concentration of carbon dioxide should be proportional to the log of the concentration. But do we know that it should be proportional after the feedbacks?

    Well, yes, but not so simply. It is something that falls out of the models. It is also something that we can more or less conclude based upon studies of the paleoclimate data. However, both you and Barton presumably weren’t basing your reasoning upon the models or paleoclimate studies, were you? What each of you was trying to do was provide an independent line of argument that people could more easily follow.

    But you were bootstrapping your argument, sneaking in things that you were only warranted to conclude as the result of other far more elaborate forms of evidence and argumentation. Namely paleoclimate studies and ensembles of runs of climate models.
    *
    David B. Benson wrote (July 20, 2010 at 12:20 am):

    AMO explains the mid-century cooling as due to a change in MOC rate.

    … and given the way that the Atlantic Multidecadal Oscillation is defined the first part of this amounts to little more than a tautology.

    The Atlantic Multidecadal Oscillation itself is essentially defined as the variation in temperature over a given, large region of the globe minus the linear trend. If instead of “a given, large region” we simply substituted “the surface of the globe” it would be a tautology. Empty. Meaningless. Having nothing to do with the actual physics involved. And the same is true of the Pacific Decadal Oscillation.
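    The tautology can be made concrete (a toy construction of my own in Python, with made-up numbers): if the “oscillation” index is simply the detrended version of the very temperature series it is being used to explain, then trend plus index reconstructs the series exactly, by construction, no matter what the data are.

```python
def linear_trend(y):
    """Least-squares linear trend of y against time steps 0..n-1."""
    n = len(y)
    t = list(range(n))
    mt, my = sum(t) / n, sum(y) / n
    slope = (sum((a - mt) * (b - my) for a, b in zip(t, y))
             / sum((a - mt) ** 2 for a in t))
    return [my + slope * (a - mt) for a in t]

# Any series at all will do; call it "global temperature" (made-up values)
temps = [0.1, -0.2, 0.05, 0.3, 0.12, 0.5, 0.33, 0.61]

trend = linear_trend(temps)
index = [y - f for y, f in zip(temps, trend)]   # an "AMO-like" index:
                                                # the detrended series itself
# Trend + index returns the original series exactly, so the "explanation"
# is perfect whatever the data are, which is why it explains nothing.
reconstructed = [f + i for f, i in zip(trend, index)]
```

    Defining the index over the North Atlantic rather than the globe weakens but does not remove this circularity.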
    *
    David B. Benson wrote (July 20, 2010 at 12:20 am):

    Indeed, looking more carefully at CO2 concentrations you will discover these went down; look more carefully at the linked study to see how far from “roughly linear” lnCO2 actually is over the 13 decades in question.

    I believe that by looking at the residuals I did essentially that. And the same argument would apply regarding going from 130 data points to only 13 or 2.

    *

    David B. Benson wrote (July 20, 2010 at 12:20 am):

    To be blunt about it, the PDO is unlikely to be a good index of internal variability based on the known locations and magnitudes of deepwater formation.

    First you argued that the Pacific Decadal Oscillation couldn’t be a good index of internal variability because the periodicity was too short. I pointed out that the periodicity of the Atlantic Multidecadal Oscillation is perhaps 50-70 years and that there are two periodicities associated with the Pacific Decadal Oscillation, 15-25 years and 50-70 years. Now you argue on the basis of deep water formation.

    Now consider:

    The AMO signal is usually defined from the patterns of SST variability in the North Atlantic once any linear trend has been removed.

    http://en.wikipedia.org/wiki/Atlantic_multidecadal_oscillation

    Now let’s look at where deep water formation takes place — on the second map at this page:

    The Thermohaline Ocean Circulation
    A Brief Fact Sheet – by Stefan Rahmstorf
    http://www.pik-potsdam.de/~stefan/thc_fact_sheet.html

    I see four places: two up by Greenland and two in the Southern Ocean.

    What this would seem to imply is that the deep water formation directly involved in the Atlantic Multidecadal Oscillation occurs essentially at two points: to the south of Greenland and just to the east of Greenland. But there are two other major regions for deep water formation, both in the Southern Ocean, one well to the west of the West Antarctic Peninsula and one just to the east of the West Antarctic Peninsula.

    Now, do you have studies that show “more” deep water formation occurs in the two spots by Greenland? If so, what are the units of measurement? Volume times height? Volume times change in temperature? Are you taking into account salinity? And can you honestly argue that greater deep water formation occurs in that narrow part of the Atlantic than in the whole of the Southern Ocean, given the strength of the circumpolar circulation? Look at the fourth diagram on Rahmstorf’s page and tell me where you think the most deep water formation takes place. (But yes, be sure to consider the distortion due to the use of a Mercator map.)

    Furthermore, I had already argued that El Nino (or to be more precise, the El Nino-Southern Oscillation, or “ENSO”) clearly has a strong effect upon global temperature. Furthermore, it would appear that the Pacific Decadal Oscillation has a strong effect upon ENSO, such that when the Pacific Decadal Oscillation is in its warm phase El Ninos happen more often, tend to be stronger and tend to last longer whereas the cool phase of the Pacific Decadal Oscillation has the opposite effect. So on the face of it, this would seem to suggest that it is the Pacific Decadal Oscillation that modulates natural variability rather than the Atlantic Multidecadal Oscillation. That is, if we had to choose between the two. But as I have argued this is a false alternative.

  51. Dialectic, Part III of III: Synthesis

    David B. Benson wrote (July 20, 2010 at 12:20 am):

    Since lnCO2+AMO explains all but random noise, there is no need to use what would offer only a minor contribution, be it PDO or aerosols.

    Regarding the PDO: judging from the work of Atmoz, the linear trend plus the Pacific Decadal Oscillation pretty much explains the whole of the temperature variation in the 20th century. Except that, as he argues, this is nearly tautologous.

    Please see:

    On the Relationship between the Pacific Decadal Oscillation (PDO) and the Global Average Mean Temperature
    Atmoz, 3 Aug 2008
    http://atmoz.org/blog/2008/08/03/on-the-relationship-between-the-pacific-decadal-oscillation-pdo-and-the-global-average-mean-temperature/

    And aerosols? A minor contribution? Not judging by the forcing that is generally attributed to aerosols, according to the IPCC. And I believe that the good majority of climatologists would argue that aerosols were responsible for the cooling/no trend from 1940-70. Here Tamino expresses his view — which is simply the mainstream view:

    People often wonder why the planet didn’t warm from 1944 to 1975. Denialists often say that the planet actually cooled for 30 years or more, but this is simply not so; the cooling was confined to a brief period (about 1944 to 1951), followed by relative stability for several decades. But the question remains, with man-made CO2 (and other greenhouse gases) in the atmosphere, why did the planet not warm for several decades mid-century?

    The answer is that during that time, the warming from man-made greenhouse gases was offset by the cooling from man-made aerosols.

    Hemispheres
    Tamino, August 17, 2007

    Are you saying that you know better?
    *
    David B. Benson wrote (July 20, 2010 at 12:20 am):

    By the way, MOC is the preferred term for thermohaline circulation these days; it is exactly the same thing.

    Perhaps, but then why was Rahmstorf referring to the “thermohaline circulation” rather than the MOC?
    *
    David B. Benson wrote (July 20, 2010 at 12:20 am):

    S. Corti, F. Molteni, and T. N. Palmer is rather more sophisticated than you seem to think; the atmospheric physics of CO2 is well understood and explains most of the variance of the past 13 decades; AMO explains almost all the rest.

    More sophisticated than I think? I presume you mean, in some wholly incommunicable way, that they support your views. If you think that they somehow support your views, please at least try to explain how.

    Yes, I believe they are quite sophisticated. If I understand them correctly, there is a natural variability where natural forcing projects onto modes of climate variability, but likewise anthropogenic forcing more or less projects onto those same modes. Thus the forcing due to anthropogenic greenhouse gases and aerosols will result in climate variability that is largely indistinguishable from what would occur due to solar variability.

    This is, I believe, what is meant by the second of these two sentences:

    It is commonly thought that the climate response to anthropogenic forcing should be distinct from the patterns of natural climate variability. But, on the basis of studies of nonlinear chaotic models with preferred states or ‘regimes’, it has been argued that the spatial patterns of the response to anthropogenic forcing may in fact project principally onto modes of natural climate variability.

    Furthermore, looking at Rahmstorf’s article linked to above it would appear there is significant positive feedback in the thermohaline circulation:

    As mentioned above, highest surface densities in the world ocean are reached where water is very cold, while lower densities are found in the saltier but warmer tropical and subtropical areas. In this sense the THC is thermally driven. Nevertheless, the influence of salinity is important and is what causes the non-linearity of the system. This was first described in a classic paper by [10] with the help of a simple box model. Salinity is involved in a positive feedback: higher salinity in the deep water formation area enhances the circulation, and the circulation in turn transports higher salinity waters into the deep water formation regions (which tend to be regions of net precipitation, i.e., freshwater would accumulate and surface salinity would drop if the circulation stopped).

    … which implies just the sort of self-organized criticality that would make the system sensitive to its environment, such that the thermohaline circulation might rapidly adapt to either natural or anthropogenic forcing. But with regimes one could likewise have just the sort of stepwise behavior that seems to have occurred in the changes in the rate at which warming has occurred (e.g., the inflection points around 1915, 1944, 1951 and 1975) — or the regimes which appear to have existed with regard to hurricanes.
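    The salinity feedback Rahmstorf describes can be illustrated with a toy, dimensionless box model in the spirit of Stommel's classic one. This is a rough sketch with made-up parameters, not the model from the paper he cites:

```python
# Toy illustration of the salinity feedback in a Stommel-type box model.
# All quantities are dimensionless; the parameter choices are illustrative only.
# Flow strength:      q = alpha*dT - beta*dS   (thermal minus haline driving)
# Salinity contrast:  d(dS)/dt = H - |q|*dS    (freshwater forcing vs. transport)

def integrate(s0, H=0.2, alpha=1.0, beta=1.0, dT=1.0, dt=0.01, steps=20000):
    """Euler-integrate the salinity contrast from initial value s0."""
    s = s0
    for _ in range(steps):
        q = alpha * dT - beta * s
        s += dt * (H - abs(q) * s)
    return s

# Two nearby initial conditions converge to two different stable steady states:
weak_haline = integrate(0.5)    # low salinity contrast, strong circulation
strong_haline = integrate(0.9)  # high salinity contrast, weak circulation
print(round(weak_haline, 3), round(strong_haline, 3))
```

    With these arbitrary parameters the feedback makes the system bistable, which is exactly the kind of non-linearity and sensitivity to perturbation being discussed above.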

    Please see:

    North Atlantic Storms (NATL TC)
    Tamino, August 10, 2007

  52. TC,

    Two series appearing to be correlated because both are rising is known as the “spurious correlation problem.” As noted in my page on the ln CO2-dT relationship, I accounted for it. After Cochrane-Orcutt iteration, I still get 60% of variance accounted for–not 33%.

    Your “removing the trend” procedure is introducing a statistically unjustified extra factor on the assumption–the ASSUMPTION–that the mutual trend is a coincidence. Yes, once you removed part of the relationship, less is left.

    Duh.
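    For readers unfamiliar with the procedure Barton mentions, a Cochrane-Orcutt iteration can be sketched in a few lines. This is a toy version run on synthetic data, not a reproduction of his analysis:

```python
# Minimal Cochrane-Orcutt sketch: correct an OLS fit of y on x for AR(1)
# residual autocorrelation by quasi-differencing. Illustrative data only.
import random

def ols(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return b, my - b * mx

def cochrane_orcutt(x, y, iters=20):
    b, a = ols(x, y)
    rho = 0.0
    for _ in range(iters):
        resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        # Lag-1 autocorrelation of the residuals
        rho = sum(r1 * r0 for r0, r1 in zip(resid, resid[1:])) / \
              sum(r ** 2 for r in resid)
        # Quasi-differencing removes the AR(1) structure before refitting
        xs = [x[t] - rho * x[t - 1] for t in range(1, len(x))]
        ys = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
        b, a_star = ols(xs, ys)
        a = a_star / (1.0 - rho)  # recover the intercept of the original model
    return b, a, rho

# Synthetic example: a trend of 0.5 per step plus AR(1) noise
random.seed(0)
e, y, x = 0.0, [], list(range(100))
for t in x:
    e = 0.6 * e + random.gauss(0, 1)
    y.append(2.0 + 0.5 * t + e)
b, a, rho = cochrane_orcutt(x, y)
```

    The refit slope stays close to the true value while the estimated rho absorbs the serial correlation that would otherwise inflate the apparent significance.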

  53. Barton Paul Levenson wrote (July 20, 2010 at 7:52 pm):

    Two series appearing to be correlated because both are rising is known as the “spurious correlation problem.” As noted in my page on the ln CO2-dT relationship, I accounted for it. After Cochrane-Orcutt iteration, I still get 60% of variance accounted for–not 33%.

    I will check it out and will likely stand corrected on that point. Out of curiosity, how “significant” is an R^2 of 1 for two points? Would you regard an R^2 of x for 13 points as being as “significant” as an R^2 of x for 130 points?

    Finally, what do you think of Benson’s arguing that the trend in 20th century temperature is dependent upon forcing due to carbon dioxide but independent of forcing due to aerosols, and entirely accounted for (but for noise) by forcing due to carbon dioxide and variability due to the Atlantic Multidecadal Oscillation?

    Anyway, thank you for joining in.
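    The sample-size question can be made concrete with a quick Monte Carlo under the null hypothesis of no relationship. This is purely illustrative; the `chance_of_r2_above` helper and its 0.3 threshold are my own choices:

```python
# How often does pure noise produce a "large" R^2? Under the null hypothesis
# (no real relationship), small samples reach high R^2 by chance far more
# often than large ones. Illustrative simulation only.
import random

def r_squared(x, y):
    """R^2 of a simple linear fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def chance_of_r2_above(n, threshold, trials=2000):
    """Fraction of random x,y pairs of length n with R^2 >= threshold."""
    random.seed(42)
    hits = 0
    for _ in range(trials):
        x = [random.gauss(0, 1) for _ in range(n)]
        y = [random.gauss(0, 1) for _ in range(n)]
        if r_squared(x, y) >= threshold:
            hits += 1
    return hits / trials

# With n = 2 points, R^2 is always exactly 1. With n = 13 a modest R^2 arises
# by chance a few percent of the time; with n = 130 it essentially never does.
p13 = chance_of_r2_above(13, 0.3)
p130 = chance_of_r2_above(130, 0.3)
print(p13, p130)
```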

  54. . . .I’d also expect the air temperature to be positively anomalous as that energy is released. This does not appear to be the case today. . .

    Well, the DMI data is for 80 degrees and northwards, while the “edge” of the pack is farther south (though it is surprising how much open water there is in high latitudes, according to both MODIS imagery and CT; the latter, for example, shows huge swaths of concentrations around 75% that I don’t remember seeing in previous years).

    The Russian side of the Arctic has been really toasty this week, actually, and the Canadian side at least seasonally warm–though my impression is that the minima are mostly above seasonal norms.

    I’ve noticed that denialists love that DMI data this summer, presumably because it’s “cold.” The funniest moment for me came when one fellow argued that that was appropriate since most of the ice that wouldn’t melt by the end of the season was located there!

    Silly me, thinking that temps where the ice was actually melting were more relevant. . .

    • Oops, in my browser at least that last comment was physically displayed far from the comments I was responding to.

      They came in the July 20 subthread with kenm & sekerob, kicked off by Robert at 11:51 AM–in case anybody is trying to follow.

  55. Barton Paul Levenson,

    Is the following something that you would be able to agree to…?

    David B. Benson wrote (July 17, 2010 at 9:25 pm):

    To be blunt about it, the PDO is unlikely to be a good index of internal variability based on the known locations and magnitudes of deepwater formation. Since lnCO2+AMO explains all but random noise, there is no need to use what would offer only a minor contribution, be it PDO or aerosols.

    (emphasis added)

  56. Click to access rahmstorf_eqs_2006.pdf

    “Although the terms THC and MOC are often inaccurately used as if synonymous, there strictly is no one-to-one relation between the two. The MOC includes clearly wind-driven parts, namely the Ekman cells consisting of the transport in the near-surface Ekman layer and a return flow below it. And a direct contribution of wind-driven currents even to the large-scale, deeper overturning is being increasingly discussed. On the other hand, the THC is of course not confined to the meridional direction; rather, it is also associated with zonal overturning cells. Hence, care should be taken with the terminology: the term THC should be reserved for a particular forcing mechanism, e.g., when discussing the influence of cooling or freshwater forcing on the ocean circulation. The term MOC should be used when describing a meridional flow field, e.g. from a model, which most often will show a mix of both wind-driven and thermohaline-flow.”

  57. TC,

    I do think it’s better to use individual years than decades, since using the latter smears out some of the variance. It’s also true that high R^2 is less significant the smaller your sample size. In practice, however, his conclusion may be right–when I do a multiple regression of dT on many causes including aerosols, sunlight, etc., usually only CO2 and the AMO account for significant variation. (Not no variation, but large amounts thereof.)
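    The kind of multiple regression BPL describes can be sketched as follows. The predictor and temperature series here are synthetic stand-ins, not the real CO2, AMO, or aerosol datasets:

```python
# Sketch of a multiple regression of a temperature-like series on several
# candidate predictors. All series below are made-up stand-ins; the point
# is only that least squares can apportion variance among predictors.
import numpy as np

rng = np.random.default_rng(0)
n = 130  # thirteen decades of annual data, say

ln_co2 = np.linspace(5.65, 5.95, n)          # slowly rising, like ln CO2
amo = np.sin(np.linspace(0, 4 * np.pi, n))   # a multidecadal oscillation
aerosol = rng.normal(0, 1, n)                # a predictor that is irrelevant here

# Synthetic "temperature": responds to ln CO2 and AMO but not to aerosol
dT = 3.0 * (ln_co2 - ln_co2[0]) + 0.2 * amo + rng.normal(0, 0.1, n)

# Design matrix with an intercept column; solve by least squares
X = np.column_stack([np.ones(n), ln_co2, amo, aerosol])
coef, *_ = np.linalg.lstsq(X, dT, rcond=None)
# coef[1] should recover roughly 3.0, coef[2] roughly 0.2, coef[3] near 0.0
print(coef)
```

    Only the predictors that actually drive the synthetic series come out with substantial coefficients, which is the pattern BPL reports for CO2 and the AMO.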

  58. Barton,

    I can’t even get his global average temperature anomaly to come out right. Taking 1880-1889 Jan-Dec gives me -27.1, 1880-1889 Dec-Nov gives me -26.8, 1881-1890 Jan-Dec gives me -28.2, 1881-1890 Dec-Nov gives me -27.9. Same thing for each decade no matter how you calculate it. The GTA for a given decade does not match up with his using NASA GISS land and ocean. Maybe he was using a different set?

  59. Barton Paul Levenson wrote (July 21, 2010 at 10:47 am):

    I do think it’s better to use individual years than decades, since using the latter smears out some of the variance. It’s also true that high R^2 is less significant the smaller your sample size. In practice, however, his conclusion may be right–when I do a multiple regression of dT on many causes including aerosols, sunlight, etc., usually only CO2 and the AMO account for significant variation. (Not no variation, but large amounts thereof.)

    Not asking what is the best fit at this point, although as I have pointed out I can’t get even his decadal averages to work out using GISS Land and Ocean.

    Rather, he specifically states (July 20, 2010 at 12:20 am):

    AMO explains the mid-century cooling as due to a change in MOC rate. Indeed, looking more carefully at CO2 concentrations you will discover these went down; look more carefully at the linked study to see how far from “roughly linear” lnCO2 actually is over the 13 decades in question.

    To be blunt about it, the PDO is unlikely to be a good index of internal variability based on the known locations and magnitudes of deepwater formation. Since lnCO2+AMO explains all but random noise, there is no need to use what would offer only a minor contribution, be it PDO or aerosols.

    So forcing due to CO2 matters, but forcing due to aerosols does not. Does that make any sense physically? And he isn’t simply talking correlation at this point, either, as he is stating, “AMO explains the mid-century cooling as due to a change in MOC rate.” He is actually positing a physical mechanism in terms of the MOC rate as an alternative to one based on reflective aerosols, where his mechanism is based upon what could be referred to as an “internal forcing.”

    So when he states that aerosols make only a “minor contribution” (at best) he quite literally means that either their forcing is negligible or their forcing doesn’t enter into the equation with the same weight as forcing due to carbon dioxide. The mid-century cooling is, in his view, almost entirely due to the Atlantic Multidecadal Oscillation, or rather the MOC, and the mainstream view that aerosols are what explain mid-century cooling is simply wrong.

  60. Trying to follow through Benson’s “study” as presented in…

    http://www.realclimate.org/index.php/archives/2010/03/unforced-variations-3/comment-page-12/#comment-168530

    I have already raised issues regarding the fact that I am unable to “replicate” his average decadal temperature given the individual annual temperatures, but there are other issues pertaining to replicability.

    When he states:

    The formula as applied is, for each decade d,

    AE(d) = k(lnCO2(d-1) – lnCO2(1870s) – GTA(1880s)

    … obviously there is a missing closing parenthesis. I assume what he actually means is:

    AE(d) = k(lnCO2(d-1) – lnCO2(1870s)) – GTA(1880s)

    … as it wouldn’t make much sense to do it with the closing parenthesis this way:

    AE(d) = k(lnCO2(d-1) – lnCO2(1870s) – GTA(1880s))

    … but this raises the question, “How many other typos are we going to have to go through in order to replicate ‘his study’?”
    *
    Next issue…

    Going a little further down, he states:

    Although linearly detrended, the AMO for the 13 decades of interest has an average of -0.014 which is removed for this decadal study. Our formula to account for this internal variabilty, in addition to lnCO2, is

    AEP(d) = AE(d) + AxAMO(d)

    where A and k are estimated for best fit to GTA data.

    What is the A that he is referring to? It can’t be the A in AEP, AE or AMO since those are individual variables (estimated temperature anomaly based on Arrhenius and Atlantic Multidecadal Oscillation, estimated based on Arrhenius only, and Atlantic Multidecadal Oscillation), so I have to guess it is either A in Ax or simply Ax – but once again with a typo. Since there is no x it must be Ax.

    Now what of k? It doesn’t occur in that equation. Instead we have to go back to:

    The final term is the adjustment for the way GISTEMP anomalies are reported and there is a constant k to give the temperature change due to the forcing by lnCO2. This constant is traditionally reported for 2xCO2, so

    k = (OGTR for 2xCO2/ln(2).

    OGTR stands for Observed GISTEMP Response and is estimated to be 2.280 K.

    Note that once again a closing parenthesis is missing, although it seems fairly obvious that it must go after the ln(2).

    So, moving along: if k is estimated, is it estimated by NASA, by Benson strictly on the basis of CO2, or by Benson on the basis of CO2 and AMO? After all, if OGTR is estimated by NASA for a doubling of CO2, then k follows directly from the OGTR for a doubling of CO2. But he had stated “where A and k are estimated for best fit to GTA data.” Based on that one sentence it would appear to be estimated simultaneously with A (or rather Ax) for the “best fit to GTA data.”

    Now if it is by NASA, we need to know where that comes from. We need a reference. If it is by him strictly on the basis of CO2 we need to check the calculation. But in any case k will be fixed as far as the equation AEP(d) = AE(d)+AxAMO(d) is concerned. And if k is not fixed in relation to the equation for AEP(d) then AE(d) is no longer fixed despite the appearances to the contrary.

    But as he is saying “best fit to GTA data,” this suggests that the “best fit” for k is the best fit as calculated by Benson, either relative to the equation for AEP(d) or for AE(d). And AE(d) makes the most sense at this point, since the equation AEP(d) = AE(d) + AxAMO(d) suggests that AE(d) has already been calculated.

    So presumably it is for this equation:

    AE(d) = k(lnCO2(d-1) – lnCO2(1870s) – GTA(1880s)

    … or rather (given the missing closing parenthesis):

    AE(d) = k(lnCO2(d-1) – lnCO2(1870s)) – GTA(1880s)

    … and unless told otherwise I believe this is the interpretation we should go with.
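    That reconstructed formula, with k fixed from Benson's stated OGTR of 2.280 K, can be coded directly. The CO2 concentrations and the example values below are placeholders of my own, not his actual inputs; only the -0.275 K base anomaly and the 2.280 K OGTR come from the thread:

```python
# Benson's formula as reconstructed in the discussion above, with k fixed
# from his stated OGTR for doubled CO2. The CO2 concentrations used in the
# example call are made-up placeholders, NOT his actual inputs.
import math

OGTR = 2.280            # "Observed GISTEMP Response" for 2xCO2, his figure
k = OGTR / math.log(2)  # about 3.29 K per unit of ln(CO2)

def AE(co2_prev_decade, co2_1870s, gta_1880s):
    """AE(d) = k*(lnCO2(d-1) - lnCO2(1870s)) - GTA(1880s)."""
    return k * (math.log(co2_prev_decade) - math.log(co2_1870s)) - gta_1880s

# Placeholder example: a decade whose preceding decade averaged 330 ppm,
# against a hypothetical 1870s average of 290 ppm, with the 1880s anomaly
# of -0.275 K mentioned later in the thread
print(round(AE(330.0, 290.0, -0.275), 3))
```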

    Now he does not state where he is getting his CO2 concentrations from, and at least prior to the 1950s these concentrations would have to be estimated, most likely on the basis of CO2 bubbles in ice. But as he is going off of NASA GISS for his temperature anomaly figures, I would assume that he is getting his annual CO2 concentrations from NASA as well. And as he is attempting to build upon your project, it would seem reasonable to go back and check the CO2 concentrations that you are basing your calculations on. You give Law Dome for 1880-1958 and Mauna Loa for 1959-2007. You also give annual temperatures. Could this be where his decadal average temperatures are coming from? Using your figures (which differ from what is given by NASA GISS), for 1880-1889 I have -18.7 and for 1881-1890 I have -19.6, neither of which gives us his decadal average of -27.5. So apparently not — and his decadal temperature anomalies remain a mystery.

    Setting that aside, the table suggests that we need to calculate lnCO2(d)-lnCO2(d-1) (that is, “the diffs” in the log) in order to calculate AE(d) as a function of CO2 concentration. But this will be the CO2 concentration for the decade, not the year. So when calculating lnCO2(d), where d is the decade, are we averaging the CO2 concentrations for the individual years or are we averaging the lnCO2s for the individual years? And are we speaking, for example, of 1880-1889 or of 1881-1890? Averaging lns, for 1880-1889 we get 5.6861 and for 1881-1890 we get 5.68. Performing the same set of calculations for 1890-1899 and 1891-1900, then taking the differences, we get 0.00741 and 0.00675. Rounded, either one of these could be the source of the 0.007. Averaging CO2s (or rather the annual concentrations of CO2) and then calculating the decadal logs, we get for differences 0.007389 and 0.006736. Rounded, either one of these could be the source of the 0.007. So we haven’t eliminated any one of the four possibilities.
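    The distinction between averaging the logs and taking the log of the average can be checked directly. The decade of CO2 values below is made up for illustration:

```python
# The "mean of logs" vs. "log of mean" ambiguity discussed above, on a toy
# decade of annual CO2 concentrations (hypothetical ppm values).
import math

decade = [295.0 + 0.3 * i for i in range(10)]  # made-up, slowly rising ppm

mean_of_logs = sum(math.log(c) for c in decade) / len(decade)
log_of_mean = math.log(sum(decade) / len(decade))

# For slowly varying CO2 the two nearly coincide (by Jensen's inequality,
# mean_of_logs <= log_of_mean), differing only far past the decimal places
# at which the 0.007-vs-0.0067 ambiguity above lives.
print(mean_of_logs, log_of_mean)
```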

    But setting those issues aside: looking at his earlier decadal table, he is showing diffs between successive decades, where he states that the diffs for the lnCO2s are calculated at the decadal level. However, the equation for AE(d) isn’t calculated in terms of the diffs for successive decades but between the current decade and the base decade. So why is the table giving us the diffs for consecutive decades rather than the diffs between the current and base decades? And we know that the diffs are for consecutive decades rather than current and base since the difference is going down, not up. Between current and base the difference would go up over time (as CO2 concentration goes up), not down.

    Just a few of the difficulties I am having trying to make heads or tails of his calculations. Of course I could try to do my own calculations based on GTA, CO2 and AMO, except that it makes about as much sense to me as reading chicken entrails (forcing is forcing, and if either solar or CO2 forcing counts then aerosol forcing counts too, except when you are doing Benson’s calculations), and if I showed that GTA cannot be estimated by means of CO2 and AMO he could simply claim that it was because I calculated it the wrong way, because it wasn’t his way. In any case, obviously I am missing one key piece that would make everything fall into place, since this is, as he puts it, “something fairly simple that many can grasp (after some study, it seems).”

  61. Wow. Just wow. Goddard has a new post up at WUWT claiming that “temperature anomalies are plummeting” and “now [the ‘warmists’] seem to have lost interest in satellites.”

    How does that guy, and his publisher, sleep at night?

  62. David B. Benson

    Timothy Chase // July 21, 2010 at 4:40 pm (and other posts) — The missing parenthsis obviously goes as

    AE(d) = k(lnCO2(d-1) – lnCO2(1870s)) – GTA(1880s)

    as simple dimensional analysis will demonstrate.

    The decades begin with the year ending in 0 and end with the year ending in 9, as should be completely obvious. The “x” in AxAMO means multiply, as in grammar school.

    Both k and A are estimated together, as I thought I made clear.

    AMO won’t measure all of deep water formation, just that in the North Atlantic. There is no decent index for use in the Southern Ocean, but it obviously doesn’t matter. As for aerosols from 1940 to around 1980, AMO will be affected by those as well as MOC; I made that clear, but you didn’t read carefully enough. Of possibly greater interest is the subsequent growth in ABC, the Asian Brown Cloud. In any case, you are attempting to read too much into what is intended to simplify, avoiding the (misplaced) reductionism all too common in climatology.

    One of the advantages of using decades is that both ENSO and the solar cycle are largely averaged out; simplicity is a virtue. As for a relationship between PDO and ENSO, AFAIK this is merely a correlation (which you disparage) and for which not even Granger causality has been established.

    Of greater concern is the (small) possibility that my program miscalculates decadal averages. I’ll check and report later.

    For a study using yearly averages, see Tol, R.S.J. and A.F. de Vos (1998), ‘A Bayesian Statistical Analysis of the Enhanced Greenhouse Effect’, Climatic Change, 38, 87-112.

    Anyway, your various misconceptions regarding the linked study have certainly been an eye-opener for me.

  63. How does that guy, and his publisher, sleep at night?

    With the devil? :)

  64. David B. Benson

    Timothy Chase // July 21, 2010 at 1:10 pm — Indeed, using the GISTEMP J-D column for 1880–1889 produces, by hand, -0.275 K, as advertised.

    However NASA, via NCDC, keeps fiddling with the numbers, so with a more recent copy of the file you might obtain slightly different figures; matters not.

  65. David B. Benson

    Hank Roberts // July 21, 2010 at 10:40 am — Thanks for the link and the quote therefrom. A recent review paper by Carl Wunsch with a more junior colleague now at Harvard certainly didn’t make that distinction.

    Or I misread it, always a possibility.

    In any case, AMO is thought to be a result of changes in MOC, according to the NASA geophysical lab web site.

  66. Seeing More Deeply, Part I: Minor Issues
    Section 1: The first missing parenthesis

    David B. Benson wrote (July 21, 2010 at 11:46 pm):

    The missing parenthsis obviously goes as

    AE(d) = k(lnCO2(d-1) – lnCO2(1870s)) – GTA(1880s)

    as simple dimensional analysis will demonstrate.

    … this would be the interpretation that I took when I raised the problem (July 21, 2010 at 4:40 pm):

    I assume what he actually means is:

    AE(d) = k(lnCO2(d-1) – lnCO2(1870s)) – GTA(1880s)

    … as it wouldn’t make much sense to do it with the closing parenthesis this way: …

    … but as I also suggested, typos in formulas don’t instill confidence in the math.

    Section 2: Multiplying Confusion

    David B. Benson wrote (July 21, 2010 at 11:46 pm):

    The “x” in AxAMO means multiply, as in grammar school.

    When used in an algebraic formula where variable names are more than one letter long, it helps to put spaces on both sides of the x if it is to be interpreted as a multiplication sign. Otherwise it can all too easily be read as an additional letter, perhaps even a subscript of the preceding multi-letter variable name, and a space after the x improves readability. An “x” without the spaces, at least in my experience, is common in arithmetic, not algebra. (Alternatively, you could use the asterisk, which is much more common nowadays.)

    As I pointed out before (July 21, 2010 at 4:40 pm), the following:

    k = (OGTR for 2xCO2/ln(2).

    OGTR stands for Observed GISTEMP Response and is estimated to be 2.280 K.

    … suggests that k is fixed prior to the calculation of AEP(d) by means of a formula in which it does not directly appear:

    AEP(d) = AE(d) + AxAMO(d)

    where A and k are estimated for best fit to GTA data.

  67. Seeing More Deeply, Part II: Major Points
    Section 1: Going South

    David B. Benson wrote (July 21, 2010 at 11:46 pm):

    AMO won’t measure all of deep water formation, just that in the North Atlantic. There is no decent index for use in the Southern Ocean, but it obviously doesn’t matter.

    The Southern Ocean to the west of the West Antarctic Peninsula is essentially an extension of the South Pacific. The Southern Ocean just to the east of the West Antarctic Peninsula is just to the east of this. So one of the two regions of major deep water formation in the Southern Ocean is well within what may still be considered the South Pacific Ocean and the other just outside it.

    And it bears keeping in mind that the Southern Oscillation (the second half of the El Nino – Southern Oscillation) lies principally in the South Pacific Ocean. It would seem that could matter a great deal, especially since deep water formation in both areas appears to be considerably stronger than the two up by Greenland.

    However, this may actually be a point in favor of your theory. If the deep water formation in the North Atlantic is weaker than the deep water formation in the Southern Ocean, then it may be more easily “tipped” and then amplified by means of positive feedback in the Thermohaline Circulation.

    If this is so, then the Pacific Decadal Oscillation might be entrained by the Atlantic Multidecadal Oscillation. And if so the Pacific Decadal Oscillation will tend to lag the Atlantic Multidecadal Oscillation — and it shouldn’t be any surprise that the periodicities of the two oscillations are of roughly the same length. If this lag were say five to fifteen years then it might very well explain the lagged correlation that you seem to have discovered between AMO and global average temperature anomaly. Furthermore, your theory and Atmoz’s analysis:

    On the Relationship between the Pacific Decadal Oscillation (PDO) and the Global Average Mean Temperature
    Atmoz, 3 Aug 2008
    http://atmoz.org/blog/2008/08/03/on-the-relationship-between-the-pacific-decadal-oscillation-pdo-and-the-global-average-mean-temperature/

    … might very well be two closely woven threads of the same cloth.
    *
    Section 2: Somewhat Foggy

    David B. Benson wrote (July 21, 2010 at 11:46 pm):

    As for aerosols from 1940 to around 1980, AMO will be affected by those as well as MOC; I made that clear, but you didn’t read carefully enough. Of possibly greater interest is the subsequent growth in ABC, the Asian Brown Cloud.

    Not as I read it. You stated (July 20, 2010 at 12:20 am):

    AMO explains the mid-century cooling as due to a change in MOC rate…

    To be blunt about it, the PDO is unlikely to be a good index of internal variability based on the known locations and magnitudes of deepwater formation. Since lnCO2+AMO explains all but random noise, there is no need to use what would offer only a minor contribution, be it PDO or aerosols.

    Superficially at least this would suggest that to the extent that you consider the Atlantic Multidecadal Oscillation responsible aerosols cannot be responsible. That is, unless you consider forcing to be what drives the Atlantic Multidecadal Oscillation where the forcing may be solar, greenhouse gases and aerosols.

    But at first you seemed to be taking the Atlantic Multidecadal Oscillation as a starting point of causation for explaining natural variability. Then you seemed to take the AMO to be driven by changes in the Meridional Overturning Circulation — or alternatively, the Thermohaline Circulation. Then you seemed to suggest that the Thermohaline Circulation was at least in part driven by variations in CO2 forcing — but no mention was made of aerosols.

    However, if you see the Thermohaline Circulation as driven largely by and sensitive to variation in net climate forcing (including solar radiance, greenhouse gases and aerosols) then we are essentially in agreement on what I consider to be the central issue.

  68. Seeing More Deeply, Part III: Tidying Up
    Section 1: Somewhat Afield

    David B. Benson wrote (July 21, 2010 at 11:46 pm):

    One of the advantages of using decades is that both ENSO and the solar cycle are largely averaged out; simplicity is a virtue.

    The 50-70 year periodicity of the Pacific Decadal Oscillation (PDO) shouldn’t average out any more than the roughly 50-70 year periodicity of the Atlantic Multidecadal Oscillation. And if warm phases of the PDO imply longer, stronger and more frequent El Ninos, whereas cold phases of the PDO imply longer, stronger and more frequent La Ninas, then no, it won’t average out. I mentioned this above, both in my comment of July 15, 2010 at 6:47 pm and in Oceans Apart? Part I of II (July 17, 2010 at 9:25 pm).
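    This can be checked with the attenuation factor of a boxcar (running-mean) average, |sin(πT/P)/(πT/P)| for an averaging window T and period P. The specific periods below are illustrative round numbers:

```python
# How much of a periodic signal survives a 10-year boxcar average?
# Attenuation factor: |sin(pi*T/P) / (pi*T/P)| for window T and period P.
import math

def decadal_attenuation(period, window=10.0):
    x = math.pi * window / period
    return abs(math.sin(x) / x)

solar = decadal_attenuation(11.0)  # ~11-yr solar cycle: mostly averaged out
pdo = decadal_attenuation(60.0)    # ~60-yr PDO/AMO-scale cycle: survives
print(round(solar, 2), round(pdo, 2))
```

    A decadal mean retains roughly 95% of a 60-year cycle but only about 10% of an 11-year cycle, which is exactly the point: decades average out ENSO and the solar cycle, not a multidecadal oscillation.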
    *
    David B. Benson wrote (July 21, 2010 at 11:46 pm):

    As for a relationship between PDO and ENSO, AFAIK this is merely a correlation (which you disparage) and for which not even Granger causality has been established.

    What I disparage is an analysis of correlation that doesn’t take into account autocorrelation. And I disparage correlation based upon 13 data points — although I would disparage correlation based upon 2 data points even more.

    However, I believe the correlation between PDO and ENSO is considerably stronger than you might think. In addition to the correlation between PDO and ENSO phases over time you have the fact that the two are virtually identical in spatial distribution but for the fact that PDO is strongest in the North Pacific whereas ENSO is strongest in the Equatorial Pacific. But during their warm phases both are cool in the North Pacific and warm in the Equatorial Pacific.

    You can see this here:

    Figure 1 Warm Phase PDO and ENSO.

    The Pacific Decadal Oscillation
    http://cses.washington.edu/cig/pnwc/aboutpdo.shtml

    So there is a bit more to go on than simply a temporal correlation between two scalar values. What we have is an areal and temporal correlation between two two-dimensional fields that vary over time.

    Essentially, over the entire area of an ENSO, the warm phase of the PDO would appear to result in constructive interference with the warm phases of ENSO and destructive interference with ENSO’s cool phases, whereas the cool phase of the PDO results in destructive interference with the warm phases of ENSO and constructive interference with ENSO’s cool phases. So it shouldn’t be any surprise at all that El Ninos are typically stronger, longer and more frequent during the warm phase of the PDO and La Ninas are typically stronger, longer and more frequent during the cool phase of the PDO.
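    The interference argument can be illustrated with a toy superposition of a fast ENSO-like sinusoid riding on a slow, weaker PDO-like one. The periods and amplitudes are arbitrary:

```python
# Toy superposition: a fast ENSO-like oscillation plus a slow PDO-like one.
# During the PDO-like warm phase the combined warm peaks are higher; during
# the cool phase the cool dips are deeper. Illustrative only.
import math

def combined(t):
    enso = math.sin(2 * math.pi * t / 4.0)        # ~4-yr ENSO-like cycle
    pdo = 0.5 * math.sin(2 * math.pi * t / 60.0)  # slower, weaker PDO-like cycle
    return enso + pdo

warm_phase = [combined(t / 10.0) for t in range(0, 300)]    # t in [0, 30): PDO > 0
cool_phase = [combined(t / 10.0) for t in range(300, 600)]  # t in [30, 60): PDO < 0

# "El Ninos" (peaks) are stronger in the warm phase, "La Ninas" (dips) in the cool
print(round(max(warm_phase), 2), round(min(cool_phase), 2))
```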
    *
    Section 2: After the Decimal

    David B. Benson wrote (July 21, 2010 at 11:46 pm):

    Of greater concern is the (small) possibility that my program miscalculates decadal averages. I’ll check and report later.

    If, as you later suggest, my inability to reproduce the values was due to NASA fiddling with the figures, that is something that had occurred to me; it is part of what I meant by different datasets. I didn’t think of it at the time, but the Internet Archive might help there. And for online analyses you might consider WebCite, which maintains a dated image of the page that you are referring to. It wasn’t available at the time that you performed your calculations, but it is now a tool that I find valuable in a variety of contexts.

    **

    Afterword

    In any case, I believe I can see some issues considerably more clearly than before, as a result of your insights, Barton’s, and the insights of others that I was able to bring to bear. Although there was a bit more friction than I might have expected, our exchange certainly had for me the power of dialogue, where sparks of insight gather together and become the source of far greater illumination than any one insight is capable of by itself.

  69. Tim, David, thanks for keeping it together through this debate. You guys have both done valuable stuff in this field and we’re all on the same side. [Patriotic music up… images of eagles, American flags, the Statue of Liberty…]

  70. David B. Benson

    Timothy Chase // July 22, 2010 at 7:13 am (and earlier) — Everybody writes 2xCO2 and by analogy I write AxAMO.

    I have no idea why ocean heat uptake varies as a QPO of long standing (over 500 years for sure). I just note that AMO is an index for that (plus other minor factors) and apply it in the same decade, no lag. The decadal lag for applying lnCO2 stems from the known physics, albeit crudely. But about 1/3rd of the AMO (A ~ 1/3) clearly explains that minor portion of global decadal temperatures not explained by lnCO2. Now the AMO will clearly be affected by aerosols over the North Atlantic, irrespective of deepwater formation rate, which might possibly also be affected by aerosols; I dunno for sure. The AMO will also be affected by nonlinear changes in TSI. So both of those forcings, as well as other nonlinear effects, are all bundled into one nice package. Being mostly changes in ocean heat uptake rate, it explains rather nicely the wobbles in the secular temperature trend.

    You are mistaken about ENSO. The southern measurement point thereof is at Darwin, Australia; check the latitude. There are so few SSTs recorded for the Southern Ocean that even GISTEMP has to skip large portions of it. The deepwater formation there is largely (it is thought) more than balanced by the deepwater upwelling under the Southern Ocean sea ice. There are only a few other locations with deepwater upwelling; I know of two off the east coast of Africa and that which alternates between the PNW and the east coast of Alaska. This latter alternation is the physical manifestation of the PDO, at least for fisheries purposes. In any case the entire SH is just a dead weight being dragged around by the active NH; not entirely as there sometimes is the polar seesaw, but the general concept of a more passive SH is a decent one.

    Barton Paul Levenson // July 22, 2010 at 1:14 pm — Thanks, we are trying. But I’ll trade in all that symbolism for one decent writer; say Thomas Jefferson.

  71. David B. Benson

    cause: 1 a : a reason for an action or condition : motive b : something that brings about an effect or a result, from
    http://www.merriam-webster.com/dictionary/cause

    C causes result R. How do we know that? Now
    http://en.wikipedia.org/wiki/Causality_(physics)
    doesn’t seem to help, but
    http://en.wikipedia.org/wiki/Causality#Causal_Calculus
    suggests using Bayesian reasoning, so instead we have
    I opine C causes result R with (subjective) probability p, the prior. I collect evidence E and use Bayes’s rule to update the (subjective) probability to posterior P.
    Some of the evidence might be in the form of
    http://en.wikipedia.org/wiki/Granger_causality
    http://www.scholarpedia.org/article/Granger_causality
    although I do not know how to formally do so. Nonetheless, it seems plain that predictability is at least part of inferring causation.
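    The Bayes's-rule update described above can be sketched in a few lines; all the numbers below are purely illustrative, not drawn from any climate data.

```python
# Minimal Bayes's-rule update: prior belief p that "C causes R",
# revised after observing evidence E. All numbers are illustrative.

def bayes_update(prior, p_e_given_c, p_e_given_not_c):
    """Posterior P(C|E) via Bayes's rule."""
    numerator = p_e_given_c * prior
    evidence = numerator + p_e_given_not_c * (1.0 - prior)
    return numerator / evidence

# Subjective prior p = 0.5; evidence E is three times as likely
# if C causes R than if it does not.
posterior = bayes_update(0.5, 0.6, 0.2)
print(round(posterior, 3))  # 0.75
```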

    Indeed BPL has worked this out using at least 130 years of global temperatures as the dependent variable. But I have done no correlations on 13 decades of data other than the autocorrelations (which are meaningful). Using just lnCO2 alone, the autocorrelations strongly suggest a QPO is present in the residuals. Using lnCO2+AMO results in autocorrelations demonstrating nothing else is present other than random noise. Indeed, looking at the first set of autocorrelations finally led me to the AMO.
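    As a rough illustration of how residual autocorrelations can reveal a quasi-periodic oscillation, here is a minimal sketch on synthetic residuals; the period-6 series is made up purely for demonstration.

```python
# Sketch: lag autocorrelation of regression residuals, as one might use
# to spot a quasi-periodic oscillation (QPO). Synthetic data only.
import math

def autocorr(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

# Residuals containing a period-6 oscillation.
resid = [math.sin(2 * math.pi * t / 6) for t in range(60)]
# A QPO shows up as autocorrelation peaking near the period (lag 6)
# and dipping strongly negative near the half-period (lag 3).
print(autocorr(resid, 6) > 0.8, autocorr(resid, 3) < -0.8)  # True True
```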

  72. Seeing as the above essay is about the Arctic sea ice I thought people might like a bit of an update.

    Looking at the NSIDC, it was updated two days ago. They attribute the fast rate of decline through May and June to a strong Arctic Dipole. This broke down in the early part of July and was replaced by low-pressure cyclones which greatly slowed the rate of melt, so that sea ice extent was no longer below 2007’s to-date figure and had nearly equalled 2006’s. However, over the past few days the Arctic Dipole has been rebuilding, and 2010’s sea ice extent now appears to be tracking midway between 2006’s and 2007’s. Where things go from here will depend upon whether or not the dipole continues to build.

    Please see:

    Arctic Sea Ice News And Analysis
    http://nsidc.org/arcticseaicenews/

    Related news: UW Polar Science Center is once again updating their sea ice volume anomaly chart, with the most recent update being for 2010-07-17, which I believe was posted today. The sea ice volume anomaly briefly dropped below -11,000 km^3 but has risen roughly back to where it had been prior to the update, above -11,000 km^3. As of yet it does not appear to be showing the effects of a rebuilding Arctic Dipole.

    Please see:

    Arctic Sea Ice Volume Anomaly
    http://psc.apl.washington.edu/ArcticSeaiceVolume/IceVolume.php

  73. Regarding the University of Washington Polar Science Center’s Arctic Sea Ice Volume Anomaly…

    Hank tells me a number of people were watching it, himself included. A few saw the 2010-07-17 graph go up on the 17th. But like me, he was watching and it didn’t go up for him until yesterday, either. And it wasn’t due to browser caches (he empties his, and I am pretty sure I did a refresh as well), so he believes they have multiple servers that aren’t necessarily kept in sync.

    Reminds me of when the British Centre for Science Education switched from one server to another and it took about a day for the new IP address to propagate to all the directing servers. We had email lists that people could use through the BCSE’s web-based interface at the time. A lot of messages weren’t showing up, but only for some people, not others. In that case our webmaster actually had to move over some of the messages that had been posted at the old server.

    Then there was that old growing delay between different members of an IRC chat room that would become longer and longer until the connection was entirely lost and it was time to reconnect. There was a term for that, but I can’t remember what it was.

  74. East and West
    Part I of II: The Atlantic

    David B. Benson wrote (July 22, 2010 at 10:29 pm):

    Everybody writes 2xCO2 and by analogy I write AxAMO.

    This is a minor point, as it is a detail more of form than of substance, and a rather small one at that; but as it has to do with how well people get their point across, I would argue against doing it this way.

    2 is a number, so there is no possibility of confusing 2x with a single variable. And since x is lower case but the C in CO2 is upper case, people will typically keep from making the mistake of believing that xC… is some sort of variable.

    But since A is a variable, the confusion can quite easily arise. In arithmetic the x is more or less necessary, as it demarcates one variable from another. But by the time you get to algebra, the x as a sign for multiplication is usually dropped, I believe precisely because of the potential for confusion.
    *
    David B. Benson wrote (July 22, 2010 at 10:29 pm):

    I have no idea why ocean heat uptake varies as a QPO of long standing (over 500 years for sure).

    This is a point that interests me. You are referring to it as “heat uptake.” However, at least in the case of El Nino which also largely involves a rise in temperature in the tropics, the rise in temperature at the surface isn’t heat uptake by the ocean but heat being released from the ocean into the atmosphere as a body of warm water that had been well below the surface is finally able to rise to the surface and heats the atmosphere. Then as the El Nino decays that warm water spreads as the result of ocean circulation — and that is actually when you see the global temperature anomaly grow the most.

    In the case of the Atlantic Multidecadal Oscillation, do we actually know whether the increase in temperature anomaly at the surface results in net heat uptake or heat release by the Atlantic Ocean? You keep referring to it as “uptake,” but if it were in this respect comparable to El Nino it would be net heat release.

    Honestly I am not disagreeing with you here. I am actually curious and would like very much to know the answer.
    *
    David B. Benson wrote (July 22, 2010 at 10:29 pm):

    I just note AMO is an index for that (plus other minor factors)…

    I would consider AMO to be the thing being measured, not the measure itself. There is an ENSO index that is the standard index for measuring the phase and strength of El Nino Southern Oscillation and there is a PDO index for the Pacific Decadal Oscillation as well, but both are numerical measures of the phenomena, not the phenomena themselves — as is indicated by the maps here:

    Figure 1 Warm Phase PDO and ENSO.

    … available at:

    The Pacific Decadal Oscillation
    http://cses.washington.edu/cig/pnwc/aboutpdo.shtml
    *
    David B. Benson wrote (July 22, 2010 at 10:29 pm):

    … and apply it in the same decade, no lag.

    Then it would seem that as a temperature anomaly that is occurring simultaneously with other temperature anomalies such as El Ninos, its state at a given time wouldn’t normally be regarded as something that could serve as a causal explanation for them, as they are measured at the same time. That is, any more than they could serve as a causal explanation for it.

    But as you are no doubt well aware there exists a long distance correlation between temperature anomalies — one that typically reaches a great deal further than the correlation between temperatures. Which is one reason why NASA estimates the average global temperature anomaly rather than average global temperature.
    *
    David B. Benson wrote (July 22, 2010 at 10:29 pm):

    The decadal lag for applying lnCO2 stems from the known physics, albeit crudely.

    The known physics being that increased levels of carbon dioxide reduce the rate at which thermal radiation is able to escape at the top of the atmosphere, while doing next to nothing to reduce the rate at which radiation enters the climate system. Therefore it creates an imbalance between the rate at which energy enters the climate system (as the earth is warmed by the absorption of sunlight) and the rate at which energy leaves it.

    And it takes a while for the net rate at which energy is entering the climate system to warm that system. However, as the climate system gradually warms, the rate at which it emits thermal energy increases until it reaches a quasi-equilibrium where the rate at which thermal energy enters the system is equal to the rate at which thermal energy escapes the system.

    Although oddly enough, it is my understanding that according to climate model simulations, if we were to entirely eliminate anthropogenic CO2 emissions today rather than continue with Business As Usual (BAU), we would see very little change in the global average temperature anomaly for roughly 40 years. Only after that time would the temperature anomalies of the two alternate future trajectories of the earth (the one with the total elimination of CO2 emissions, the other with Business As Usual) begin to really diverge. In no small part this is due to the fact that our deep oceans have a great deal of “thermal inertia”: they have to absorb a great deal of energy before they begin to warm appreciably.

    This being the case I have to ask why the lag that you see is only ten years rather than something a bit greater than 40 years. I am not arguing that you are wrong on this point. As a matter of fact I believe I remember seeing something about it being ten years as well. But it puzzles me.

  75. East and West
    Part II of II: The Pacific

    David B. Benson wrote (July 22, 2010 at 10:29 pm):

    You are mistaken about ENSO. The southern measurement point thereof is at Darwin, Australia; check the latitude. There are so few SSTs recorded for the Southern Ocean that even GISTEMP has to skip large portions of it.

    The measurement being for the index, not the anomaly itself. However, you are right about SSTs not being measured so much in the southern ocean. As a matter of fact, while both ENSO and PDO clearly reach fairly far south as you can see here:

    … and here:

    … by comparing their warm and cool phases it is hard to say whether they in fact reach the Southern Ocean itself — for the very reason that you have pointed out. And you can see that here:

    The Real PDO Warm Phase?

    … from:

    On the Relationship between the Pacific Decadal Oscillation (PDO) and the Global Average Mean Temperature
    Atmoz, 2008-08-03
    http://atmoz.org/blog/2008/08/03/on-the-relationship-between-the-pacific-decadal-oscillation-pdo-and-the-global-average-mean-temperature/

    The temperature data that Atmoz is performing principal component analysis on reaches only a little further south than New Zealand. Yes, this is further south than Melbourne, Australia, and even further south than Tasmania. But not by much. Not really my point, inasmuch as I was concerned with the possibility of a causal link between deep water formation and surface temperature, as both are part of the Thermohaline Circulation and will likely be linked even if they aren’t right on top of one another.

    But I argued that despite the fact that there is more deep water formation taking place in the Southern Ocean than around Greenland, I thought that the deep water formation taking place around Greenland might play a greater role in explaining global temperature anomaly variation, that is, once one removes the linear trend, or alternatively, the trend due to growing levels of carbon dioxide.

    The reason being? Precisely the fact that there was less deep water formation taking place there, and that it was therefore more likely to experience comparatively larger fluctuations that could then be amplified by the Thermohaline Circulation in the North Atlantic.
    *
    David B. Benson wrote (July 22, 2010 at 10:29 pm):

    The deepwater formation there is largely (it is thought) more than balanced by the deepwater upwelling under the Southern Ocean sea ice.

    I wouldn’t be surprised.
    *
    David B. Benson wrote (July 22, 2010 at 10:29 pm):

    There are only a few other locations with deepwater upwelling; I know of two off the east coast of Africa and that which alternates between the PNW and the east coast of Alaska. This latter alternation is the physical manifestation of the PDO, at least for fisheries purposes.

    I had wondered about the Pacific Decadal Oscillation. This helps to explain why the Pacific Northwest is so strongly affected. But there is another aspect to it, I think.

    As the Northern Hemisphere has comparatively more land than the southern, it has less thermal inertia. Therefore the temperature swings that it is subject to will be greater, and this should have more of an effect upon ocean circulation in the north than the smaller swings in temperature will have on circulation in the south. This is in fact a component that you point to later when you state (July 22, 2010 at 10:29 pm):

    In any case the entire SH is just a dead weight being dragged around by the active NH; not entirely as there sometimes is the polar seesaw, but the general concept of a more passive SH is a decent one.

    Then there is another component, one that is independent of ocean circulation. If temperatures swing more in the north then temperature variation in the north will have more effect upon the variation in global temperature simply because there is less temperature variation in the south.
    *
    David B. Benson wrote (July 22, 2010 at 10:29 pm):

    Now the AMO will clearly be affected by aerosols over the North Atlantic, irrespective of deepwater formation rate, which might possibly also be affected by aerosols, I dunno for sure. The AMO will also be affected by nonlinear changes in TSI. So both of those forcings, as well as other nonlinear effects, are all bundled into one nice package.

    Not quite so neat, actually, and this is a bit of a problem for what I was suggesting earlier, although exactly how much is another question. Our anthropogenic tropospheric aerosols have a residence time of roughly ten days.

    US emissions will tend to be carried over the Atlantic, European over Asia and Asian over the Pacific. Throughout most of the 20th Century the good majority of industrial production took place in the United States and Europe.

    In fact, the United States accounted for roughly half of the Gross World Product at the end of World War II, and as such would likely have accounted for roughly as much of the sulfates and nitrates that we were putting into the atmosphere at the time.

    This being the case, the reflective aerosols due to such sulfates and nitrates would have been distributed principally over the North Atlantic and Asia. And this being the case, the forcing due to sulfates and nitrates would have been more localized than, say, that due to carbon dioxide.

    In fact this is a large part of what Tamino’s earlier post was all about:

    If man-made aerosols were responsible for the cessation of global warming mid-century (which they were), and the climate effect of man-made aerosols tends to be concentrated in the region of emission (which it does), and the vast majority of industrial activity is in the northern hemisphere (which it is), then the mid-century cooling impact of aerosols should be concentrated in the northern hemisphere.

    Hemispheres
    Tamino, August 17, 2007

    Given its far greater residence time carbon dioxide becomes much more evenly distributed. Therefore it is much more appropriate there to say that forcing is forcing. But even then carbon dioxide warms the troposphere while cooling the stratosphere whereas solar insolation warms both.

    Likewise insofar as carbon dioxide reduces the rate at which thermal energy escapes into space it will have more of an effect towards the poles whereas increased solar insolation will tend to warm the tropics more. This is due to net thermal energy lost by radiation to space towards the poles but net thermal energy gained by radiation towards the equator — with net thermal energy transport being poleward due to atmospheric and oceanic circulation.

    However, the distribution of anthropogenic tropospheric aerosols is changing. While the United States and Europe have (largely) cleaned up their act with respect to reflective sulfates and nitrates, the Asian Brown Cloud is relatively recent, being largely due to the growing Chinese economy. It will disproportionately affect the Pacific Ocean. For a while.
    *
    Incidentally, my favorite quote has long been, “I vow, upon the altar of God, eternal hostility towards every form of tyranny over the Mind of Man.”

  76. David B. Benson

    Timothy Chase // July 23, 2010 at 10:11 pm — Would it be better for me to write 2*CO2 and A*AMO?

    Ocean heat uptake means that the deep ocean is removing heat from the surface (atmosphere+shallow ocean). AMO is an index for the variations in this rate. And by the way, I meant to write that deepwater upwelling occurs in two locations along the west coast of Africa; just west of the deserts, south and north.

    ABC aerosols are clearly affecting me; in the summer time I see it every evening. Affects you as well, but you may have too much locally produced pollution for the effect to stand out?

    As for the period following WW II, CO2 concentrations went up only a tiny amount, with a few years of actual decline. The AMO was negative around then, suggesting a larger than average ocean heat uptake as well as the aerosol effect.

    The decadal lag in applying LnCO2 crudely fits the known physics of the shallow ocean. A better fit is given by Tamino’s two box model, linked in my study.

    As for your earlier complaints about R^2 and the number of data points, use instead
    http://en.wikipedia.org/wiki/Coefficient_of_determination#Adjusted_R2
    with degenerate values of zero for but one data point and -infinity for two data points and one parameter. With 13 data points and but two parameters the adjustment is small.
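    For reference, the adjusted R^2 from the Wikipedia page linked above is straightforward to compute; the R^2 value below is illustrative, not taken from anyone’s fit.

```python
# Standard adjusted R^2: penalizes R^2 for the number of fitted
# parameters p relative to the sample size n.
def adjusted_r2(r2, n, p):
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# With 13 decadal data points and two parameters, the penalty is small:
# an R^2 of 0.9 adjusts only to 0.88.
print(round(adjusted_r2(0.9, 13, 2), 3))  # 0.88
```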

  77. David B. Benson

    Timothy Chase asks why one decade lag rather than, say, four. The simplest answer is that it works better than the alternatives I have studied, although a much more complex statistical procedure is used in one of the models in Tol, R.S.J. and A.F. de Vos (1998), ‘A Bayesian Statistical Analysis of the Enhanced Greenhouse Effect’, Climatic Change, 38, 87-112.

    Consider another simplified one box model, a linear system with a reservoir with time measured in units of the characteristic time of the linear system. Applying a unit step input results in the initially empty reservoir being 39% full in 1/2 of a time unit, 63% full in one time unit, 86% full in two time units, 95% full in three time units and 98% full in four time units.

    Treating the atmosphere+shallow ocean as such a linear system with a characteristic time of one decade (too short and also a two box model is superior) we see that a sudden decrease in CO2 concentration, if maintained against ocean outgassing, would be close to equilibrium again in 3–4 decades.
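    The step response of the one-box model above is just 1 − e^(−t), with t in units of the characteristic time, and the quoted percentages can be checked directly:

```python
# One-box model: a linear reservoir with characteristic time tau, driven
# by a unit step input. Fill fraction after t units of tau is 1 - e^(-t).
import math

def fill_fraction(t):
    return 1.0 - math.exp(-t)

for t in (0.5, 1, 2, 3, 4):
    print(f"{t} time units: {100 * fill_fraction(t):.0f}% full")
# 0.5 -> 39%, 1 -> 63%, 2 -> 86%, 3 -> 95%, 4 -> 98%
```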

  78. David B. Benson wrote (July 24, 2010 at 1:39 am):

    Would it be better for me to write 2*CO2 and A*AMO?

    No to the former, since we are actually referring to a doubling of CO2 rather than simply twice the CO2; in that case 2xCO2 actually makes more sense than, say, 2CO2 or 2*CO2, either of which would imply multiplication, as in chemistry where you will have twice as much of one molecule as of another participating in a given chemical reaction.

    But in the case of 2xCO2 you are indicating something more specific: the doubling not of quantity but of concentration relative to a given concentration, usually the “pre-industrial” value of 275 ppm. Anyway, I thought of mentioning as much in what you were responding to, but it seemed a bit much at the time. (Not for you so much as for a more general audience, particularly given the weight that I associated with the topic.)

    And as I pointed out before, the latter, AxAMO, invites confusion insofar as several letters in a row are typically used to refer to a single variable. So yes in the latter case.
    *
    David B. Benson wrote (July 24, 2010 at 1:39 am):

    Ocean heat uptake means that the deep ocean is removing heat from the surface (atmosphere+shallow ocean).

    I realize that when you refer to heat uptake by the ocean, this means that it is absorbing heat from the surface rather than releasing it. But that was not my question. I was instead asking whether you in fact know that heat uptake is occurring during the warm phase of the Atlantic Multidecadal Oscillation, since we know that during the warm phase of El Nino the ocean is actually releasing heat.

    Please see East and West, Part I:

    You are referring to it as “heat uptake.” However, at least in the case of El Nino which also largely involves a rise in temperature in the tropics, the rise in temperature at the surface isn’t heat uptake by the ocean but heat being released from the ocean into the atmosphere as a body of warm water that had been well below the surface is finally able to rise to the surface and heats the atmosphere. Then as the El Nino decays that warm water spreads as the result of ocean circulation – and that is actually when you see the global temperature anomaly grow the most.

    In any case, when I asked I was genuinely interested, just as I stated, and hoped that you might know the answer, which I have so far been unable to find myself.
    *
    David B. Benson wrote (July 24, 2010 at 1:39 am):

    AMO is an index for the variations in this rate.

    Actually I believe it is a measure of average temperature anomaly:

    The Atlantic Multidecadal Oscillation, or AMO index, represents annual ocean temperature anomalies averaged across the North Atlantic (0-70° N).

    Pacific and Atlantic Ocean Influences on Multidecadal Drought Frequency in the U.S.
    Greg McCabe et al, April 2004 (USGS)
    http://wwwpaztcn.wr.usgs.gov/rsch_highlight/articles/200404.html

    … that is detrended, that is, where the global warming trend (that is assumed to be linear) has been subtracted:

    Fig. 5.58. (a) AMO index: the ten-year running mean of detrended Atlantic sea surface temperature anomaly (SSTA, °C) north of the equator.

    Introduction to Tropical Meteorology: Chapter 5 Tropical Variability, 5.3 Sources of Decadal Variability, 5.3.2 The Atlantic Multidecadal Oscillation (AMO)
    http://www.meted.ucar.edu/tropical/textbook/ch5/tropvar_5_3_2.html

    … but if we are in fact speaking of a running mean, at least this would imply a lag of sorts that is built into the index, in the sense that it is the average over the past ten years. I believe there is also simply a “raw” monthly AMO index that is based on current temperatures, and an annual one, in addition to the version that averages over the past decade. Not sure which you used. However, the following:

    Upper panel: AMO index: the ten-year running mean of detrended Atlantic sea surface temperature anomaly (SSTA, °C) north of the equator.

    http://www.aoml.noaa.gov/phod/faq/amo_fig.php

    … that you linked to from your comment at Real Climate refers to the 10-year running mean. In the same comment, dated 29 March 2010 at 2:26 PM, you of course refer to the detrending, the deep ocean as a heat sink, etc.

    So it would seem that you already knew it is an average temperature anomaly of sorts, where the averaging is over area and time, and that when you refer to it as “an index for the variations in this rate” [the rate at which “the deep ocean is removing heat from the surface”] you were being less than forthright.

    Of course the rate of transfer of heat will no doubt vary depending upon the temperature of the ocean — as the result of the temperature differential that exists between the ocean and the atmosphere. But then that is a temperature differential between ocean and atmosphere, not temperature anomaly of the ocean relative to a trend.
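    The recipe for the AMO index quoted above (linearly detrend the SST anomalies, then take a running mean) can be sketched as follows; the series and the 3-point window are toy choices purely for brevity, standing in for the real SSTA data and ten-year window.

```python
# Sketch of the AMO-index recipe: remove a linear trend, then smooth
# with a running mean. Synthetic data for illustration only.

def detrend(x):
    """Remove the least-squares linear trend from x."""
    n = len(x)
    tm = (n - 1) / 2.0
    xm = sum(x) / n
    slope = (sum((t - tm) * (x[t] - xm) for t in range(n))
             / sum((t - tm) ** 2 for t in range(n)))
    return [x[t] - xm - slope * (t - tm) for t in range(n)]

def running_mean(x, window):
    return [sum(x[i:i + window]) / window for i in range(len(x) - window + 1)]

# A pure linear "warming trend" detrends to (numerically) zero residual,
# so its index is flat at zero: the trend itself never enters the index.
index = running_mean(detrend([0.1 * t for t in range(10)]), 3)
print(all(abs(v) < 1e-12 for v in index))  # True
```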
    *
    David B. Benson wrote (July 24, 2010 at 1:39 am):

    ABC aerosols are clearly affecting me; in the summer time I see it every evening.

    I do see that the haze is sometimes visible in California:

    The brownish haze, sometimes in a layer more than a mile thick and clearly visible from airplanes, stretches from the Arabian Peninsula to the Yellow Sea. In the spring, it sweeps past North and South Korea and Japan. Sometimes the cloud drifts as far east as California.

    U.N. Reports Pollution Threat in Asia
    2008-11-14
    http://www.nytimes.com/2008/11/14/world/14cloud.html?_r=1

    *
    David B. Benson wrote (July 24, 2010 at 1:39 am):

    Affects you as well, but you may have too much locally produced pollution for the effect to stand out?

    Undoubtedly it affects me. What I had said in East and West, Part II was:

    Our anthropogenic tropospheric aerosols have a residence time of roughly ten days.

    The residence time is simply the mean amount of time that particles or molecules will remain within the atmosphere. It doesn’t mean that they all suddenly drop to the ground at the end of that period. And as most of the aerosols drop out before reaching the US west coast, the cooling effect of the reflective aerosols should be felt primarily over the Pacific Ocean, just as the reflective aerosols emitted by the United States would have their cooling effect primarily over the Atlantic, and the cooling effect of Europe’s would have been felt primarily over Asia before reaching the Pacific Ocean. And just as the cooling effect of aerosols emitted in the higher latitudes of the Northern Hemisphere would be felt primarily in the Northern Hemisphere:

    If man-made aerosols were responsible for the cessation of global warming mid-century (which they were), and the climate effect of man-made aerosols tends to be concentrated in the region of emission (which it does), and the vast majority of industrial activity is in the northern hemisphere (which it is), then the mid-century cooling impact of aerosols should be concentrated in the northern hemisphere.

    Hemispheres
    Tamino, August 17, 2007

    … the cooling effects of the reflective aerosols should be strongest where those aerosols are thickest.
    *
    David B. Benson wrote (July 24, 2010 at 1:39 am):

    As for the period following WW II, CO2 concentrations went up only a tiny amount, with a few years of actual decline. The AMO was negative around then, suggesting a larger than average ocean heat uptake as well as the aerosol effect.

    Actually, according to the table that you provide in the comment at Real Climate dated 29 March 2010 at 2:26 PM, the Atlantic Multidecadal Oscillation was positive for the 1930s, 1940s and 1950s, although you list the 1920s as negative. The graph that you link to from there:

    http://www.aoml.noaa.gov/phod/faq/amo_fig.php

    shows the same thing. Which suggests that you were using a ten-year running mean in your calculations.
    *
    David B. Benson wrote (July 24, 2010 at 1:39 am):

    The decadal lag in applying LnCO2 crudely fits the known physics of the shallow ocean. A better fit is given by Tamino’s two box model, linked in my study.

    Yes, you state:

    The decadal delay in applying the forcing is a simplification of the two box model studied in
    https://tamino.wordpress.com/2008/10/19/volcanic-lull/
    https://tamino.wordpress.com/2009/08/17/not-computer-models/
    from which I determine that about 11-13 years would be slightly better, but thought decadal averages would be more helpful in showing the essence of the climate response.

    … and in “Not Computer Models” Tamino uses a characteristic timescale of 30 years for his ocean:

    I’ll allow the atmosphere to respond very quickly (in a single year) while for the oceans I’ll use a timescale of 30 years.

    Not Computer Models
    Tamino, August 17, 2009

    At first it is the characteristic timescale of the atmosphere that matters most, then the ocean’s. If I understand this correctly, the response time of 11-13 years is largely due to how quickly the atmosphere responds to a single slug. However, the longer carbon emissions go on, the more warming there will be in the pipeline due to elements of the system with longer characteristic timescales, or so I would presume. But this won’t change the “response time” to a given slug at a given time. And this is why using a lag time of 10 (or 11-13) years is justified, as a rough approximation.
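    A toy version of the two-timescale response being discussed, using the 1-year and 30-year characteristic times mentioned above; the blending weight is my own illustrative guess, not Tamino’s fitted value.

```python
# Toy two-box step response: a fast box (atmosphere, tau ~ 1 yr) and a
# slow box (ocean, tau ~ 30 yr) each relax toward a unit step forcing,
# and the combined response is a weighted blend of the two.
import math

def two_box(t, tau_fast=1.0, tau_slow=30.0, w_fast=0.4):
    """Weighted blend of two exponential relaxations (weights are guesses)."""
    fast = 1.0 - math.exp(-t / tau_fast)
    slow = 1.0 - math.exp(-t / tau_slow)
    return w_fast * fast + (1.0 - w_fast) * slow

# The fast box dominates the first few years of the response; the slow
# box supplies the long tail of warming still "in the pipeline".
print(round(two_box(2), 3), round(two_box(100), 3))
```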

    Anyway, a somewhat relevant post at Real Climate is:

    Friday Roundup, Section “The sweet spot for climate predictability”
    13 July 2007
    http://www.realclimate.org/index.php/archives/2007/07/friday-roundup/
    *
    David B. Benson wrote (July 24, 2010 at 1:39 am):

    As for your earlier complaints about R^2 and the number of data points, use instead
    http://en.wikipedia.org/wiki/Coefficient_of_determination#Adjusted_R2
    with degenerate values of zero for but one data point and -infinity for two data points and one parameter. With 13 data points and but two parameters the adjustment is small.

    I didn’t know that about the adjusted R^2. Interesting.

    But as Barton puts it:

    There is something called the ‘spurious correlation problem’ when dealing with two series increasing over time–they may seem to be correlated just because both are rising.

    Once he takes it into account, R^2 drops from 0.764 to 0.6. Given the averaging that you are performing at the decadal level, and the extent to which this averages out variation, I can only assume that the spurious correlation problem for your calculations is considerably greater.
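    The spurious-correlation effect Barton describes is easy to demonstrate with two independent noisy series that merely share a trend; all numbers below are synthetic.

```python
# Two unrelated series that both trend upward correlate strongly;
# removing the shared trend collapses the correlation toward zero.
import math
import random

def corr(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
n = 200
x = [0.05 * t + random.gauss(0, 1) for t in range(n)]
y = [0.05 * t + random.gauss(0, 1) for t in range(n)]  # independent noise
print(round(corr(x, y), 2))  # high, driven purely by the common trend

detr_x = [a - 0.05 * t for t, a in enumerate(x)]
detr_y = [b - 0.05 * t for t, b in enumerate(y)]
print(round(corr(detr_x, detr_y), 2))  # near zero after detrending
```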
    *
    In any case, it would appear that some of the “mistakes” you made were for the sake of humor, such as when you played dumb about the AMO being essentially an average temperature anomaly (i.e., “AMO is an index for the variations in this rate”), or when you stated that it had been negative when it was positive (i.e., “The AMO was negative around then, suggesting a larger than average ocean heat uptake as well as the aerosol effect”), but at other times you have been genuinely helpful, such as when you directed me to the two box model for explaining the lag time being used for slugs of carbon dioxide.

    Saying things that you know to be incorrect isn’t simply misdirecting me, but misdirecting anyone who visits this blog, reads what you say and walks away without seeing through it. For others it will make proponents of science appear less than honest. And for other proponents of science it will make the use of more relaxed standards of argumentation appear acceptable.

    I would ask that you quit. There are other ways to introduce humor if that is your aim.

    That said, I genuinely appreciate those times that you have been helpful.

  79. David B. Benson,

    If you think NASA makes you work to keep your calculations current with the most recent revision of their data you should have a look at NOAA. I did calculations earlier this year and I have had to re-enter data twice now, this time going back to January 2009.

    Oh joy.

  80. David B. Benson

    Timothy Chase // July 24, 2010 at 6:46 pm — Here is the AMO data I used.
    http://www.esrl.noaa.gov/psd/data/correlation/amon.us.long.data
    which I referenced. It is monthly data and states at the bottom “AMO unsmoothed from the Kaplan SST V2”.
    I misremembered just when the AMO was positive and negative. No attempt at humor, my apologies for the error.

    In any case, my interpretation of the AMO is as an index of internal variability; when positive, the MOC rate is lower, so more heat remains in the atmosphere+shallow ocean; when negative, the reverse. This is of course modified by the effects of aerosols and all other nonlinear portions of other forcings, including lnCO2, which happen to affect the North Atlantic. Those effects are rather small, I claim, as shown by the structure of the residuals.

    Of course, AMO is calculated much as you state; I’m using it as a proxy, although I prefer the term “index”.

    This is considerably different from ENSO, which affects only the atmosphere+shallow ocean via heat redistribution: during El Nino heat is redistributed from the Pacific Warm Pool eastwards and upwards; during La Nina heat is redistributed from the Pacific Warm Pool westwards and upwards (with other effects in both cases).

    BPL does not compute R^2, AFAIK, but rather just the correlation coefficient in
    http://bartonpaullevenson.com/Correlation.html
    There is a difference.

    I don’t have a “spurious correlation problem” because temperature is a known function of lnCO2: 1.2 K for 2xCO2 taken alone, then roughly double that due to water vapor feedback, less the heat removed to the deep ocean. So it is simply a matter of calculating OGTR, since one doesn’t know exactly the size of the water vapor feedback, and ocean heat uptake certainly remains poorly constrained, especially in the deep ocean.
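    The lnCO2 dependence just described can be sketched in a few lines of Python. The 1.2 K per doubling (CO2 alone) and the feedback-doubled 2.4 K are the figures from the comment above; the 280 and 390 ppm concentrations are illustrative round numbers of my own, not values from the comment:

```python
import math

# Temperature response as a function of ln(CO2): a fixed sensitivity per
# doubling means dT = S * ln(C/C0) / ln(2).
def delta_T(c, c0=280.0, per_doubling=1.2):
    """Warming in K for concentration c relative to baseline c0 (ppm)."""
    return per_doubling * math.log(c / c0) / math.log(2)

# Illustrative: pre-industrial 280 ppm baseline, ~390 ppm "today".
print(f"CO2 alone:        {delta_T(390.0):.2f} K")
print(f"with WV feedback: {delta_T(390.0, per_doubling=2.4):.2f} K")
```

    By construction, a doubling (280 to 560 ppm) returns exactly the per-doubling sensitivity.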

    Thanks for the RC “sweet spot”; I’m not willing to predict more than one decade ahead with the overly simplified physics I’m using. That prediction of the 2010s looks bad enough to me!

    The two box model has two response times: the atmosphere plus the top few meters of the ocean (Tamino just says atmosphere, but because of wave action the uppermost ocean is entrained with regard to heat distribution) respond together with a characteristic time about one year; the ocean down to the main thermocline responds (in Tamino’s model) with a characteristic time of 30 years. I blend those two together into a single response time for which one decade is close enough. That clearly works fine, but only so long as lnCO2 does not decline. To study actual declines a two box model will be fine on the centennial scale.
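    The blended two-box response described above can be sketched as follows. The time constants (about 1 year for the atmosphere plus uppermost ocean, about 30 years for the ocean down to the main thermocline) are from the comment; the 40/60 weighting of the two boxes is purely an assumption for illustration:

```python
import math

# Two-box step response: each box approaches equilibrium exponentially
# with its own characteristic time, and the total response is a
# weighted blend of the two.
TAU_FAST, TAU_SLOW = 1.0, 30.0   # years (from the comment)
W_FAST, W_SLOW = 0.4, 0.6        # assumed weights; must sum to 1

def two_box_response(t):
    """Fraction of equilibrium warming realized t years after a step forcing."""
    return (W_FAST * (1.0 - math.exp(-t / TAU_FAST))
            + W_SLOW * (1.0 - math.exp(-t / TAU_SLOW)))

for t in (1, 10, 30, 100):
    print(f"year {t:3d}: {two_box_response(t):.2f} of equilibrium response")
```

    The fast box is essentially equilibrated after a few years, so on decadal scales the remaining adjustment is governed by the slow box alone, which is why collapsing the pair into a single roughly decadal response time can work as an approximation.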

    I hope I’ve covered everything and I certainly appreciate your interest.

  81. David B. Benson wrote (July 24, 2010 at 3:33 am):

    Consider another simplified one box model, a linear system with a reservoir, with time measured in units of the characteristic time of the linear system. Applying a unit step input results in the initially empty reservoir being 39% full in 1/2 of a time unit, 63% full in one time unit, 86% full in two time units, 95% full in three time units and 98% full in four time units.

    Essentially it is governed by the exponential law of decay.
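    Those fill fractions are just 1 − e^(−t/τ); a quick check in Python:

```python
import math

# One-box linear reservoir: fraction filled t characteristic times
# after a unit step input.
def step_response(t):
    return 1.0 - math.exp(-t)

for t in (0.5, 1, 2, 3, 4):
    print(f"t = {t} time units: {step_response(t):.0%} full")
```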

    The relevant post in this case is:

    Volcanic Lull
    Tamino, October 19, 2008

    There Tamino states:

    The atmosphere responds with a time scale of about 1 year. That doesn’t mean that the full effect is felt in a single year! It means that if a forcing is sustained, then a fraction 1/e (where e=2.718… is the base of natural logarithms) is felt in the first year. But the huge thermal inertia of the oceans gives that component of the system a much longer time scale.

    When he uses the phrase “if a forcing is sustained,” what he is referring to is essentially equivalent to my “slug” of carbon dioxide released at the beginning of the year remaining in the atmosphere. Afterwards he goes on to deal with a model in which there are different components with different “characteristic time scales” and forcing that varies over time. As such it becomes necessary for him to employ integral calculus.

    But fundamentally what we are dealing with is exponential decay. Like a cup of coffee that starts out 40 C warmer than the room, but after 5 minutes is only 20 C warmer, then five minutes later 10 C warmer and then 5 minutes after that only 5 degrees warmer than the room.
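    The coffee-cup numbers correspond to a temperature excess that halves every five minutes, i.e. Newton’s law of cooling with τ = 5/ln 2 minutes; a minimal sketch:

```python
import math

# Newton's law of cooling: the temperature excess above the room decays
# exponentially. Here the excess halves every 5 minutes, matching the
# 40 C -> 20 C -> 10 C -> 5 C sequence in the coffee-cup example.
half_life = 5.0                    # minutes
tau = half_life / math.log(2)      # characteristic time

def excess(t, excess0=40.0):
    """Degrees above room temperature after t minutes."""
    return excess0 * math.exp(-t / tau)

for t in (0, 5, 10, 15):
    print(f"after {t:2d} min: {excess(t):.0f} C above room temperature")
```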

    Not sure why, but I kind of like the idea that the law of exponential decay governs the absorption of light as through my cup, and likewise the law of exponential decay governs its loss of temperature to the environment.

  82. David B. Benson
  83. David B. Benson wrote:

    Are there also Google caches of…

    Yes, of course:

    Two Boxes
    Tamino, October 3, 2007
    Volcanic Lull
    Tamino, October 19, 2008

    Actually I had already included a link to “Volcanic Lull” above but forgot to give the link to “Two Boxes.” However, each time I linked to an earlier post somewhere in this thread…. If you or anyone else needs to find stuff let me know. (It is mostly using “site:…” though, with or without search terms, clicking the cache link, then trimming the address to make it look nicer.)

    timothy chase at gmail dottish com
    (no spaces)

    I’ll tell you how so you won’t be depending upon me. (I’ve always been a teach-a-guy-to-fish kinda guy.)

    Meanwhile I hope you don’t mind, and I do apologize, but I still don’t know what to make of your equation. But I don’t think it is fair to either one of us to try and force agreement. So sometimes you’ve got to agree to disagree. Still think you are an assET, though.

  84. Anyway, what I grabbed so far: 22 from 2007, 36 from 2008, 27 from 2009, 25 from 2010

    Did it back on Jul 12th. That’s my birthday. Couldn’t stand the idea of anything being lost, so I decided to give myself a birthday present.

  85. 21 from 2006. I am sure we can get more for the other years.

  86. David B. Benson,

    Alright. To my mind it looks credible, for whatever that is worth. I will still want to explore it further, but on my own time, and I may have questions at one point or another.

  87. DBB: BPL does not compute R^2, AFAIK, but rather just the correlation coefficient in
    http://bartonpaullevenson.com/Correlation.html
    There is a difference.

    BPL: I do calculate r^2. r = 0.87, r^2 = 0.76. Look again. After Cochrane-Orcutt iteration, r^2 = 0.60. If you’re confused by my using small r instead of capital R, it’s because there’s only one independent variable, so the capital letter would be inappropriate. Or are you referring to adjusted or “shrunk” R?

  88. For the following years I have found the following number of *posts*…

    2006: 22, 2007: 62, 2008: 69, 2009: 44, 2010: 36

    There are still at least a few missing and I believe that some of those can still be retrieved, but this is how it stands currently. I have downloaded every one of those that I have included in those counts.

    I have not even looked at the comments. I believe nearly all of them are there but the comments (including my own) are pretty far down on my list of priorities. I hope no one is offended.
    *
    Now if people wish to access these posts online, there are at least three places they can go.

    The most comprehensive is Yahoo’s “Site Explorer.” Don’t worry about writing down the address — Yahoo will automatically bump you over to it when you perform the search I am about to give you.

    Then there are Google and Bing. Google is what I was focused on at first, but Bing has some that Google lacks, and Google contains at least a few that Yahoo lacks. Then of course you could try Archive dot org, but it’s so spotty I am not sure you will find anything that the engines I have listed don’t already have.

    The searches? They are the same from one engine to the next. They are essentially of the form:

    site:tamino.wordpress.com/2009

    Change the year, drop it to search for material from any year, add a space at the end and include a search term or two if you want, etc. Works on Yahoo, Google and Bing.

    You will get back search engine results. Although there are exceptions to the rule, typically each entry will have a cached result. However, on occasion the link to the cache will bring up an error page, at least on Yahoo. Try refreshing a couple of times and you may get some back. Coming back an hour or so later might help, too.

    Tamino, if you are missing any of the material and want to put it back up, let me know and I can zip each year, reformat, etc. Whatever it is you need.

  89. Tamino, one other thought occurs to me. What might be easiest and would involve the least amount of editing the source and no reentry of material into a separate blog would be to create an archive of older posts. Simply set it up as its own independent static website.

    Archives of that sort aren’t that unusual. An individual decides to switch blog providers then leaves up the old blog read-only for those who wish to read older posts. And it would put everything (but for the current active blog) all in one place — so that wherever your current blog is you can simply include at the top a link to the archive.

    I know it isn’t quite everything, but I am betting it is fairly close…

    2006: 22, 2007: 62, 2008: 69, 2009: 44, 2010: 36

    233 posts retrieved so far.

  90. David B. Benson

    Barton Paul Levenson // July 25, 2010 at 11:42 am — Color me confused. I thought you determined the
    http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient
    Pearson’s r, and not the
    http://en.wikipedia.org/wiki/Coefficient_of_determination
    R^2 (and always capitalized).

    Timothy Chase // July 25, 2010 at 12:58 am — I have all three cached links now. Thank you.

    And please feel free to ask more questions or bring up further points.

  91. DBB,

    It is the Pearson product-moment correlation coefficient, r, when you measure it between two variables. It is the coefficient of multiple correlation, R, when you’re describing the effect of more than one independent variable on the dependent variable of interest, as in a multiple regression. Correlation squared is r^2, multiple correlation squared is R^2. And did you miss the fact that 0.76 is (roughly) the square of 0.87? What did you think I meant by “explained variance?”
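    For a single predictor, “correlation squared equals explained variance” is easy to verify numerically; the helper function below implements the standard Pearson formula, and the data are made up purely for illustration (not BPL’s actual series):

```python
import math

# Pearson product-moment correlation coefficient r for one predictor.
# With a single independent variable, the R^2 of a least-squares fit
# equals r squared.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative data with a strong linear relationship.
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]
r = pearson_r(x, y)
print(f"r = {r:.3f}, r^2 = {r * r:.3f}")
```

    This is consistent with the figures quoted above: 0.87 squared is roughly 0.76.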

  92. David B. Benson wrote (July 25, 2010 at 8:55 pm):

    And please feel free to ask more questions or bring up further points.

    Questions rather than points might be a good way for me to start, at least when it comes to statistics. Oddly enough, I didn’t have much trouble with tensor calculus, covariant differentiation, wave functions or probability density operators, but statistics… I never got that good a handle on it. Although for handling fractal dusts I created a probability density calculus where different actualizations of the same probability function were mutually independent.

    Likewise I saw that the probability density operator could be viewed as the truth values of a two-dimensional array of statements where the values themselves are complex numbers. Then the application of the Fourier Transform, whereby one shifts from momentum space to position space and back, may be viewed as the derivation of the truth values of one array from another by means of the application of logical operations (AND and OR) to those arrays. But that was a couple of decades ago.

    Incidentally, someone called to my attention the fact that grabbing things from the caches won’t be necessary in the future — and pointed me to the other thread. Good thing, too, as “The Wonderful World of Wavelets” was one of the few items I couldn’t bring back, and it sounded fun and I saw it getting rave reviews.

    In any case, my apologies for dragging things on as long as I did – and for not starting with questions rather than assertions.

  93. David B. Benson

    Barton Paul Levenson // July 26, 2010 at 2:04 am — How absolutely annoying. It seems that R^2 can refer to the square of the coefficient of multiple correlation, R:
    http://mtsu32.mtsu.edu:11308/regression/level3/multicorrel/multicorrcoef.htm
    http://en.wikipedia.org/wiki/Multiple_correlation
    but also refers to the
    http://en.wikipedia.org/wiki/Coefficient_of_determination
    R^2 . I’ve never before now seen anything but the latter usage.

    Anyway, I guess that in
    http://bartonpaullevenson.com/Correlation.html
    you computed Pearson’s r and then squared it to claim “explained variance”.

    Timothy Chase // July 26, 2010 at 3:40 am — No apology necessary. I, too, am finding I need to understand more statistics, as the exchange with BPL amply illustrates. Like you, I never needed much before beginning my study of climate.

  94. David B. Benson

    Timothy Chase // July 26, 2010 at 3:40 am — Oh yes, what’s this “other link” you mentioned?

  95. David B. Benson wrote (July 27, 2010 at 1:12 am):

    Oh yes, what’s this “other link” you mentioned?

    What I had written was (July 25, 2010 at 12:58 am):

    Actually I had already included a link to “Volcanic Lull” above but forgot to give the link to “Two Boxes.” However, each time I linked to an earlier post somewhere in this thread….

    … and what the ellipses were hinting at was that I had included a number of links to cached posts throughout the thread. Later, looking back, I noticed that I had only included 6 links to cached posts. But using the caches of three different search engines I found 233 posts. At most I believe that is fifteen shy of every one that Tamino ever posted, although Tamino undoubtedly knows better. I used Yahoo, Google and Bing with the search modifier “site:”, which works on all three.
    *
    Anyway, for whatever it is worth, something that I personally, strongly believe — which I wrote while on the evolutionary biology front in the war on science:

    … Properly, scientists will respect these beliefs of their religious colleagues, realizing they may very well provide those colleagues with the moral guidance which makes them better scientists. The importance of moral guidance, and, more specifically, the moral courage to deal with the ever-present possibility of failure in both the existential and cognitive realms, is not to be underestimated.

    In the existential realm, religion properly provides the individual with the moral courage to act despite the possibility of failure, where failure can sometimes mean the possibility of actual death, and the fear of failure itself can often be experienced as such. Likewise, the fear of being mistaken — where being mistaken may threaten our beliefs about who we are — is at times experienced as a threat much like death itself. Here, too, there is need for moral courage, although of a somewhat different kind. Properly, religion encourages in its own way the view that while recognizing one’s mistakes may be experienced prospectively as a form of death, the act itself brings a form of rebirth and self-transcendence, giving one the courage to revise one’s beliefs when confronted with new evidence.

    Religion and science
    http://www.bcseweb.org.uk/index.php/Main/ReligionAndScience

  96. PS

    Just so that you don’t misunderstand, I quote that passage not as advice that I am suggesting you follow but as a tribute to what I already see in you.

  97. >I found 233 posts….fifteen shy of every one that Tamino ever posted

    Has this been archived somewhere?

  98. Thanks bluegrue.

    And Timothy’s approach does find posts later than those in the Archive.

    I really hope to see the older posts come back so the many links on other blogs work again.

  99. andrew holder

    From what I understand, Goddard has argued that the arctic has warming and cooling cycles lasting roughly 30 years. The last cool cycle ended at approximately the same time that satellite studies started (1979), so the last 30 well-documented years clearly show the warming cycle. If the PDO, La Nina, solar activity and other natural phenomena are being correctly interpreted by Goddard, Bastardi and others, then we would expect a cooling period until approximately 2040. I would like to hear debates based on the last 100 years of data.

  100. From what I understand Goddard has argued that the arctic has warm and cooling cycles lasting roughly 30 years. Last cool cycle ended approx at same time that satellite studies started (1979).

    Gosh, what a “coincidence” for which there is no empirical evidence.

    Goddard also believes it snows dry ice in the Antarctic and a bunch of other stuff totally at odds with reality.

    He’s innumerate and scientifically illiterate. Why anyone would imagine he knows better about arctic sea ice trends than the highly-educated experts who study the arctic for a living is beyond me.

    What long-term data exists does not support the claims Goddard makes.

    There’s no evidence that conditions in the Arctic 60 years ago were anything like today, despite Goddard’s supposed 60-year warming/cooling cycles.

  101. Andrew Holder:

    There’s no evidence of anything cyclical in this reconstruction of sea ice volume, nor is there any in measured sea ice extent. Where’s the sinusoidal shape that one would expect if Goddard’s right? Rather than slowing, we’re seeing the decreasing trend in the summer sea ice extent minimum *accelerate*, if anything.

    Just opposite of what Goddard insists is happening in his cooling world.

    Oh, yes, and further: in support of his claim, Goddard predicted that this year’s summer arctic sea ice extent minimum would show a recovery to 2006 levels. Further “proof” in his eyes that the arctic is cooling.

    Of course, what happened is that we saw the greatest summer decrease in extent in the satellite era, leading to an absolute minimum that nearly tied for second place. Totally at odds with Goddard’s prediction.

    Goddard’s not worth the spinning electrons used to propagate his thoughts across the intertubes.

  103. Dhogaza – “Goddard also believes it snows dry ice in the Antarctic and a bunch of other stuff totally at odds with reality.”
    Well, Dho, Exxon is working on a process to capture CO2 by freezing it. They call it carbon dioxide snow in their commercial!
    :)

    • Daniel Bailey

      Re: Exxon & CO2 snow

      My understanding was that the process had these limitations:

      1. It only works in Antarctica
      2. and only when Steve Goddard mans the crank (or was that when he cranks the man…)

      SG needs a nickname, like the Good Humor Man (because reading his posts always makes me laugh).

      The Yooper