Data and Code for Foster & Rahmstorf 2011

This post is only to provide access to the data and the code (all computations were done using R) used in Foster & Rahmstorf 2011 (blogged about here).


One change has been made to the programs. Since they make use of some custom subroutines which are usually accessed through the start-up file when launching R, these subroutines have been placed in a file called “subrouts.r” and the main programs have this line inserted:

source("subrouts.r")

in order to load those functions. It’s possible (though I doubt it) that I’ve omitted one of the functions, in which case the R programs will fail; if that happens, please let me know.
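
If you want a quick sanity check that the sourcing worked, something like this will do ("findstart" is one of the helper functions, as comes up in the update and comments below; check "subrouts.r" for the full list):

needed <- c("findstart")   # add the other helpers defined in subrouts.r
missing <- needed[!sapply(needed, exists)]
if (length(missing) > 0) stop("not loaded: ", paste(missing, collapse = ", "))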

There are three programs. “allfit.r” performs computations for the globe, “nhfit.r” for the northern hemisphere, and “shfit.r” for the southern hemisphere. All the input data are in the single file “allfit.csv”.

I’ve also included the output from running “allfit.r” on the data: two files, “Adjusted.dat”, which contains the adjusted temperature series (as well as the raw data, the model fit, and the residuals), and “rates.txt”, which gives the coefficients for each factor in the model. The coefficient labeled “tau” is the time coefficient. If you run the program “allfit.r” it will overwrite these files, but the results should be the same (except that in the included versions I moved the headers around so they would be aligned with the data columns).
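
For reference, reading the output back into R is straightforward (a sketch; whether header=TRUE works as-is depends on how the headers line up in the version you have, and the column names are whatever the files say):

adjusted <- read.table("Adjusted.dat", header = TRUE)
rates <- read.table("rates.txt", header = TRUE)
head(adjusted)
head(rates)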

Unfortunately WordPress won’t allow me to upload a zip file, so I’ve pulled a little trick: I renamed the file “allfit.zip” to “allfit.xls” in order to fool the WordPress software into believing that it’s an Excel file. You will need to rename the file back to “allfit.zip” in order to unzip it and access the programs and data.
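
If you’d rather do the renaming and unpacking from within R, this works (adjust the names if you grab the updated allfit2 file from the update below):

file.rename("allfit.xls", "allfit.zip")   # undo the disguise
unzip("allfit.zip")                       # extracts the programs and data into the working directory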

Here it is:

allfit

If you find a bug, please let me know. But I don’t want to answer everybody’s questions or explain how the regression or the programs work. If you know enough to make it work, great. If you need a tutorial, sorry.

I have also discovered that there is a minor error in one of the graphs in the paper. In figure 3, the y-axis labels for the coefficient due to TSI (total solar irradiance) are incorrect. They run from -0.05 to +0.25, but they should run from -0.05 to +0.20. Here is the figure with the mistake:

Here is a corrected figure:


Note that it’s the same except for the axis labels at the bottom left on the y-axis.

UPDATE:

I was notified that one of the subroutines was missing from the “subrouts.r” file, so I’ve posted a new version which includes it. The corrected file is “allfit2.xls”, which you must rename to “allfit2.zip”.

48 responses to “Data and Code for Foster & Rahmstorf 2011”

  1. I noticed you used zero volcanic forcing for the data from 2000 to present. Hansen et al. (2011) and the data on the modelforce website show a volcanic forcing (albeit low) over the 2000s… I recognize that is forcing data whereas you’re using Ammann’s data, which ends in 2000, but is there possibly some missing contribution because of this choice?

    [Response: Actually we used the stratospheric AOD data from Sato et al., but when we started it only extended as far as 2000. They have since updated their data, and it now includes very small nonzero quantities during the 2000s. The new values are not large enough to affect the results substantively. You can of course substitute the new data for the old and run the program to find out just how much effect it has.]
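
    For example, something along these lines would do it (a sketch only; I’m assuming the stratospheric AOD column in allfit.csv is named "volc" as in the regression output quoted in a later comment, and that there is a decimal-year column named "year"; check the actual headers, and the format of the updated Sato file):

    dat <- read.csv("allfit.csv")

    # hypothetical two-column file (decimal year, AOD) built from the updated Sato et al. data
    newsato <- read.table("sato_update.txt", col.names = c("year", "aod"))

    # interpolate the updated AOD onto the times in allfit.csv and swap it in
    dat$volc <- approx(newsato$year, newsato$aod, xout = dat$year, rule = 2)$y

    write.csv(dat, "allfit.csv", row.names = FALSE)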

  2. I usually use .piz to get .zip files through filters; it provides a hint at least.

    Notes on the script:

    – What is the point of plot(0,0,main=paste(lag1,lag2,lag3)) in the triple lag loops, other than to torment the user?

    – The code initially choked when it couldn’t find “findstart”. Is this a missing function, or is it from some R package? My R is rusty, but I tried the code below, which seemed to work.

    # findstart returns the index of the first place in an ordered vector t where t[i] >= x
    # t is an ordered vector, x is the value you want to find
    # This function doesn’t do any error checking;
    # it just looks for the first place where t[i] >= x,
    # and if that doesn’t happen it returns 0

    findstart = function(t, x) {
      i = 1
      result = 0
      tlen = length(t)
      while (i <= tlen && result == 0) {
        if (x <= t[i]) {
          result = i
        }
        i = i + 1
      }
      result
    }
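
    For what it’s worth, an equivalent one-liner using which() (same behavior, including returning 0 when nothing qualifies):

    findstart = function(t, x) {
      i = which(t >= x)                # indices where t[i] >= x
      if (length(i) > 0) i[1] else 0   # first such index, or 0 if there is none
    }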

    [Response: Answering questions about the code is one of the things I want to avoid. It’ll never stop.

    The “findstart” routine is necessary, and it is missing. So I’ll post an updated zip file which includes it. Thanks for the notice.]

  3. Off topic, but I thought of tamino just now because I was looking at the table of contents for today’s Science and there is a statistical methods paper in there on non-linear correlations between variables in large datasets. Possibly very cool?

    http://www.sciencemag.org/content/334/6062/1518.abstract

  4. I’m curious. Do you think you’d be able to easily add either an estimate of aerosol concentrations or changes in the Earth’s albedo (as a proxy for the net effect of any change in the Earth’s aerosols, but also including certain climate feedbacks, like melting ice) as a fourth factor in your multiple regression?
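
    As a rough illustration of the mechanics only (not the authors’ code, and ignoring the lag optimization the actual programs do; "aerosol" is a hypothetical extra column you would have to construct, and the other column names are guessed from the headers in rates.txt):

    dat <- read.csv("allfit.csv")
    fit4 <- lm(giss ~ mei + volc + solar + year + aerosol, data = dat)
    summary(fit4)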

  5. Does anyone know whether temperature anomaly maps are available on a monthly basis? I suspect that some of the residual in the model might be due to a physical phenomenon. I would like to try mapping the temperature anomaly using the residual signal of the model in order to extract it.

  6. A trick to hide the extension? Ha! Gotcha!

  7. Pinko Punko – the authors of the maximal information coefficient paper also have a website with downloads — a java package and an R wrapper/package.

    Thanks for bringing it to our attention.

  8. The zip file contains AllFit.csv while the source refers to allfit.csv. Seems obvious enough to change one to match the other.

    [Response: On my PC the file name is not case-sensitive. Perhaps on other systems that’s not so? In that case, you’ll need to change either the programs or the data file so the names match.]

  9. Kevin- that is great. I’d love to see how it can be applied to lots of stuff.

  10. Thanks for the source files. At first I ran into problems when running them within RStudio (it doesn’t like the stream of plots when testing lags). Further: no problems using the source on a PC with Windows 7.

  11. I’ve read some people complaining about the use of TSI for sun activity. In fact, the paper states that using sunspot numbers instead of TSI didn’t affect the results in a significant way.
    I tried another metric: open solar flux (OSF) as described by Lockwood and others (see http://www.eiscat.rl.ac.uk/Members/mike/Open%20solar%20flux%20data/), based on satellite measurements from 1975 onwards. I converted the yearly data into monthly data by smoothing. Caveat: all 2010 monthly values were set to the mean yearly 2010 value.
    This results in a figure almost identical to figure 5 in the Foster/Rahmstorf paper.
    Conclusion: whatever metric is used for solar activity, the result is nearly the same.
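
    For reference, a yearly-to-monthly conversion can be done with a simple interpolation (a sketch only; plain linear interpolation, which may differ from the smoothing described above, and "osf.txt" is a hypothetical two-column file of year and OSF value):

    osf <- read.table("osf.txt", col.names = c("year", "osf"))

    # put each yearly value at mid-year, then interpolate onto a monthly grid
    months <- seq(min(osf$year), max(osf$year) + 11/12, by = 1/12)
    osf.monthly <- approx(osf$year + 0.5, osf$osf, xout = months, rule = 2)$y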

  12. “I’ve read some people complaining about the use of TSI for sun activity.” Are you talking about that clown at WUWT?

    Tamino, are you planning to ignore that Lasner rubbish? Your choice, but it’s the same old pretentious numerical crap tactic WUWT comes up with to distract people. Like an aircraft throwing out chaff to try and distract a guided missile.

    Apparently if you don’t correct for *everything* the analysis is pointless. Meanwhile skeptics are permitted to correct for *nothing* and conclude that CO2 isn’t causing warming. See how these double standards work?

    It’s *them* who insist the Sun has a big effect on climate but suddenly we find they aren’t interested in correcting for the impact of the solar minimum on global temperature in the past 10 years.

    • It’s more like Baghdad Bob trying to build a replica of an aircraft carrier out of cardboard and duct tape to convince everyone that it didn’t actually sink.

  13. Aloa, see my post under the “real warming signal”. It occurred to me that all these guys are doing is calculating Fourier coefficients and multiplying them by “natural cycles” to approximate a linear trend! It’s utter horsecrap!

  14. Hi Tamino (OT), are you on the Liu2011 paper? It’s been on WUWT/Nova, and recently appeared in German Denialistan. They seem to be doing a Scafetta on the data, so it’s clearly your turf :)

    p.

  15. Tamino, there is this paper:

    The Persistently Variable “Background” Stratospheric Aerosol Layer and Global Climate Change
    (Abstract: http://www.sciencemag.org/content/333/6044/866 ;
    PDF: http://junksciencecom.files.wordpress.com/2011/07/solomon-07-22-11.pdf )

    I can extract from it:

    “The satellite observations displayed in the bottom panel of Fig. 2 show increases in stratospheric aerosols from 2000–2010 of about 7% per year, which implies a change in global radiative forcing (Fig. 3) of about –0.1 W/m2”

    “Figure 4 shows that the observed increase in stratospheric aerosol since the late 1990s caused a global cooling of about –0.07°C compared with a case in which near-zero radiative forcing is assumed after year 2000, as in the forcing data sets often used in global climate models”

    “For the decade from 2000 to 2010, the observed stratospheric aerosol radiative forcing from satellites yields about 10% less sea level rise from thermal expansion than obtained assuming a background near zero as in (26), about 0.16 cm versus 0.186 cm”

    A forcing of -0.1 W/m^2 that makes the Earth “cool” (actually, warm less) by 0.07 ºC and slows thermosteric sea level rise by 10% is not a small effect (the warming signal is between 0.15 ºC and 0.2 ºC per decade, so volcanic aerosols masked about 30% of the 2000–2010 global warming).

    The result is that the global warming signal should be even bigger than you found in the paper (something like a 30% bigger warming, once you remove the 2000–2010 volcanic aerosol cooling).

    What do you think?

  16. Tamino, maybe it’s worth having a look at the new Antarctic warm temp record? It’s only a 55-year record, but the 1.3C difference seems like a lot.

  17. Off topic, forgive me… I have a new summary of the basic argument.

    http://bartonpaullevenson.com/EasierGreenhouse.html

    • Barton, I know that you have a much better grasp on the science than I do, but for what it is worth, here are my two cents on the Easier Greenhouse page.

      It states:

      This works because “greenhouse gases” in the Earth’s atmosphere–mainly water vapor and carbon dioxide–mostly pass sunlight, but absorb infrared light from the ground and elsewhere in the atmosphere. Like asphalt in sunlight, they heat up when they absorb light. They then radiate IR of their own. Some of this goes back to the ground. This “atmospheric back-radiation” is what makes Earth’s surface tolerably warm.

      The more greenhouse gases in the atmosphere, the warmer the surface gets.

      I think this may be a little too simplified. The atmosphere warms, reducing the rate at which the surface is able to lose heat. The surface loses heat due to upwelling radiation, moist air convection and thermals.

      So what you could say is something along the lines of:

      Sunlight warms the surface, then the surface warms the atmosphere. The only way heat can be lost to space is as thermal radiation. Greenhouse gases let sunlight through, but both absorb and emit thermal radiation. And like asphalt or the heating element on your stove, they emit more radiation the warmer they get.

      Adding greenhouse gases is like adding layers of insulation to your house. The top layer is always going to be closest to the temperature of the outdoors, but the more layers you add to your house the warmer it is going to be for you. Greenhouse gases help keep the Earth warm enough for life, and increasing concentrations of greenhouse gases will warm the Earth even more.

      Increasing the concentration of greenhouse gases temporarily reduces the rate at which the atmosphere is able to radiate heat to space. The atmosphere warms until it radiates heat to space at the same rate as before. But this in turn reduces the rate at which the surface is able to lose heat to the atmosphere.

      Now sunlight is reaching the surface at the same rate as it did before the added greenhouse gases. Therefore the surface warms up until it is able to lose heat to the atmosphere at the same rate.

      Backradiation is actually a bit player in this story, not the lead.

      The backradiation that reaches the surface comes from only the first few meters of atmosphere, which is itself warmed by the infrared radiation of the surface and is at nearly the same temperature as the surface. I would view the backradiation itself more as an effect of the warming surface, one step removed, rather than the cause.

      If you want to make it easier to understand than this, I would suggest throwing in some insulation, then maybe a little more. Doing so will even begin to set them up for the lapse rate, which would then play a key role in a slightly more advanced explanation. But giving backradiation the role you currently give it seems a good setup for the surface budget fallacy, which then makes your audience more vulnerable to saturation fallacies.

      • Steven Mosher

        Timothy

        Your suggestions are exactly on point. I find that many people who are confused when we argue that back radiation warms the surface get unconfused when you talk about slowing the rate of cooling.

      • Re Steven Mosher

        Backradiation vs. slowing the rate of cooling

        They get confused because, when they hear “back radiation,” they assume that the greenhouse effect is some mythical beast that violates conservation of energy, since they can’t see where all of the energy reaching the surface is coming from; or, alternatively, that it violates the second law of thermodynamics, because a cooler atmosphere “warming the surface” implies that heat is going from something cool to something warm.

      • Steven, are you planning a sequel to your book on Climategate? I mean, now that more letters have been released.

      • steven mosher

        No, Timothy. I came out on Collide-a-Scape early on and announced that I had no intention of reading all of the second batch of mails or doing another book on them.

        From the few mails I have read, I just see some blank spots being filled in. Nothing that changes my position:
        1. the mails can’t change science. only science can change science
        2. a few people had major lapses in their judgement
        3. the issues can be addressed by accepting criticism, publishing a couple of errata, and improving processes.

        So, if you look at the mails the way I do, you don’t see any science overturned. But I don’t think it is wise to defend certain behavior. I have no trouble saying: Jones’ temperature series is correct and his behavior around FOIA is indefensible. The latter doesn’t change science, so what is the big deal in speaking the honest truth about it? Will skeptics make noise if you admit the simple truth? Sure, they will make noise regardless. I don’t calculate my positions based on what they will say or think. That’s the real insanity.

      • I have no trouble saying: Jones’ temperature series is correct and his behavior around FOIA is indefensible. The latter doesn’t change science, so what is the big deal in speaking the honest truth about it?

        Because, “Piltdown Mann”, the whole FOIA exercise was intended to harass Jones and CRU, an exercise in character assassination and nothing more.

        It had nothing to do with the science, the CRU temp series, the data.

        As you so freely admit.

    • Gavin's Pussycat

      It’s actually not bad at all… yes it simplifies matters a bit, but not fatally so IMHO. Yes, it glosses over the fact that the greenhouse effect is in reality three-dimensional, but so do nearly all simple explanations.

      …and John Tyndall’s are huge boots to fill: “As a dam built across a river causes a local deepening of the stream, so our atmosphere, thrown as a barrier across the terrestrial rays, produces a local heightening of the temperature at the Earth’s surface”…

      • Gavin’s Pussycat writes:

        It’s actually not bad at all… yes it simplifies matters a bit, but not fatally so IMHO. Yes, it glosses over the fact that the greenhouse effect is in reality three-dimensional, but so do nearly all simple explanations.

        Well, it isn’t that the explanation ignores the three-dimensional nature of the greenhouse effect that is such a problem. It’s that what “warms” the surface is a reduction in the net rate at which energy leaves the surface. And this isn’t just a matter of back radiation. When the atmosphere warms relative to the surface, this reduces the net rate at which energy is transferred to the atmosphere not just by means of radiation (through increased back radiation) but also by moist air convection and thermals, and these other heat transfer mechanisms will play a role pretty much right up to the effective radiating layer.

        Gavin’s Pussycat writes:

        …and John Tyndall’s are huge boots to fill: “As a dam built across a river causes a local deepening of the stream, so our atmosphere, thrown as a barrier across the terrestrial rays, produces a local heightening of the temperature at the Earth’s surface”…

        Tyndall doesn’t say, however, that the dam is just a few meters above the surface — which is after all where the back radiation reaching the surface is coming from. And remember, at the surface the absorption spectra in which carbon dioxide would operate are already saturated, so focusing on the back radiation plays right into the saturation fallacy. If one views the dam as being at the “top of the atmosphere” (TOA), then the dam is a good analogy when viewing things in terms of radiation.

        But lower than this, where moist air convection and thermals are involved, it might be better to regard the dam as a dam that heightens the barrier to energy through increased atmospheric temperature. The barrier to energy decreases the rate at which heat can be transferred from a surface that is still warmer than but not as warm as the atmosphere itself, and it affects not simply the transfer of energy by means of radiation but the other mechanisms. And one doesn’t have to even mention what those other mechanisms are.

        However, when one speaks of back radiation as warming the surface the linkage between increased greenhouse gas concentrations and increased back radiation breaks down since increasing greenhouse gases at the surface won’t actually result in an increase in back radiation as the relevant spectra is already saturated. Back radiation will increase, but it will increase primarily due to higher temperatures. And speaking of it as back radiation shifts the focus of the source of the energy for such back radiation to the surface, the warming of which is precisely that which one wishes to explain.

        Then there are other issues related to confusion with regard to conservation of energy and the second law mentioned above….

      • Correction

        I stated:

        But lower than this, where moist air convection and thermals are involved, it might be better to regard the dam as a dam that heightens the barrier to energy through increased atmospheric temperature. The barrier to energy decreases the rate at which heat can be transferred from a surface that is still warmer than but not as warm as the atmosphere itself, and it affects not simply the transfer of energy by means of radiation but the other mechanisms. And one doesn’t have to even mention what those other mechanisms are.

        … when I should have said:

        But lower than this, where moist air convection and thermals are involved, it might be better to regard the dam as a dam that heightens the barrier to energy through increased atmospheric temperature. The barrier to energy decreases the rate at which heat can be transferred from a surface that is still warmer than the atmosphere but where the difference in temperature has been reduced, and it affects not simply the transfer of energy by means of radiation but the other mechanisms. And one doesn’t have to even mention what those other mechanisms are.

      • Gavin's Pussycat

        …since increasing greenhouse gases at the surface won’t actually result in an increase in back radiation as the relevant spectra is already saturated. Back radiation will increase, but it will increase primarily due to higher temperatures.

        Timothy that’s not correct. Yes, in the core of the 15 micron band this is true, but not over the whole spectrum: concentration increase leads to the flanks of the band shifting outward, so the total amount of back radiation increases.

        Looking at the radiation balance at ground level is not fruitful, I agree, but still the books are balanced also there :-)

      • Gavin’s Pussycat wrote:

        Timothy that’s not correct. Yes, in the core of the 15 micron band this is true, but not over the whole spectrum: concentration increase leads to the flanks of the band shifting outward, so the total amount of back radiation increases.

        I submit that if you are speaking of carbon dioxide at the surface, then yes, the relevant parts of the spectra are saturated — by water vapor. Alternatively, if you are speaking of water vapor, then increased concentrations of water vapor already presuppose a warmer surface.

        It might help a little if we take a look at what is responsible for different features of the spectra.

        First, there are basically three fundamental modes that should be considered: stretching, bending and rotational. Stretching modes are the most energetic, followed by bending, then rotational. Pure rotational modes are much less energetic and tend to play a role only in the microwave band.

        Carbon dioxide is a linear molecule with oxygen on both ends. The carbon atom carries a positive charge, the oxygen atoms negative ones. Given its symmetric structure, carbon dioxide has no permanent electric dipole. Consequently it has no pure rotational mode. What matters are the stretching modes and the bending mode. Symmetric stretching is irrelevant as it won’t interact with the electromagnetic field. This leaves asymmetric stretching and the bending mode. The asymmetric stretch is too energetic to play a role in the Earth’s atmosphere, so this leaves the bending mode at roughly 15 microns.

        Given the bending mode, carbon dioxide will have a temporary electric dipole. As such, in addition to the pure bending mode the molecule will have rovibrational modes where integer multiples of quantized rotation are responsible for the additional lines. In contrast, the water molecule has a permanent electric dipole which is the reason why it is active in the microwave band, but like carbon dioxide, it will also have rovibrational lines.

        Now the broadening which you speak of is then with respect to the individual lines. At the surface water vapor absorption bands and carbon dioxide bands overlap. This includes the band around 15 microns. Line broadening affects both, and given the broadening that has already taken place, even in the wings of the lines, there is sufficient overlap that after the first few tens of meters the individual lines are no longer distinct. Given this, broadening of the lines of carbon dioxide at the surface (or of water vapor, for that matter) will be essentially irrelevant — because water vapor lines are already broad enough that they overlap.

        Pressure drops roughly as an exponential function of altitude. Consequently the individual lines become much more distinct at higher altitudes. Therefore even where water vapor still plays a significant role and carbon dioxide bands and water vapor bands overlap, there is much less overlap between the individual lines themselves. Carbon dioxide begins to play a significant role. And given the fairly shallow distribution of water vapor in the atmosphere (it has a scaling height of only about 4 km, if I remember correctly), carbon dioxide plays the dominant role near the effective radiating altitude and above.

        Given this, a fairly good approximation is to assume that increased concentrations of carbon dioxide raise the effective radiating altitude. And given a nearly constant lapse rate (that is, the rate at which temperature drops with increasing altitude) and the increased distance between the surface and the effective radiating layer, this implies a warmer surface, where the surface temperature is a linear function of the effective radiating altitude.
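
        In symbols (a back-of-the-envelope sketch, treating the lapse rate $\Gamma$ and the planetary albedo $\alpha$ as fixed):

        $$T_{\mathrm{surf}} \approx T_{\mathrm{eff}} + \Gamma\, z_{\mathrm{eff}}, \qquad \Delta T_{\mathrm{surf}} \approx \Gamma\, \Delta z_{\mathrm{eff}},$$

        where $T_{\mathrm{eff}}$ is pinned by the top-of-atmosphere energy balance $\sigma T_{\mathrm{eff}}^{4} = (1-\alpha)S_{0}/4$ and so is unchanged by adding CO2; only the effective radiating altitude $z_{\mathrm{eff}}$ moves.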

        Now admittedly this is a bit more technical than what Barton is aiming for. Nevertheless, while I don’t think he wishes to aim for this level of technical detail I don’t think he actually wants to contradict it, either. Back radiation plays a role in the energy balance that exists at the surface, but with increasing concentrations of carbon dioxide or water vapor, this is almost entirely due to increased temperature. The increased altitude of the effective radiating layer and lapse rate are the primary considerations.

        I consider Barton a friend. I value him. So I have been straight with him about what I perceive to be a weakness in his explanation — which leaves it open to easy criticism by “skeptics.” However, I also believe that friends should understand that there comes a time when you have to agree to disagree. It is his article. He is the one who must ultimately decide. And at this point I have had my say.

        Higher up in the atmosphere where water vapor still acts but less line broadening takes place, the absorption bands of carbon dioxide overlaps with water vapor, but there is much less overlap with respect to the individual lines themselves and carbon dioxide begins to play more of a role.

      • PS Last paragraph of my comment directly above was extraneous, from an earlier version. It should have been omitted.

      • The detail here is way beyond the original rubric here, of course, but I’m grateful for such a detailed discussion.

      • A quick correction… I had stated:

        … given the fairly shallow distribution of water vapor in the atmosphere (it has a scaling height of only about 4 km, if I remember correctly)…

        The scaling height of water vapor in the atmosphere is actually closer to 1.5 – 2 km. Please see:

        Deriving the precipitable water
        http://www.vla.nrao.edu/memos/sci/176/memo/node2.html

  18. Tamino, in your paper, did you consider not using the linear time trend, but instead including log(CO2) as another variable along with ENSO, volcanoes and TSI?

    [Response: Didn’t consider doing that, but it might be a very good idea indeed. Then it would be more like Lean & Rind (who used climate forcing rather than linear trend).]
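
    Mechanically it would just mean swapping the linear time term for log(CO2), something like this (a sketch; column names guessed from the rates.txt headers, and "co2" is a hypothetical column, e.g. monthly Mauna Loa values, that would have to be merged in first):

    dat <- read.csv("allfit.csv")
    fitco2 <- lm(giss ~ mei + volc + solar + log(co2), data = dat)
    summary(fitco2)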

  19. Tisdale takes on Tamino's Foster & Rahmstorf 2011

    Wow. Tisdale doesn’t realize that you need to detrend (e.g., include a linear term) in order to be able to analyze how the other components influence the year-to-year variability. The man really doesn’t have any understanding of statistics, does he? Being able to plug numbers into Excel is a starting point, not an ending point, for numerical understanding…

  20. “but instead including log(CO2)”

    The danger of using forcing (or log forcing) is that it doesn’t take into account oceanic inertia. You’d maybe want a 2-box system for the “forced temperature” term… of course, once you start going down that road, the question is where to draw the complexity line. It would, perhaps, be interesting to perform a simple climate-model uncertainty tuning (à la Forest et al., where the tuning knobs are climate sensitivity, ocean uptake rate, and aerosol forcing), then pull out solar and volcanic and see what the temperature would have looked like. ENSO would be the challenge to do properly, since any climate model worth its salt will have some ENSO-like properties, and those won’t line up with the real ENSO; presumably the parameter tuning would be better if one could somehow take ENSO out of the surface and ocean heat records being used for the tuning but also out of the individual model runs… hmm…

    • The other danger is that it only looks at forcing but not at feedback. For example, while CO2 forcing is logarithmic, the water vapor feedback is exponential. Put them together and the result is a lot closer to linear.

      • David B. Benson

        You raise a fine point. However, as chapter 6 of Ray Pierrehumbert’s “Principles of Planetary Climate” makes clear, boundary layer physics plus atmospheric radiative physics means that the water vapor feedback is not just the Clausius–Clapeyron equation acting alone. Without going into detail, let’s just say the water vapor feedback is nonlinear.

        Still in all, this is something which one might attempt to add to Tamino’s very fine 2-box model. I’ll think more upon it.

  21. For a good time, call Bob Tisdale:

    Tisdale takes on Tamino's Foster & Rahmstorf 2011

    Wherein our intrepid statistician “proves” that decreasing solar activity causes the earth to warm! No, really! You can’t make this stuff up.

  22. “The danger of using forcing (or log forcing) is that it doesn’t take into account oceanic inertia. ”

    If you take a linearly increasing forcing with a simple model for inertia (a constant relaxation time), you get a quadratically increasing response, as long as the time is short with respect to the relaxation time (I think that the curvature of GCMs’ results can be simply understood with that). Oddly enough, the curvature itself is not visible in FR2011. But this may be due to the choice of fitting the temperatures by ENSO+TSI+VOLC + LINEAR trend, minimizing the residuals. When you perform the regression and subtract the HF components, you get of course only LINEAR trend + residuals – what else? A possible curvature could be spuriously “hidden” by being incorporated in the ENSO+TSI+VOLC component, and then subtracted.

    This is kind of a pity, because the interesting point at stake is precisely the curvature, not the linear trend, and it cannot be tested statistically if you don’t parametrize it. And this is true both for dismissing a possible “slowing” of the curve, AND for determining a possible “acceleration” that could help distinguish between the models and make the determination of an acceleration parameter more accurate (as in cosmology!). So why not perform the analysis again using a QUADRATIC trend and try to quantify the acceleration term?
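
    For what it’s worth, testing for curvature within the same framework only takes one more term in the regression (a sketch; column names guessed as elsewhere in this thread, and again ignoring the lag search the actual programs do):

    dat <- read.csv("allfit.csv")
    t0 <- dat$year - mean(dat$year)    # centre time to reduce collinearity between t and t^2
    fitq <- lm(giss ~ mei + volc + solar + t0 + I(t0^2), data = dat)
    summary(fitq)                      # the I(t0^2) coefficient is the acceleration term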

    • David B. Benson

      Not so. Look in any book on senior-level linear systems theory. A standard student exercise is to determine the system response to a ramp input with a definite start time. It’s been about 50 years now, but I still recall the answer and how to determine it.

  23. Well David, I think I’m able to solve the equation too:

    write dX/dt + X/tau = A t
    the standard method is to write X(t) = Y(t) exp(-t/tau)
    then you get dY/dt = A t exp(t/tau)
    giving Y = A tau (t - tau) exp(t/tau) + C
    thus X(t) = C exp(-t/tau) + A tau (t - tau) = A tau (t - tau + tau exp(-t/tau)) if X(0) = 0.

    It’s straightforward to see that for t much smaller than tau
    X(t) ~ A t^2 / 2 (you can get the approximation very simply by neglecting the relaxation term X/tau in the original equation)

    and for t larger than tau, X(t) ~ A tau (t - tau) (that is, following the excitation linearly but with a small lag)

    The first regime corresponds to the initial quadratic behavior of the solution.
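
    A quick numerical check of that closed form (simple forward-Euler integration of dX/dt = A t - X/tau):

    A <- 1; tau <- 10; dt <- 0.01
    t <- seq(0, 50, by = dt)
    X <- numeric(length(t))            # X[1] = 0, i.e. X(0) = 0
    for (i in 2:length(t)) X[i] <- X[i-1] + dt * (A * t[i-1] - X[i-1] / tau)

    exact <- A * tau * (t - tau + tau * exp(-t / tau))
    max(abs(X - exact)) / max(exact)   # tiny: the closed form checks out
    plot(t, exact, type = "l")
    lines(t, A * t^2 / 2, lty = 2)     # agrees with the quadratic for t much less than tau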

    • David B. Benson

      I learned to do this exercise as an application of Laplace transforms, so your substitution (corrected for a sign error in the exponential) is new to me. Also, the constant of integration for continuity at X(0) = 0 is C = A tau^2, and then everything checks out.
      That the ‘fillet’ at the very beginning is approximately quadratic is something I hadn’t previously noticed. Thank you.

  24. Something is screwy here (and it may be me).
    Here’s the output I get from the MR for GISTEMP:

    Coefficients:
                      Estimate Std. Error t value Pr(>|t|)
    (Intercept)     -8.350e+01  2.110e+01  -3.957 9.08e-05 ***
    mei              7.910e-02  7.301e-03  10.834  < 2e-16 ***
    volc            -2.369e+00  2.384e-01  -9.939  < 2e-16 ***
    solar            6.132e-02  1.545e-02   3.970 8.63e-05 ***

    And here's what appears in the rates.txt file:

             (Intercept)      mei      volc    solar      tau ….
    giss      -83.498399 0.079103 -2.369368 0.061322 0.017092
    se.giss    42.294655 0.014634  0.477819 0.030961 0.001591

    You've doubled the standard errors to get 2-sigma values, fine.

    Now compare with the whisker plot in the last figure in your post above. It looks to me like you've doubled the uncertainties again, i.e. you are showing 4-sigma plots, not 2-sigma plots.
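
    For reference, 2-sigma bars follow directly from the regression summary (a sketch; "fit" stands for the lm object behind the GISTEMP output above):

    cf <- summary(fit)$coefficients
    cbind(lower = cf[, "Estimate"] - 2 * cf[, "Std. Error"],
          upper = cf[, "Estimate"] + 2 * cf[, "Std. Error"])
    # e.g. solar: 0.0613 +/- 2 * 0.0155, matching the 0.030961 in rates.txt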