Temperature Prediction: the next few months

Let’s have a little fun, and predict the global average temperature (land+ocean) for the next few months. We’ll base the prediction on GISS data, so this will be a prediction for the upcoming temperature according to GISS.

For those interested in the gory details (those who just want the numbers can skip this paragraph), I’ll model GISS global land+ocean temperature as a function of MEI (the multivariate el Nino index), volcanic forcing according to Ammann et al., solar forcing (represented by monthly sunspot numbers), a linear trend in time, and a “residual annual cycle” which I’ll model as a 2nd-order Fourier series. I’ll allow for a lag in the influence of MEI, volcanic forcing, and solar forcing. I’ll fit the model using observed data from 1975 to the present. I’ll further model the residuals as an AR(1) process.
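For readers who want to see the shape of that regression concretely, here is a minimal sketch in Python. Everything in it is a synthetic stand-in (the series, coefficients, and noise level are invented for illustration); only the structure of the design matrix, with the lags quoted in the comments below, reflects the description above.

```python
import numpy as np

# Synthetic stand-ins for the real predictor series (illustration only).
rng = np.random.default_rng(0)
n = 12 * 36                                   # monthly data, ~1975-2010
t = np.arange(n) / 12.0                       # time in years since start

mei = rng.normal(size=n)                      # stand-in ENSO index
volc = rng.exponential(0.1, size=n)           # stand-in volcanic forcing
ssn = 60 + 50 * np.sin(2 * np.pi * t / 11)    # stand-in sunspot numbers

def lagged(x, lag):
    """Shift a series `lag` months into the past, padding the front."""
    return np.r_[np.full(lag, x[0]), x[:-lag]]

phase = 2 * np.pi * (np.arange(n) % 12) / 12.0
X = np.column_stack([
    np.ones(n),                               # intercept
    t,                                        # linear trend (deg C / yr)
    lagged(mei, 4),                           # MEI at a 4-month lag
    lagged(volc, 9),                          # volcanic forcing, 9-month lag
    lagged(ssn, 3),                           # sunspot number, 3-month lag
    np.cos(phase), np.sin(phase),             # 1st-order Fourier terms
    np.cos(2 * phase), np.sin(2 * phase),     # 2nd-order Fourier terms
])

# Synthetic "temperature" with a known trend of 0.0172 deg C/yr, so the
# fit has something to recover.
temp = (-0.1 + 0.0172 * t + 0.08 * lagged(mei, 4) - 2.0 * lagged(volc, 9)
        + 0.0005 * lagged(ssn, 3) + 0.05 * np.sin(phase)
        + rng.normal(0, 0.05, size=n))

beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
trend = beta[1]                               # recovered warming rate
```

With real GISS, MEI, volcanic, and sunspot series substituted for the stand-ins, `beta[1]` would be the estimated warming rate in deg C per year.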

The model gives an excellent approximation to the observed temperature data:

The model as a whole explains 76% of the variance in global temperature since 1975. It’s worth noting that the linear time trend in this model (which is an approximation of the influence of man-made global warming) is 0.0172 deg.C/yr.

Using the model to forecast the next 3 months gives this:

For those who want numbers, the predictions are (all error ranges are 95% confidence intervals):

August: 0.53 +/- 0.21
September: 0.59 +/- 0.22
October: 0.57 +/- 0.23

Place your bets!


65 responses to “Temperature Prediction: the next few months”

  1. I assume 3 months is your model’s upper limit for forecasting because that’s the length of your shortest lag. What are your lags?

    [Response: Correct. 4 months for MEI, 9 months for volcanic, 3 months for sunspot number.]

    Also, I would like your model better if you replaced the linear time component with a component that was proportional to the logarithm of CO2, even if it’s just the log of the average CO2 over the past 12 months.

  2. arch stanton

    If the odds are even, I’d bet with you Tamino.

    – What factor causes the dip in September?
    – Since the error bars only increase ~1/100 degree/month over the period shown why not carry out predictions farther than 3 months and increase your odds of being *right* (by reducing “noise”)?

    [Response: The model has a 3-month lag for sunspot number, so I can’t use it to predict more than 3 months ahead.]

    – What’s the difference between a prediction and a forecast?

    [Response: For this post, you can use the terms interchangeably.]

  3. arch stanton

    Check that – “Dip in October”

  4. Here’s one bet (not endorsed by this commenter):
    “August Global temps will start to fall and should be down to .2 above normal from .37 in July. Can see it on Dr Maues sit (link follows)”

    • According to Roy Spencer’s site, the August anomaly is +0.33C. That’s only a drop of 0.04C from July. Joe’s really not having a good year for global temperature forecasts. It looks like in Bastardi’s “duel” with the UK Met Office (UK Met +0.24C in 2011 against 30-year mean according to JB, JB +0C), the UK Met prediction is looking pretty good so far.

      • Don’t you see? That means that temperature is in a free-fall. It’s the beginning of 20-30 years of cooling! My prediction was correct in the essential point that it’s not warming! /Bastardi

  5. I am concerned you may be inadvertently giving people the wrong idea of what the beast usually called a “climate model” is like.

    • David B. Benson

      mtobis | August 20, 2011 at 8:46 pm — This is a climate model, just not a big GCM.

      • David B. Benson, I think mtobis’ point is that there is a great deal more physics built into the big GCMs, whereas this model is going off of a little statistics with the Fourier analysis and best-fit lags. But to split the difference: essentially what Tamino is going off of is an empirical relationship with a fair amount of data behind it. What it reminds me of in part is some work that Gavin Schmidt was doing a while back.

        Gavin was analyzing gases given off by organic material in the ocean, partly as a function of churning, if I remember correctly. In place of the ocean he had a tank where everything was well-measured. Similarly there are representative species of plants that stand in for a wide variety of plants, where we know through observation and measurement how the representative plants respond to light, moisture and temperature.

        These are the new Earth Systems Models that incorporate elements of the carbon cycle and biology in addition to more traditional physics of radiation transfer theory and fluid dynamics. But these new elements are nevertheless empirical relationships observed and validated with respect to local phenomena, then applied over a three dimensional polar coordinate grid with perhaps 40 layers of atmosphere and 40 layers of ocean, with column cross sections that are 2° x 2° and time increments that are perhaps a quarter of an hour long. Give or take.

        What Tamino is doing is essentially a one-dimensional model with almost no physics. A toy model, but not really that different from a zero dimensional radiative transfer model or a two box model of the carbon cycle or heat transfer, except insofar as the “physics” is roughly comparable to the empirical relationships observed at a local level involving the carbon cycle or biology. He did say, though, that he wanted us to have a little fun.

      • Gavin's Pussycat

        Nominally yes, practically Michael has a point.

        The big GCMs are based on physical modelling, which this is not. OTOH the big GCMs also parametrize a lot of things (due to lacking spatial resolution they cannot, e.g., resolve individual clouds — not to speak of individual droplets), which is like what tamino is doing: it is all parametrization.

        So yes, the difference is one of degree, but it’s a huge amount of degree.

      • No, this is not a physical simulation of the atmosphere – what we’d usually call a “climate model”. These physical simulations come in all shapes and sizes:

        – energy balance models;
        – 1-dimensional modeling of a vertical slice of the atmosphere (including radiative/convective transfer of energy);
        – 3-dimensional global circulation models which model parcels of the atmosphere (finite element analysis), and attempt to solve numerically the state equations (Navier-Stokes equations);
        – Atmosphere-Ocean GCM’s which model ocean mass/energy transport as well;
        – fully coupled AO-GCM’s which include simulations of the carbon-cycle on land and in the oceans (and other chemical/biological cycles).

        This most advanced category (“Earth System Models”) is increasingly being used for the CMIP5 archive of simulations for IPCC AR5. All GCM’s are used to calculate ‘ensembles’ – many many runs to investigate the response of the model to slight changes in starting conditions & forcings.

        There is nothing wrong with a purely stochastic approach like Tamino is using, but it rests on the assumption that past trends and cycles (challenge-response correlations) continue unchanged into the future. For shorter time periods I would most certainly put my money on Tamino’s predictions.

      • It is a “statistical model”. It is strictly based on correlation, unlike physical models – GCMs – that are based on causation. The use of the word “model” is appropriate in either case.

      • Or a “regression model.” It’s not strictly statistical (or based on correlation) either, since we know from physics that volcanic, solar, and other forcings directly affect the climate. Perhaps not as informative as a GCM in the sense that it doesn’t reproduce the entire Earth’s climate from scratch. But it definitely shouldn’t be lined up equally with correlation studies with dubious causality (“born in Massachusetts makes you smarter!”). This model provides clarity by breaking physical attributes into simple, tractable components — informative, in a different sense.

  6. How well does the model perform at forecasting (at least) the last three observed data points?

  7. Horatio Algeranon

    That sure looks like net global cooling from here on out.

    Better hide the post…and fast.

    If Horatio’s calculations are correct, the “stupidity lag” — time between posting of a legitimate (though not very meaningful) statistical result and (really) far-reaching conclusions drawn from it — is only about 3 nanoseconds.

  8. Here is my prediction and method. I’m going with the CPC El Nino Forecast, which suggests a very similar year to 2008.

    The GISS monthly data for 2008 is
    J F M A M J J A S O N D
    18 26 66 43 41 34 52 34 53 56 58 47

    For 2011 so far (*and forecast)
    45 41 56 54 43 51 60 44* 63* 66* 68* 57*

    The forecast is simply 10 + what happened in ’08, which is the average difference between the two years up until this point. The linear relationship between the two years up to this point can explain 55% of the variance.

    Average for 2011 is therefore 54 (or 0.54 deg C above average) and will be the 9th warmest year on record.
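The arithmetic of this analogue-year method can be checked directly from the numbers in the comment. This is a sketch of the stated procedure, not the commenter’s own code:

```python
# GISS monthly anomalies in hundredths of a degree, from the comment.
giss_2008 = [18, 26, 66, 43, 41, 34, 52, 34, 53, 56, 58, 47]
giss_2011 = [45, 41, 56, 54, 43, 51, 60]        # Jan-Jul observed

# Average 2011-minus-2008 difference over the months observed so far.
offset = round(sum(a - b for a, b in zip(giss_2011, giss_2008))
               / len(giss_2011))                 # -> 10

# Forecast Aug-Dec 2011 as the 2008 value plus that offset.
forecast = [v + offset for v in giss_2008[len(giss_2011):]]
# -> [44, 63, 66, 68, 57], matching the starred values above

annual = round((sum(giss_2011) + sum(forecast)) / 12)   # -> 54
```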

  9. Tamino,

    do you think including an extra autocorrelation is worthwhile?


  10. Gavin's Pussycat

    Tamino, you could easily extend the prediction to nine months: MEI could be predicted by persistence (i.e., propagating the last value, or the average of the last few months, into the future), and the sunspot number by extending the linear trend of the last few months. It would still be better than guesswork.
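A sketch of the extension being suggested, with invented placeholder values standing in for the recent MEI and sunspot series:

```python
import numpy as np

# Invented placeholder values for the last six months of each series.
mei = np.array([-0.2, 0.1, -0.1, -0.3, -0.5, -0.6])
ssn = np.array([56.0, 61.0, 66.0, 70.0, 75.0, 80.0])

# Persistence for MEI: propagate the average of the last few months.
mei_future = np.full(6, mei[-3:].mean())

# Sunspots: extend the linear trend of the last few months.
k = np.arange(len(ssn))
slope, intercept = np.polyfit(k, ssn, 1)
ssn_future = intercept + slope * np.arange(len(ssn), len(ssn) + 6)
```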

    I guess that a prediction with minimal information, i.e. the current temperature +/- the standard deviation (computed for instance over one year), without any other climate model, would have pretty much the same predictive power?

    In other words, does a climate model help significantly reduce the uncertainty, or not?

    [Response: The influence of exogenous factors (el Nino, volcanism, solar activity) turns out to be both strong and statistically significant. So, a model based merely on “persistence” won’t perform as well.]

  12. All I can say is, “If the curve fits, share it!”

  13. Might I note that sunspots are not a random process. They are hard to predict, but an estimate a few months in advance should be pretty good. However, then you will run into ENSO, which is no better.

    By the way, how good is your model compared to persistence?

  14. How much? This is the question.

  15. David B. Benson

    To various critics of the term “climate model”: a statistical model is still called a model. If you insist, call this a statistical climate model.

    • It’s definitely a model, there’s no doubt about that.

      But I don’t see much that can be called “climate”, other than the long term 1.7 deg C/century trend. 3 months is not a climatic time scale by any definition I’ve ever seen.

      It’s much closer to a global weather model.

      By the way, here’s what you get if you simplify the model down to just the 1.73 deg C/century trend:

      August: 0.62 +/- 0.29
      September: 0.62 +/- 0.29
      October: 0.62 +/- 0.29
      November: 0.62 +/- 0.29
      December: 0.62 +/- 0.29
      January: 0.62 +/- 0.29
      February: 0.63 +/- 0.29

      Including the other factors brings the range of uncertainty down about 24% (from +/- 0.29 to +/- 0.22).

  16. For the fun of it I tried to make my own version of tamino’s forecast model. Mine is based on the same data except I replaced his linear time trend with NOAA’s seasonally corrected monthly CO2 data (ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_mlo.txt). I also used a weighted average of the last 7 months of MEI, volcanic activity (all zero of late because I don’t have up to date data), and sunspots.

    In order to forecast into the future I extrapolated the recent CO2, MEI, and sunspot data.
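The two substitutions described here (a weighted multi-month lag in place of a single fixed lag, and a CO2 regressor as the first comment suggested) might be sketched like this; the weights and series values are illustrative assumptions, not the commenter’s actual choices:

```python
import numpy as np

# Illustrative recent values of a predictor (e.g. MEI); not real data.
x = np.array([0.3, 0.1, -0.2, -0.5, -0.9, -1.2, -1.0, -0.8])

# Weighted average of the last 7 months, heavier weight on recent ones.
w = np.arange(1, 8, dtype=float)
w /= w.sum()
predictor = w @ x[-7:]

# A CO2 regressor as suggested in the first comment: the log of the
# average CO2 over the past 12 months (ppm values invented here).
co2 = np.array([389.0, 389.5, 390.1, 390.7, 391.2, 391.6, 392.0,
                392.3, 392.1, 391.8, 391.5, 391.9])
co2_regressor = np.log(co2.mean())
```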

    rest of 2011 forecast :

    August: 0.53 +/- 0.23
    September: 0.55 +/- 0.23
    October: 0.59 +/- 0.23
    November: 0.61 +/- 0.23
    December: 0.62 +/- 0.23

    If this is correct it will make 2011’s average temperature anomaly about 0.53, which would be the 9th warmest on record, but still very warm considering the fact that it’s a la nina year.

    However, it also predicts that 2012 will be a record breaker even if we have a neutral ENSO (GISS 2012 ave T = 69 +/- 10 for 2012 ave MEI = 0.00).

  17. So… I’m lost as to the point of this post.
    Isn’t 3 months just weather in climate timescales?

    [Response: The “point” is stated in the very first clause of the very first sentence.]

  18. Tamino, I’ll bet agin you. Me thinks the next 2 months will be slightly up, then boom, the bottom will drop out by probably a tenth of a degree by the third month.

    It appears the ENSO has just returned to nina conditions.

  19. Robert Murphy is correct about NOAA’s latest update, but it does seem like La Nina may be making a return. The latest Nino 3 and Nino 3.4 SSTs are 0.5C or more below normal, and the models seem to be trending colder. Even so, I doubt we’ll see much ENSO impact on global temperatures by October. As isotopious pointed out, we may be having a similar ENSO evolution as in 2008-2009, and global temperatures did not drop in October or November of 2008. Even if a weak or moderate La Nina returns, I doubt that we’ll see the -0.25C global temp anomaly (UAH-based) that Joe Bastardi predicts. Over the next three months, I cannot think of an adequate justification for going against Tamino’s predictions.

    As an aside and on the topic of El Nino, I wish that James Hansen had not indicated a high probability that we would see a strong El Nino in 2012: http://thinkprogress.org/romm/2011/03/29/207781/nasa-james-hansen-sure-bet-decade-warmest-in-history/
    I did not think it seemed likely at the time, and even though it has nothing to do with climate change, it just gives the fake skeptics ammo for discussing how bad Hansen’s predictions have been, even though his most meaningful predictions have not been bad.

    • A prediction of a -0.25 monthly anomaly (UAH) seems implausible at this point. Unless there is a return to strong La Nina conditions over the next couple of years I think there is a good chance that sub-zero anomalies will be entirely absent from now onwards, until the next UAH baseline change at least.

      Bastardi’s August prediction of a drop to +0.20 is probably about right looking at the UAH dailies – http://discover.itsc.uah.edu/amsutemps/execute.csh?amsutemps (choose Channel 5 and compare 2011 to 2010).

      Regarding Hansen’s predictions, I think a little too much time is wasted worrying about how “skeptics” are going to react. If Hansen were to change his behaviour in order to dodge “skeptics” he would be effectively allowing them to dictate his actions.

  20. Hmmm, 0.5C below normal, is that not the starting mark for La Nina?

    Anyway, my guess is not the least bit scientific or even logical. Mostly just having some fun and betting against the house. I like long shots when I’m at the track too!

  21. I understand the model can only project 3 months into the future because of the 3-month sunspot number lag.

    How is the lengthy match of the first graph calculated? Is each 3-month period calculated from the data immediately before it?

    Sorry if this quite basic question has already been answered above.

    [Response: Each month’s model value is computed from the sunspot count 3 months previously, volcanic forcing 9 months previously, the MEI 4 months previously, and a linear trend.]

  22. Hey Tamino,

    I’ve just been pondering the observed changes to incoming and outgoing infrared radiation spectra, which, as described at Skeptical Science here, provide empirical proof of the enhanced greenhouse effect and of the amount of forcing it imposes on the climate system. They have about 26 years of satellite measurements of the outgoing spectrum. I’m not sure how long ground measurements of the downward spectrum have been done for. The analyses that have been undertaken seem to be basic calculations of the difference between now and when measurements commenced. I wonder if this data is amenable to some form of trend analysis that can perhaps be presented as plots, for example plots of the trend in forcing over time. Perhaps the data is too noisy; otherwise it seems an obvious thing for someone to be doing.

  23. “I’m not sure how long ground measurements of the downward spectrum have been done for.”

    A matter of definition, I suppose–I’d say, based upon your use of the word “spectrum,” that we should probably go back at least to Samuel Langley’s bolometric observations. For useful data, though? “Several decades.” See the BSRN website for station record information on that.

    The link, along with much historical context, can be found here:


  24. Michael Stefan

    Re: comments about La Nina

    The latest ENSO monthly update has a “La Nina watch” in effect and it appears that the current trend, as noted, is heading in that direction. Also, as mentioned in their weekly updates, atmospheric circulation anomalies are still in a La Nina state, having never really gone away from earlier this year, even when the MEI went to neutral. I think it is likely that we will see a repeat of 2008-early 2009 for ENSO, with global temperature probably tracking similarly but warmer.

    Also, on Hansen’s prediction of a strong El Nino starting this summer, he apparently based it on subsurface anomalies, as in the development of a large pool of warm water, which occurs as La Nina weakens. Both 2008 and 2009 had a similar progression, but the outcomes were very different, and unpredictable; NOAA made no mention of an El Nino in May 2009:


  25. Arctic temperature series (north of 66N and 80N) from ERA INTERIM

    • Thanks for this.

      I assume those are average temperatures, in degrees Celsius, for a given latitude at a given time.

      I decided to convert the 80 deg N data to temperature anomalies and calculate the rate of temperature rise. I got a slope (1979 to 2010) of +0.14 (deg C?) per YEAR, almost all of which is occurring in the winter (monthly slopes):

      Jan = +0.20/year
      Feb = +0.19/year
      Mar = +0.13/year
      Apr = +0.20/year
      May = +0.12/year
      Jun = +0.04/year
      Jul = +0.004/year
      Aug = +0.03/year
      Sept = +0.14/year
      Oct = +0.22/year
      Nov = +0.22/year
      Dec = +0.19/year
      Annual = +0.14/year

      That’s a really, really alarming rate of warming. So alarming that I want to know just how reliable the data source is before I draw any strong conclusions. I know ERA is generally top class, but the source website is in Polish (I believe) … so I can’t evaluate who they are.
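The per-month trend calculation described above can be sketched as follows, using a synthetic monthly series in place of the ERA-Interim data (the seasonal cycle, trend pattern, and noise level are all invented):

```python
import numpy as np

# Synthetic monthly series, shaped (years x 12), standing in for 80N data.
rng = np.random.default_rng(1)
years = np.arange(1979, 2011)

seasonal = -20.0 * np.cos(2 * np.pi * np.arange(12) / 12)   # annual cycle
true_slopes = np.where(np.arange(12) < 6, 0.15, 0.05)       # deg C / yr
data = (seasonal[None, :]
        + true_slopes[None, :] * (years - 1979)[:, None]
        + rng.normal(0, 0.5, size=(len(years), 12)))

# Anomalies: subtract each calendar month's own long-term mean.
anom = data - data.mean(axis=0)

# One least-squares slope per calendar month, plus the annual slope.
slopes = [np.polyfit(years, anom[:, m], 1)[0] for m in range(12)]
annual = np.polyfit(years, anom.mean(axis=1), 1)[0]
```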

      • Well, that is the timeseries from original ERA-Interim data downloaded from the ERA-Interim site:
        I’ve checked all times, only first step (0) and 2m temp. Then I downloaded the GRIB file and wrote GrADS scripts to get values for north of 66N and north of 80N. For some reason the GrADS aave function:
        gives some strange results and must be set to:
        (in the first case it gives the average for lon=0, lon=180, lat=66, lat=90). Also there are maps on my webpage:

        Anyway I think it would be a great idea if Tamino could check these results (there is the possibility to download the netCDF file and import it to R, I think).

      • Well, because it was a really alarming rate, I checked my script again and found that I forgot to change the aave function for the second and third GRIB file (unfortunately it is impossible to download the whole data series into one file).

        The TS file is corrected now. The slope is much lower, equal to 0.064/year (0.25 for the last decade). My mistake.
        Maps are OK and were not affected. Example:

      • Thanks pd. New monthly trends for N of 80:

        Jan = +0.070/year
        Feb = +0.074/year
        Mar = +0.014/year
        Apr = +0.012/year
        May = +0.074/year
        Jun = +0.020/year
        Jul = +0.004/year
        Aug = +0.007/year
        Sept = +0.067/year
        Oct = +0.10/year
        Nov = +0.10/year
        Dec = +0.08/year
        Annual = +0.062/year

        That’s a lot closer to what I would have expected.

      • Interesting to ponder those numbers for a bit. I (think I) see two factors:

        1) the well-known fact that warming is most marked in winter (which is reflected in the very low numbers for July and August, and the fact that the four highest trends occur from Oct. through February); and

        2) the greatest change in albedo, due to the lengthening melt season, occurs in the transitional months of September and May (which is reflected in ‘bumps’ in the trend then–though May is more properly called a ‘bump’ than September, which could be termed a ‘ramp’ instead.)

        I’m sure there is more sophisticated analysis that could be done from this point of view. (OK–“actual analysis.”)

  26. Not the next few months, but the next few (almost) years…

    I am going to stick my neck out and guess that within 5 to 10 years there’ll be some consistent record minima set for the May-June period, presaging some significant new overall summer minima following hot on the heels…

    And whilst this is happening, Steve Goddard is likely to claim that the data are phony, and that there is no ice melt occurring…

  27. @BernardJ:

    I just read that post of Goddard’s that you link, where he claims that Irene was merely a tropical storm when it went ashore in the Carolinas, and that there must be some kind of conspiracy to inflate the storm, because he found winds of only 30 knots at a surface station there. And the comments, blaming the conspiracy on Obama. Gads.

    While I was reading it, I was also watching coverage of Irene in Long Beach, a day later and much weaker than when Goddard claimed it only had 30 knot winds, watching a reporter hit by a gust of wind that sent him sliding down the boardwalk.

    The idiocy, the lack of even basic knowledge, and the cock-suredness of their pronouncements, is simply astounding. It really is.

  28. Ernst K, those numbers are quite likely correct, as the humidity over there (80N is on top of the Arctic Ocean) is likely very high during late autumn. I think it’s the water vapor feedback you’re seeing there.

  29. Goddard and co, apart from being either stupid, deluded or dishonest, are the merchants of hindsight.

  30. I think on Goddard’s site some of the posters were confusing their IQ with the wind speed.

  31. I wonder why no-one is betting through betting shops on matters like this – I recall a torrent of global interest in Ladbrokes’ ‘big bird race’ in 2004 – are there no betting shops prepared to take bets on climate change? Is it too dull to bet on? Must we take our guesses/analysis to obscure websites such as Tamino’s …

    So – my 2c of uninformed guesswork says:

    August: 0.52
    September: 0.58
    October: 0.60

    Are there odds?

  32. skythorn,

    Intrade has a set of Climate and Weather contracts, including ones on the monthly GISS anomalies. For 2011, the contracts on offer are for whether those will exceed 0.65 Degrees C.

  33. The sad thing is, I come here for some really meaty analysis, and sometimes I have to wait weeks for it. But when I go to wtfwt and its ilk, they have new stuff every day. It’s so unfair that it’s easy to produce crap in large quantities, but hard to produce good stuff!

  34. The sixth law of uncle Henk:

    The rate of inventing and spreading rubbish is always larger than the rate of gathering data, analyzing, publishing science and debunking nonsense.
    The difference between both rates is proportional to the mean speed of communication.

  35. The regression values I calculated for the anomalies with an AR(1) residual model were, with +/- 2 sigma:

    Aug: 0.53 +/- 0.24
    Sept: 0.60 +/- 0.24
    Oct: 0.62 +/- 0.24
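One common way to carry out a regression with AR(1) residuals, consistent with what this comment (and the post) describe, is to fit by OLS, estimate the residuals’ lag-1 autocorrelation, and fold the persisting residual into the one-step-ahead forecast. A sketch with synthetic data, not the commenter’s actual calculation:

```python
import numpy as np

# Synthetic regression with AR(1) noise (true phi = 0.6), for illustration.
rng = np.random.default_rng(2)
n = 400
x = rng.normal(size=n)
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.6 * eps[i - 1] + rng.normal(0, 0.1)
y = 0.2 + 0.5 * x + eps

# Step 1: ordinary least squares.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Step 2: lag-1 autocorrelation of the residuals.
phi = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
innov_sd = np.std(resid[1:] - phi * resid[:-1])

# Step 3: the one-step-ahead forecast carries the persisting residual,
# with a +/- 2 sigma band from the innovation spread (as in the comment).
x_next = 0.3
y_hat = beta[0] + beta[1] * x_next + phi * resid[-1]
band = 2 * innov_sd
```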

  36. Thought I would mention this,
    Editor of Remote Sensing resigns for publishing Spencer and Braswell (2011)


  37. Aug GISS temp anomaly has come in at 0.61, close enough to the prediction of 0.53. I had a couple of predictions at 54 and 49, so that beats me, even if both are well within the error range.

  38. Actuals: Aug 62, Sep 48, Oct 54

    The pattern is a bit different, but all three are within 1 SD.