New Prediction Paper

Some of my long-time readers have expressed an interest in a recent paper which attempts to predict how global temperature will change over the next few years: “A novel probabilistic forecast system predicting anomalously warm 2018-2022 reinforcing the long-term global warming trend” (Sevellec and Drijfhout 2018, Nature Communications). It has received quite a bit of publicity, featured in numerous newspaper articles along the lines of “the next five years will be extra hot.”


They begin with global temperature data, then attempt to remove the forced component by regression against greenhouse gases, volcanic eruptions, and aerosol emissions. Their prediction system attempts to forecast what is left over, the residuals from this fit. Essentially, it’s an attempt to predict the internal, natural variability of temperature time series. They do so for two metrics: global mean surface temperature (GMT) from NASA, and sea surface temperature (SST) from NOAA (ERSSTv5).
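
To make that concrete, here’s a minimal sketch (in Python) of the kind of decomposition they describe: regress the temperature series on the forcing series, call the fitted values the forced component, and keep the residuals as the internal variability the forecast system then targets. The file name and column names below are placeholders of my own, not their data or their code.

    # A minimal sketch (not the authors' code) of the decomposition described
    # above: regress observed global mean temperature on estimated forcings,
    # call the fit the "forced component," and treat the residuals as the
    # internal variability to be forecast.  File and column names are
    # placeholders, not the paper's actual data sources.
    import numpy as np

    data = np.genfromtxt("gmt_and_forcings.csv", delimiter=",", names=True)
    y = data["gmt"]                      # temperature anomaly (deg C)
    X = np.column_stack([
        np.ones_like(y),                 # intercept
        data["ghg"],                     # greenhouse-gas forcing
        data["volcanic"],                # volcanic forcing
        data["aerosol"],                 # anthropogenic aerosol forcing
    ])

    # Ordinary least squares: beta minimizes ||y - X beta||^2.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    forced = X @ beta                    # the "forced component"
    residuals = y - forced               # what their system then tries to predict

    print("std. dev. of residuals: %.3f degC" % residuals.std(ddof=X.shape[1]))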

I have grave doubts about the validity of their predictions. There are three main reasons. First, I doubt that their regression to remove the forced component is nearly precise enough to do the job. Second, I am not impressed by the “showpiece” they use to tout its predictive skill, the claim that it predicted the “hiatus.” Third, what they’re attempting to predict is the short-term fluctuations, which are dominated by the el Niño southern oscillation (ENSO), but many have tried to predict that with very sophisticated methods and so far, it defies forecasting. I also have some complaints about how this paper is written.

First things first: here’s the graph of their estimate of the forced component:

The red line is annual average, the blue line 5-year averages. I took a very close look at their estimate of the forced change from 1991 to 1992, which includes the strongest effect of the explosion of the Mt. Pinatubo volcano. As nearly as I can tell their forced component shows a drop in annual average temperature that year (as it should), but of only 0.06°C.

My own estimate is based on multiple regression of temperature against volcanic eruptions, el Niño, and solar variations. It enables me to estimate the forced component without el Niño, which represents just the Mt. Pinatubo volcano and solar variation. Solar variation is small by comparison, so it’s dominated by Mt. Pinatubo. My numbers say that the forced component shows a temperature drop in that year of 0.21°C, more than three times as much.
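
For comparison, here’s the same sort of least-squares sketch for the approach I just described, again with placeholder file and column names rather than my actual data: fit against volcanic forcing, an el Niño index, and solar variation, then rebuild the fitted volcanic-plus-solar contribution and read off the 1991-to-1992 drop.

    # A sketch of the comparison described above, with placeholder series: fit
    # temperature against volcanic forcing, an el Nino index, and solar
    # variation, then reconstruct the fitted contribution of volcanic + solar
    # alone and read off the 1991-to-1992 drop.  Column names are placeholders.
    import numpy as np

    data = np.genfromtxt("annual_temps_and_predictors.csv", delimiter=",", names=True)
    year = data["year"]
    y = data["temp"]

    X = np.column_stack([
        np.ones_like(y),
        data["volcanic"],   # volcanic forcing (e.g. aerosol optical depth)
        data["mei"],        # el Nino index
        data["tsi"],        # solar variation (total solar irradiance anomaly)
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Forced component *without* el Nino: intercept + volcanic + solar terms only.
    no_enso = X[:, [0, 1, 3]] @ beta[[0, 1, 3]]

    drop = no_enso[year == 1991][0] - no_enso[year == 1992][0]
    print("estimated 1991-to-1992 drop (volcanic + solar only): %.2f degC" % drop)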

As I see it, their estimate of the impact of Mt. Pinatubo is far too small, and so is their estimated impact of the El Chichón and Mt. Agung volcanoes. Hence I believe their estimate of the level of short-term variation due to known forcing factors is too low. This means that when they attempt to predict the natural variation from the residuals, their estimate of what they’re actually predicting is off. That bodes ill for forecasting.

I also think they have made way too much of the claim to have hindcast the “hiatus.” They define the “hiatus” as “the post-1998 decade cooling seen in GMT anomaly.” But researchers who have looked for it have walked that back, with the purported “hiatus” not starting until 2003, and even those claims have been effectively refuted. Yet they make no reference to the now-considerable literature refuting the whole idea. There was no post-1998 hiatus; there was simply an el Niño outburst in 1998 followed by smaller random fluctuations, and their graph doesn’t seem to me to show any skill in forecasting those fluctuations.

Here it is:

This shows their residuals after removing their estimate of forced variability as a black line, with 1998-2008 (the purported “hiatus”) highlighted in blue, and their hindcast (red circles) of what would have followed after the 1998 el Niño outburst (red square, which is not a hindcast but an observed value).

If you look at their red circles, the forecast based on the 1998 el Niño outburst, it really doesn’t show any meaningful decline. There is a high point in 1999, which is contrary to the observation for that year, followed by a tiny rise in the prediction rather than cooling. But even that is well within the error range. It seems to me that their post-1998 result really just shows “regression to the mean”; it doesn’t pick up the post-1998 fluctuations at all. Yet because of the 1998 outburst, it all looks like a decline, which, when added to the forced component, looks like a “hiatus.” My conclusion is that they really didn’t get the purported “hiatus” at all, and that what they call a “hiatus” is nothing more than regression to the mean plus random (and in this case unpredicted) fluctuations following a strong 1998 el Niño outburst.
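
A toy simulation, nothing from their paper, just random numbers, shows how this works: if you start a comparison at an extreme high value, the years that follow look like a decline even though nothing is being forecast at all.

    # A toy illustration of the "regression to the mean" point: in pure random
    # year-to-year noise, the years following an unusually high value are, on
    # average, just ordinary -- so a plot that starts at the high value shows
    # an apparent "decline" even though nothing is being predicted.
    import numpy as np

    rng = np.random.default_rng(42)
    n_series, n_years = 10000, 11
    noise = rng.normal(0.0, 0.1, size=(n_series, n_years))   # deg C, white noise

    # Select series whose first year is an extreme positive excursion (>= 2 sigma),
    # loosely analogous to starting the comparison at the 1998 el Nino spike.
    extreme = noise[noise[:, 0] >= 0.2]

    print("mean of the extreme first years: %.3f degC" % extreme[:, 0].mean())
    print("mean of the 10 following years:  %.3f degC" % extreme[:, 1:].mean())
    # The first number comes out around 0.24, the second roughly zero: an
    # apparent post-spike "cooling" that is nothing but reversion to the mean.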

It’s quite difficult to tell some of the details of exactly what they’ve done. For instance, every graph shows yearly averages or multi-year averages, and one might well suspect their whole analysis is based on annual-average data. But it’s hard to believe they would do so rather than use monthly-average resolution, especially since regression to remove forced components includes a lag in the effect and a 1-year time scale isn’t enough resolution to get the lag right. My analysis tells me the lag for volcanic forcing is 6 months, which is hard to mimic with 1-year resolution. I could ask them for their computer code (which the paper says is available on request), but frankly I’m not sufficiently impressed with this forecast system to motivate me to put in that effort.
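
For what it’s worth, here’s a sketch of the kind of lag scan I have in mind, with placeholder monthly series (not their data or their procedure): shift the volcanic forcing month by month, fit each shifted version, and see which lag fits best. With annual averages you simply can’t resolve a lag of about 6 months.

    # A sketch of why monthly resolution matters for the lag: shift the
    # volcanic forcing by 0..24 months, regress monthly temperature on each
    # shifted version, and see which lag fits best.  With annual averages the
    # only choices are effectively lag 0 or lag 12 months -- no way to land on
    # ~6.  Series names are placeholders; this is not the paper's procedure.
    import numpy as np

    data = np.genfromtxt("monthly_temp_and_volcanic.csv", delimiter=",", names=True)
    temp = data["temp"]          # monthly temperature anomaly
    volc = data["volcanic"]      # monthly volcanic forcing (e.g. optical depth)

    best_lag, best_r2 = None, -np.inf
    for lag in range(0, 25):                     # candidate lags in months
        y = temp[lag:]                           # temperature responds 'lag' months later
        x = volc[:len(volc) - lag]
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        r2 = 1.0 - resid.var() / y.var()
        if r2 > best_r2:
            best_lag, best_r2 = lag, r2

    print("best-fit volcanic lag: %d months (R^2 = %.3f)" % (best_lag, best_r2))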

The bottom line is that I suspect their estimate of the forced component is poor, which makes their estimate of the natural variation poor, which makes their forecast problematic, and that their talk of reproducing the non-existent “hiatus” is simply wrong.

I also suspect they might turn out to be right about the next five years being hotter than expected (above and beyond the continuing global warming trend). If so, I would hesitate to attribute that to the skill of their model; it seems to me more likely to be a lucky accident. Although, considering the harms we’re seeing now from global warming, it should be called an unlucky accident.


This blog is made possible by readers like you; join others by donating at My Wee Dragon.



7 responses to “New Prediction Paper”

  1. well, it’s going to be hot, then get hotter. something to do with CO2, the rest I am not sure about.

  2. When that five year prediction came out and was touted all over social media and the few MSM stories I saw, I felt as though it was an attempt to quell the fears of the masses, as if to say, “Look, it’s going to be hot these next 5 years, but not to worry. Just be patient and soon enough everything will go back to normal.”

  3. As you well know, simple partitioning in multiple regression removes the signal plus any error variance correlated with the signal simply by chance. Once you partition out several variables, that just plain takes a lot of variance out with it and doesn’t leave a lot of potential additional signal left to work with in many cases.

    Not sure how much this affects what they’re doing, but I don’t see them discuss or control for this issue at all.

  4. Offhand I’d say the way to corroborate or counter their conclusion is to take the data and go at it a completely different way. That’s a tall order, but it could be done. It might be nice if it were done in a very non-parametric, empirical dynamic modelling kind of way.

    I took a short time to look at their methods and it’s clear I would need to spend a good deal more to understand what they are doing with their transfer operator and grasp what exactly it captures.

    A reckoning of the accuracy of their prediction ought to be done quantitatively, too. For example, if it ends up being 30% hotter than they predict, that ought to be rated a failure just as 30% cooler would be.

  5. Thanks for your thoughts on this. The paper brought to mind the ‘extraordinary claims’ adage, with its corollary, but it’s not an easy paper to grasp. (And apparently not just because of the technical content!)

  6. Thank you for the analysis. My initial bias was to discount this paper, having seen a number of near-term prediction papers fall by the wayside (e.g., the Keenlyside initialization paper). Looking at the paper, my opinion of it actually increased, but my opinion of the spinning of the paper decreased.

    Pro: The application of their method to 10 GCMs. A frequent critique I have of papers based on observational data is whether they would work given a model system where we “know” the answer (e.g., climate sensitivity estimates based on the last 100 years data).

    Middle: Figure 3 is in my opinion the key figure that shows what the paper has actually accomplished, which is to be able to substantially narrow a 2 SD uncertainty of about 0.25 degrees for noise around a forced trend on the 1-2 year timescale.

    Cons:
    1) I agree that the ‘hiatus’ prediction was unnecessary and IMO a drawback to the paper.
    2) I think the press (and, to some extent, the paper) way oversell the ability of the method to narrow the distribution both in general, and for the next 5 years in specific. Looking at Table 1, the probability of the 4th year being warmer than the forced trend is all of 72%, and that’s the strongest prediction. Based on the percentages, the probability of all 5 years being warmer than average is only 11%. Which, to be clear, is indeed higher than the 3% naive projection, but, still, is not what I’d get from reading the news about this paper…

  7. Whachamacallit

    I had an inkling that they may be right. But I wonder if that has more to do with the fact that as long as there isn’t a chain of several la ninas like during the “pause”, the next few years won’t see the air temperature nearly as depressed as that prior period.