Models

Now that global temperature has skyrocketed, talk about a “pause” of global warming only embarrasses the deniers who so craved it. Hence they’ve switched — again. Their general strategy is to search long enough and hard enough to find one thing that looks like it’s a sign against warming, which is easy even in a warming world; random fluctuation alone ensures there’s always at least one thing that’s bucking the trend. Temporarily, that is; because global warming is real, those fluctuation-induced signs don’t last. When the one deniers have been crowing about (like the so-called “pause”) turns out not to be, they switch to a different talking point.


Lately they want to focus on computer models, in the vain hope that by discrediting them they can discredit all of climate science. To hear them talk, computer models which simulate climate are completely wrong about everything, especially temperature, without a hope of a clue of how fast Earth is heating up, or will heat up.

The results of computer simulations are shared via CMIP (the Coupled Model Intercomparison Project), and global average surface temperature from the latest versions (CMIP5) can be downloaded here. There are lots of model runs included, some of which only simulate what we’ve already seen historically. To simulate what hasn’t happened yet, we’d need to know what future greenhouse-gas emissions will be like (and other things too, but mainly greenhouse gases). These are different for different computer simulations, but generally follow one of several possible “representative concentration pathways” to cover a range of possibilities — we might get our act together and reduce emissions quickly, or we might keep burning fossil fuels rapidly in a “business as usual” way. The main representative pathways used for model runs are, from least to most carbon emissions, RCP26, RCP60, and RCP85.

Let’s compare the model results for RCP85, the high-emissions scenario, to observed temperature data from NASA. One important complication is that the models report surface air temperature, or SAT, while the observed data are a combination of air temperature over land and sea-surface temperature. This means the observed data underestimate the changes to SAT. However, NASA, in addition to publishing a land+ocean temperature index, also publishes an index based on meteorological stations only. So we’ll compare model results for SAT to NASA’s land-ocean temperature index (“loti”) and to its meteorological-station index (“met”), with the expectation that true SAT lies somewhere between NASA’s two indexes.

I’ll limit the model results to the time span from 1880 to 2020, since observed data from NASA only go from 1880 through April 2016. There’s also the issue of how to align the data, i.e. what baseline to use for computing anomalies. This is something that can be abused to make comparisons misleading, especially if the baseline period is so brief that the records aren’t properly lined up. I’ll use the entire 20th century, which I define as January 1900 through December 1999, as baseline for both NASA data and for model results.
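The baseline alignment just described is simple to do in code. Here’s a minimal sketch in Python (this is not the code behind my figures; the function name and the toy series are purely illustrative):

```python
import numpy as np

def anomalies(years, values, base_start=1900, base_end=1999):
    """Convert a temperature series to anomalies relative to a baseline.

    Using the whole 20th century (1900-1999) as baseline gives a long
    common reference period, so model runs and observations line up
    without the short-baseline tricks mentioned above.
    """
    years = np.asarray(years)
    values = np.asarray(values, dtype=float)
    in_base = (years >= base_start) & (years <= base_end)
    if not in_base.any():
        raise ValueError("series does not cover the baseline period")
    return values - values[in_base].mean()

# Toy example: a steady 0.01 deg/yr warming from 1880 to 2020
yrs = np.arange(1880, 2021)
temp = 0.01 * (yrs - 1880)
anom = anomalies(yrs, temp)
```

The same function applies unchanged to each model run and to each NASA index, which is the point: one baseline for everything.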

I managed to acquire monthly global temperature for 93 model runs for the RCP85 scenario. I’ve plotted them here in gray, together with the NASA monthly loti in red:

rcp85_loti

That alone shows that the models do a pretty good job estimating temperature change, including recently (when they’re forecasts rather than hindcasts). We can also see this, with a bit more clarity, by plotting yearly rather than monthly averages (but be advised, the final average has only 4 months’ data, not a full year):

rcp85_loti_1yr
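Computing those yearly averages is straightforward; here’s a sketch (my own illustrative code), which flags the partial final year rather than hiding it:

```python
import numpy as np

def annual_means(years, values):
    """Collapse monthly anomalies into calendar-year means.

    Returns (year, mean, n_months) tuples; n_months < 12 flags a
    partial year, like a final year with only January-April data.
    """
    years = np.asarray(years)
    values = np.asarray(values, dtype=float)
    out = []
    for y in np.unique(years):
        sel = years == y
        out.append((int(y), float(values[sel].mean()), int(sel.sum())))
    return out
```

A plotting routine can then mark any year with n_months below 12 as provisional.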

There’s still the issue that NASA’s loti underestimates the increase of SAT. A similar comparison of model results with NASA’s met-station index (which overestimates the increase of SAT) shows this:

rcp85_met

Annual averages look like this:

rcp85_met_1yr

Clearly, contrary to what the deniers want you to believe, the models taken as a group have not overestimated global warming. Yet for some reason, this patently false claim is one of their loudest and most frequent.

We can also compare recent trends from the models to those from the observed data. Here are trends since 1970 (with 2-sigma error bars) for all 93 model runs, compared to the trends for NASA data shown as horizontal lines (loti in blue, met-stations in red, 2-sigma error ranges as dashed lines):

trend1970
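For readers who want to reproduce trend estimates like these, here’s a bare-bones OLS sketch. Be warned: it assumes independent residuals, while monthly temperature data are autocorrelated, so realistic 2-sigma ranges need to be widened (that correction is omitted here).

```python
import numpy as np

def trend_with_2sigma(t, y):
    """OLS slope and its 2-sigma uncertainty, assuming iid residuals.

    Real monthly temperature residuals are autocorrelated, so these
    error bars are too narrow; an AR(1)-style correction (omitted
    here) should inflate them.
    """
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    n = t.size
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    # standard error of the slope under the iid assumption
    se = np.sqrt((resid ** 2).sum() / (n - 2) / ((t - t.mean()) ** 2).sum())
    return slope, 2.0 * se
```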

It’s no surprise that not all models give the same results. Some warm faster; the cluster of 6 in a row with high trend rates since 1970 are all runs of the CanESM2 model, and there’s one high rate from GFDL.CM3. Some warm more slowly, especially inmcm4 and MRI.CGCM3.

For more recent trends, we can compute rates since the year 2000:

trend2000

Again, there’s plenty of variation among the models. Yet again, there’s no evidence at all that, taken as a group, they’re contradicted by observed temperature increase.

When deniers claim that models are useless and wrong, they’re not telling the truth. But there’s not much they can talk about these days. Surface temperature is through the roof, satellite temperatures are through the roof, ocean heat is through the roof, wildfires are burning down the roof, sea level is lapping at your ankles and still rising, Arctic sea ice is crashing through the floor, glaciers are vanishing before our very eyes, Greenland is melting at an alarming rate … When it comes to what’s happening with Earth’s climate, deniers have nothing to talk about that won’t embarrass their “don’t worry” narrative. Things have gotten so bad even Ted Cruz has shut up.


This blog is made possible by readers like you; join others by donating at Peaseblossom’s Closet.

30 responses to “Models”

  1. OK, now who is going to clean up all the guts of the denialists you just eviscerated!?!

    Cue moving goalposts in 3, 2,1. Duck!

    • They’d be like the Black Knight in Monty Python and the Holy Grail.

      BK: It’s just a scratch.
      KA: A scratch?! Your guts are on the ground!
      BK: No they’re not. I’ve had worse. No evisceration since 997.

    • If they had real guts, they’d be facing up to reality.

  2. “wildfires are burning down the roof”.

    Wildfires are also a climate feedback and not counted in climate model predictions. I have a reply from the UK Department of Energy and Climate Change which includes:

    “the models used vary in what they include, and some feedbacks are absent as the understanding and modelling of these is not yet advanced enough to include. From those you raise, this applies to melting permafrost emissions, forest fires and wetlands decomposition.”(http://ow.ly/d6Fz300ibH1)

    How important are these missing feedbacks?

    How much do they diminish the “remaining carbon budget”?

  3. Zeke Hausfather

    You can also get blended SAT/SST model fields from our Cowtan et al paper if you want to do an apples-to-apples comparison: http://www-users.york.ac.uk/~kdc3/papers/robust2015/methods.html

    • Zeke, It is not easy for a layman to make a fully apples-to-apples blended comparison, since it requires the use of variable sea ice masks for every single model run. The observational blended indices commonly have SAT over sea ice inferred from land stations, whereas the standard model SST output (tos) is SST under sea ice.

      This problem can be circumvented by not using SAT over sea ice, but rather SST. One of the BEST land/ocean versions does this. Here is a comparison ending in Dec 2015:
      https://drive.google.com/file/d/0B_dL1shkWewaeHZvRVBwWFhLbHc/view?usp=docslist_api

      It is possible to do a similar blend with the Gistemp loti components. The standard ERSSTv4 output is with SST under sea ice, which makes it quite easy. Here is a comparison including April 2016:
      https://drive.google.com/file/d/0B_dL1shkWewaVEhJdWVpc0ozVmM/view?usp=docslist_api

      These two charts also show the approximate updated forcings suggested by Schmidt et al 2014. I have just copied the change from a chart in that paper, not rerun all the models with new forcings :-)

      Another simplification is that I don’t use ensemble confidence intervals, but rather a “fair” interval of +/- 0.2 C around the model mean. It is much easier to download one model mean than to download 100 model runs and make spaghetti graphs and calculate CIs (at least when you are confined to spreadsheets only).

  4. As with any in-vogue denier talking point

    When pushed for supporting evidence they simply play their “joker”

    The conspiracy card – it is the same as creationist invoking the bible in a science debate

    Both unarguable positions, so it is usually the end of the debate – which they take to mean they win

    • Tadaa,

      That they take the conspiracy card killing argument to mean they’ve won may seem stupid. But their real stupidity is demonstrated by the projection involved in the conspiracy argument.

      That projection of stupidity is that they think the rest of humanity are stupid enough to fiddle the stats on an underlying physical process. Sooner or later reality will prove the liar wrong. It takes a really stupid or deluded individual not to appreciate that, and the stupid often don’t realise how stupid they are, hence they think the games they are stupid enough to play are the games everyone plays.

      I carried my doubts about AGW all the way through to 2007*, embarrassingly late, I was guilty of sloppy thinking and an unacknowledged dislike of the implications (I sold my car in 2007 and haven’t owned one since). At this late stage the remaining denialists are a mixture of the pathologically deluded and the arse end of the IQ distribution.

      *I thought the sun was playing a larger role in GW than most accepted, then I read this paper:
      https://www.ethz.ch/content/dam/ethz/special-interest/usys/iac/iac-dam/documents/people-iac/wild/2006GL028031.pdf
      (para 14)

  5. Well – there’s always a 61-month running average on the observed data that can – when ‘properly applied’ – cut the last 2 1/2 years from the record so it would appear that the real data would run on the lower end of the models.
    Just ask Bob Tisdale… :)

  6. Most everyone accepts warming, but few people admit to accepting its progressive nature. No, that’s not the lib-tard leftist kind of progressiveness; it’s the kind where global warming destabilization will be progressively increasing. Inexorably increasing. Nobody wants to see that; nobody accepts it as the harshest reality possible.

    It is a little distressing to skip over this fact, because otherwise we start with accepting global warming and then jump directly into charts and math and climate models, and people can tune out with an ocular glaze-over. We are easily defeated by complexity, when the authentic distilled statement is: “it’s climate change and it will be getting worse.” The rate of change will be increasing, so we want to deal with it sooner rather than later. Only after absorbing that lesson might we want to look at the science of models for the opportunities to influence change.

    • What bugs me is that some people, even reps from Sierra Club, seem to have been trained to believe/say that if emissions stop, atmospheric CO2 concentrations will begin a significant decline. I have to keep pointing people at the Archer, Eby, Brovkin, Ridgwell, Cao, Mikolajewicz, Caldeira, Matsumoto, Munhoven, Montenegro, and Tokos paper (“Atmospheric lifetime of fossil fuel carbon dioxide”) to get them to see how much of a mess we’re getting ourselves into. Quoting:

      The fate and lifetime of fossil fuel CO2 released to the atmosphere is not inherently scientifically controversial, but the packaging of this information for public consumption is strewn with such confusion that Pieter Tans proposed in print that the entire concept should be banished (Tans et al. 1990). How long is global warming from CO2 going to last, policymakers and the public would like to know. If there is a trade-off possible between emissions of CO2 versus emissions of other greenhouse gases, how shall they be compared? The lifetimes of greenhouse gases are incorporated into the construction of global warming potentials, the time-integrated climate impact of each gas relative to CO2.
      .
      .
      .
      There is a strong consensus across models of global carbon cycling, as exemplified by the ones presented here, that the climate perturbations from fossil fuel–CO2 release extend hundreds of thousands of years into the future. This is consistent with sedimentary records from the deep past, in particular a climate event known as the Paleocene-Eocene thermal maximum, which consisted of a relatively sharp increase in atmospheric CO2 and ocean temperature, followed by a recovery, which took perhaps 150,000 years (Kennett & Stott 1991, Pagani et al. 2006) (see also The Paleocene-Eocene Thermal Maximum Climate Event sidebar).

      The gulf between the widespread preconception of a relatively short (hundred-year) lifetime of CO2 on the one hand and the evidence of a much longer climate impact of CO2 on the other arguably has its origins in semantics. There are rival definitions of a lifetime for anthropogenic CO2. One is the average amount of time that individual carbon atoms spend in the atmosphere before they are removed, by uptake into the ocean or the terrestrial biosphere. Another is the amount of time it takes until the CO2 concentration in the air recovers substantially toward its original concentration. The difference between the two definitions is that exchange of carbon between the atmosphere and other reservoirs affects the first definition, by removing specific CO2 molecules, but not the second because exchange does not result in net CO2 drawdown. The misinterpretation that has plagued the question of the atmospheric lifetime of CO2 seems to arise from confusion of these two very different definitions.

      [Response: What worries me is unforseen feedbacks in the carbon cycle. I don’t worry much about the “methane bomb,” but the melting of permafrost and its consequent release of CO2 concerns me greatly. It could bring about a situation in which, even if we totally halt CO2 emissions, atmospheric concentration continues to rise dramatically.]

      • Thanks for the pointer. A PDF print is available here, for those interested:

        http://climatemodels.uchicago.edu/geocarb/archer.2009.ann_rev_tail.pdf

        It’s not incorrect to say that “if emissions stop, atmospheric CO2 concentrations will begin a significant decline,” as concentrations fall fastest at first; Archer et al.’s Figure 1, for instance, shows a 50% decline in ‘remaining emissions’ within 50-250 years for a 1000 Pg ‘slug’.

        But although correct, the phrasing you cite is misleading:

        Nowhere in these model results or in the published literature is there any reason to conclude that the effects of CO2 release will be substantially confined to just a few centuries. In contrast, generally accepted modern understanding of the global carbon cycle indicates that climate effects of CO2 releases to the atmosphere will persist for tens, if not hundreds, of thousands of years into the future.

      • (replying to myself because nesting depth exceeded)

        @Doc Snow, and everyone:

        Okay, agreed, a sufficiently small slug will get back to where we are today. It’s a question of what the atmospheric concentration (ppmv) is when the globe does achieve zero emissions. It ain’t gonna be 700 ppmv on the present course, especially if Solar Radiation Management is instituted. Sure, it decays quickly even if 2500 ppmv is reached but, per the same figure, remains above 1000 ppmv for 5000 years.

        However, while the slug effect is easy to calculate, a more realistic profile shows something else (Solomon, Plattner, Knutti, Friedlingstein, 2009):
        http://www.pnas.org/content/106/6/1704.full.pdf?with-ds=yes
        Their Figure 1 peaks at 1200 ppmv in 2100, when emissions are zeroed, and, yes, comes down to 800 ppmv by 2500, but remains above 700 ppmv through 3000. Worse, global temperatures remain about +4C through 3000, and thermal expansion of the oceans continues to climb past 3000.

        More recent work addresses this further, e.g., Froelicher and Paynter, 2015, http://dx.doi.org/10.1088/1748-9326/10/7/075002, where they trace an increase to about 550 ppmv CO2, then a stop, showing, in the simpler calculations of their Figure 1, something comparable to Archer et al. However, full CMIP5 simulations show much more serious consequences, summarized in their Figure 4.

        Another recent treatment is Clark et al., 2016, which looks at the far future in the context of the deep past (http://dx.doi.org/10.1038/NCLIMATE2923). And what to me is a really scary possibility comes from Ray Pierrehumbert, “Hot climates, high sensitivity”, 2013, http://www.pnas.org/cgi/doi/10.1073/pnas.1313417110, where he concludes:

        One sure solution to the problem posed by uncertainty of climate sensitivity in hot climates is simply not to go there. Unfortunately, it looks increasingly like Nature will step in to answer some of our questions for us, and I doubt we’ll like the answer. The highest emission scenario currently being considered by the Intergovernmental Panel on Climate Change is Representative Concentration Pathway 8.5 (8), which would bring CO2 concentrations up to 2,000 ppm, which is in the upper reaches of the range considered in ref. 2. Even this scenario can be considered somewhat optimistic, in that it assumes that the annual growth in CO2 emissions rate (which has been hovering around 3% for decades) will tail off by 2060 and that the emissions rate will cease growing altogether by 2100, whereafter emissions will trend to zero; unrestrained growth could easily dump twice as much carbon into the atmosphere. It is not known if there are actually enough recoverable fossil fuels to emit that much CO2. Hoping that we run out of fossil fuels before bringing on a climate catastrophe does not seem like sound climate policy, but at present it seems to be the only one we have.

      • Michael Sweet

        :(

      • Hyperg, thanks for yet more good links.

        You are right that Dr. Pierrehumbert’s remarks are scary. Yet, based on his analysis of the US-China emissions pact, I think he’d modify the 2013 comments if writing today.

        It’s not a slam dunk, by any means, but I think it’s now more likely that we’ll come in lower, rather than higher, than RCP 8.5. Admittedly, that’s a low bar in terms of what’s really needed.

      • Yes, I agree on RCP 8.5, if only because of the exponential penetration of solar energy. (Ranges from doubling time of 2 years to 4 years, depending on how you count, and the interval.) Still, RCP4-ish things aren’t great, and things being okay with them depends upon Climate-as-the-Beast being as benign as long term, time-averaged models suggest. The other scary thing, confirmed in a private exchange with Professor John Marshall from MIT, is the possibility of a nonlinear bifurcation, which he said he did not know any reason why it could not occur very rapidly. I had asked about that, since after my scholarship, there was some notion that inertia suggested the climate system could not change that fast. The nonlinear dynamics people suggest that if such was in the works we’d never see it coming. (There is this “slowing down” stuff that’s been investigated, but I don’t think it became diagnostic.) Wally Broecker’s judgment still rules.

  7. Very good post,
    The true global SAT is somewhere between Gistemp loti and Gistemp dTs, but I actually believe that it is a little closer to Gistemp dTs.
    Look at this chart where I have masked out the land and ocean parts of Gistemp dTs (it is actually the “map” version, Giss 1200 km land) plus ERSSTv4, and compared them with the corresponding model output:
    https://drive.google.com/file/d/0B_dL1shkWewaeEc0MVdSZTFLX1k/view?usp=docslist_api

    The chart shows that dTs SAT over oceans is a much better fit to model SAT over oceans, compared to ERSST v4, which is used by Gistemp loti as a proxy for SAT.
    One should remember that the Gistemp 1200 km extrapolation of met stations doesn’t cover the whole ocean like the model output does. However, the Gistemp dTs method seems to give a fair balance of land and sea, which I have checked by reconstructing a global dTs index that includes 29% of the land SAT and 71% of the ocean SAT (with the data in the chart above).

    Knowing that Gistemp dTs may exaggerate the global SAT a little, I have given it a tough match vs the six member CCSM4 Ensemble, which has a quite large climate sensitivity: https://drive.google.com/file/d/0B_dL1shkWewaYzdCMVpjQ1R1Zjg/view?usp=docslist_api
    And behold, doesn’t Gistemp dTs look like a member of the CCSM4 family?
    I have not used the recent year-to-date as a last data point (it would look crazy) but rather the most recent 12-month mean.

    Another advantage of Gistemp dTs vs Loti is that it hasn’t the (probably) spurious large spike in WW2, that stems from ERSST v4 (less pronounced in HadSST3).

  8. RCP == ‘representative concentration pathway’; http://sedac.ipcc-data.org/ddc/ar5_scenario_process/RCPs.html

    (Sorry, stoking up the quarterly copy edit mode ATM…)

  9. Trump and family will lose more wealth to AGW than most of us.

    This has already started as his property insurance rates have risen as insurance and re-insurance companies have seen more and larger claims resulting from more intense storms.

    Nothing like increased costs to get a property manager’s attention.

  10. Nice post. I had downloaded the same series during the discussion at ATTP. Just wondered what is the “best” / “official” way to remove the seasonal cycle? Taking the mean of that cycle over 1900-2000 and subtracting that from each year does not work so well: in the forecast years these cycles grow in size.

    • @mrooijer, canonical ways of removing seasonals use spectral methods, per Park, http://dx.doi.org/10.1029/2009GL040975. The other way is to isolate various components as trend vs periodic vs secular vs random. With these, “periodic” components are most generally defined as a set of contributions which always sum to zero over a fixed duration T. This can be done with dynamic linear models (see Harvey or Petris), and the trend is what one then uses.
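A much cruder alternative to the spectral or state-space approaches mentioned in the comment above, but one that handles a seasonal cycle whose amplitude drifts in the forecast years, is to give each calendar month its own linear fit and keep only the residual plus a common trend. A sketch (illustrative only; the function name and setup are mine, not from any of the cited papers):

```python
import numpy as np

def deseasonalize(t, month, y):
    """Remove a seasonal cycle whose amplitude may change over time.

    Each calendar month gets its own linear fit; the month-specific
    (offset + trend) is replaced by the all-month trend, so a seasonal
    cycle that grows linearly through the forecast period is removed
    exactly, unlike a fixed 1900-2000 climatology.
    """
    t = np.asarray(t, dtype=float)
    month = np.asarray(month)
    y = np.asarray(y, dtype=float)
    a, b = np.polyfit(t, y, 1)  # common trend, fit to all months
    out = np.empty_like(y)
    for m in range(1, 13):
        sel = month == m
        am, bm = np.polyfit(t[sel], y[sel], 1)  # this month's own fit
        out[sel] = y[sel] - (am * t[sel] + bm) + (a * t[sel] + b)
    return out
```

For seasonal amplitudes that change nonlinearly, the spectral (Park) or dynamic-linear-model (Harvey, Petris) approaches are the right tools; this is only a first-order workaround.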

  11. Bernard J.

    The seriously concerning point here is that the data are matching the RCP8.5 trajectory.

    This should be raising eyebrows, and clenching sphincters.

    [Response: In the brief span of this century elapsed so far, the model results are similar for all scenarios.]

    • “In the brief span of this century elapsed so far, the model results are similar for all scenarios.”

      Yes. The choices adults–and especially older ones–make now will impact not their own lives so much as they will the lives of their kids and grandkids. In this respect, climate change is very much a matter of intergenerational justice.

  12. E. Swanson

    Regarding Christy’s graphs which were the focus of Gavin Schmidt’s RC post, I just found a comment by John Christy on climate audit in which he describes his weighting functions used to modify the CMIP-5 model results for his graphs. I’m wondering exactly where Christy found the model results at different pressure levels to be able to produce the TMT adjusted model results which he claims to have plotted. His figures note that his source is the archived data from the KNMI Climate Explorer, however, it appears to me that only surface data is available. Am I missing something here?

  13. Tamino, do you mean the GISTEMP dataset “Means Based on Land-Surface Air Temperature Anomalies Only (Meteorological Station Data, dTs)” with “meteorological stations (“met”)”?

    That is the land only temperature increase. The land warms much faster than the oceans, 1.6 to 2 times faster. Thus if you used this data, you should compare this to the land-only temperature of the models, not their global (ocean+land) temperature.

    P.S. I do not like plotting a 4-month average temperature in a plot with otherwise annual averages. The 4-month average will have a larger variance. Tim Osborn solved this elegantly by computing 12-month averages where the last month is the last month for which you have data.
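The trailing 12-month average suggested in the P.S. above can be sketched with a simple convolution (my illustrative code, not Osborn's):

```python
import numpy as np

def trailing_12mo(values):
    """12-month running means, each ending at a given month.

    The final element averages the last 12 available months, so the
    newest point is a full 12-month mean rather than a high-variance
    partial-year average.
    """
    v = np.asarray(values, dtype=float)
    if v.size < 12:
        raise ValueError("need at least 12 months of data")
    return np.convolve(v, np.ones(12) / 12.0, mode="valid")
```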

    • Victor,
      Gistemp dTs is a global SAT index with ocean SAT inferred by 1200 km extrapolation from island and coastal met stations. A pure land-masked SAT index has a much higher trend. Here’s a trend comparison:

      C/decade 1970-2015 for various Observational temperature indices
      A) 0.177 Gistemp loti (about 29% land SAT + 71% SST)
      B) 0.214 Gistemp dTs (Global SAT)
      C) 0.223 Gistemp 1200 land (map version of dTs)
      D) 0.272 Gistemp 1200 land with Land mask (Land SAT)
      E) 0.192 Gistemp 1200 land with Ocean mask (Ocean SAT)
      F) 0.215 Gistemp dTs reconstruct (29% land SAT + 71% ocean SAT)
      G) 0.118 ERSST v4

      Here are corresponding values for CMIP5 rcp8.5 model means
      A) 0.179 CMIP5 blend (29% land SAT + 71% SST)
      B) 0.209 CMIP5 Global SAT (also compares to C & F above)
      D) 0.286 CMIP5 Land SAT
      E) 0.182 CMIP5 Ocean SAT
      G) 0.135 CMIP5 Global SST

      The fit between the observational and model trends is quite good. Gistemp loti and dTs are spot on compared to their model equivalents, rounded to two decimals.

      One weakness with Gistemp dTs is that it only covers about 80% of the globe, or near 100% land but only 70% of the oceans. The reconstruct (F) above proves that the dTs area weighting method compensates fairly for the lower ocean share.
      With 1800 km extrapolation about 95% of the globe would be covered.

      Also, it is likely that the temperature trends of coastal and island stations are slightly larger than those of the ocean SAT they represent, but not by much, as suggested by the comparison above.

      Gistemp dTs is probably the best comparison for global SAT from models. On my wish-list are Crutem4 kriging (Cowtan & Way) and BEST land (also kriging) without land masks and with near-complete global SAT infill, but the producers have not shown any interest.
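As a sanity check on the numbers in the comment above, the fixed 29% land / 71% ocean blend can be verified directly (a first-order check only; the real indices weight by grid-cell area, and sea-ice masks vary):

```python
def blend_global(land_trend, ocean_trend, land_frac=0.29):
    """Fixed-fraction land/ocean blend of two global-mean trends."""
    return land_frac * land_trend + (1.0 - land_frac) * ocean_trend

# Using trends D and E from the observational list above:
# 0.29 * 0.272 + 0.71 * 0.192 = 0.2152, matching F (0.215) when rounded
blended = blend_global(0.272, 0.192)
```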

      • Thanks. I should read up on what Gistemp dTs does. A bit of a blind spot for me because they do not statistically homogenize the data themselves.

        The Hadley Centre is working on an interpolated/kriged version of HadCRUT. They do listen.

  14. Has anyone looked at Caldeira and Myhrvold?
    http://iopscience.iop.org/article/10.1088/1748-9326/8/3/034039
    Projections of the pace of warming following an abrupt increase in atmospheric carbon dioxide concentration

    K Caldeira1 and N P Myhrvold2

    Published 30 September 2013 • 2013 IOP Publishing Ltd
    Environmental Research Letters, Volume 8, Number 3

    …. The temperature response of atmosphere–ocean climate models is analyzed based on atmospheric CO2 step-function-change simulations submitted to phase 5 of the Coupled Model Intercomparison Project (CMIP5). From these simulations and a control simulation, we estimate adjusted radiative forcing, the climate feedback parameter, and effective climate system thermal inertia, and we show that these results can be used to predict the temperature response to time-varying CO2 concentrations. We evaluate several kinds of simple mathematical models for the CMIP5 simulation results, including single- and multiple-exponential models and a one-dimensional ocean-diffusion model. All of these functional forms, except the single-exponential model, can produce curves that fit most CMIP5 results quite well for both continuous and step-function CO2-change pathways. …