Global Warming: How Far to 1.5°C?

There’s been a lot of talk recently about limiting global warming to 1.5°C, mainly focused on two things: 1) how important it is, and 2) how difficult it will be. This raises an important question: how far have we come already, and how much farther until we reach the 1.5°C limit?

The 1.5°C threshold refers to how much we have warmed the planet above its pre-industrial temperature, which of course raises the question: what was the pre-industrial temperature? It’s easy to say it’s the long-term average around the year 1750 (about the start of the industrial revolution), but we don’t have enough historical thermometer data to know precisely what that was. Various analyses have substituted a different reference level based on a different reference time: some use the average temperature over the period 1850-1899, others the average over 1861-1880. These aren’t pre-industrial times, but at least they give us a place to start, i.e. somewhere around the late 19th century.

A difficulty with using the reference periods 1850-1899 or 1861-1880 is that not all global temperature data sets cover them; in particular, data from NASA and from NOAA don’t start until 1880. There’s also the fact that true pre-industrial was probably cooler than the late 19th century, an issue studied by Schurer et al. specifically in the context of limiting future warming to prominent thresholds (1.5°C or 2°C). They conclude that “pre-industrial” was probably cooler than the late 19th century by as much as 0.2°C. They also show that even a 0.2°C difference can have an important impact on the likelihood of staying below thresholds, and on the timing of exceeding them if we do cross those lines.

Another complication is that most estimates of how far we’ve come are based on the global temperature from the Hadley Centre/Climate Research Unit in the U.K. It’s a fine choice but possibly not the best, because by omitting the Arctic (the fastest-warming region on earth) it may underestimate the total temperature increase.

For this blog post, I’m going to set my reference level to the smoothed mean temperature in the year 1900, which is rather like the 1890-1909 average. There are some advantages to this: it’s a wee bit cooler than most “late 19th century” averages (but by less than 0.1°C), so it won’t be as much warmer than true “pre-industrial”; and data coverage is a bit better around 1900 than before 1900 (especially compared to 1850-1899, when coverage early in the period is really very poor).

I’ve looked at the five most prominent global temperature data sets, HadCRUT4 (from the Hadley Centre/Climate Research Unit in the U.K.), NASA (GISS, the Goddard Institute for Space Studies), NOAA (National Oceanic and Atmospheric Administration), Cowtan & Way (a modified, and in my opinion improved, version of the HadCRUT4 data), and Berkeley (an independent effort, the Berkeley Earth Surface Temperature project). I’ll smooth each time series (with a modified lowess smooth), then offset them so that the value during the year 1900 is zero (making the 1900 smoothed mean the reference level). Then we’ll look at where we stand, in particular at the smoothed value now so we know how far the trend has gone apart from the never-ending fluctuations.
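To make the procedure concrete, here is a minimal sketch in Python. The data are synthetic (a linear trend plus noise standing in for a real anomaly series), and a plain centered moving average stands in for the modified lowess smooth; only the offset-to-1900 logic mirrors the steps described above.

```python
import random

# Synthetic annual anomalies, 1880-2018: a linear trend plus noise,
# standing in for one of the real global temperature series
random.seed(0)
years = list(range(1880, 2019))
anoms = [0.008 * (y - 1880) + random.gauss(0, 0.1) for y in years]

def smooth(series, half_width=10):
    """Centered moving average -- a crude stand-in for the modified
    lowess smooth (the window is truncated near the ends)."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - half_width)
        hi = min(len(series), i + half_width + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

smoothed = smooth(anoms)

# Offset so the smoothed value in 1900 is zero -- that's the reference level
ref = smoothed[years.index(1900)]
offset = [s - ref for s in smoothed]

# "Where we stand now": the smoothed value at the end of the series
print(f"warming since 1900: {offset[-1]:+.2f} °C")
```

With real data the only changes would be reading the actual series and swapping in the real smoother; the offsetting step is what puts all five data sets on the common 1900 reference level.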

Here are the smoothed temperature series from each data set:

All are set to a level of zero during the year 1900. That makes the value now an estimate of how much we’ve warmed the planet, and here are the numbers I get:

  • HadCRUT4: +1.05°C
  • NASA: +1.14°C
  • NOAA: +1.10°C
  • Cowtan&Way: +1.11°C
  • Berkeley: +1.19°C

The first thing to note is that for all five data sets, we’ve already heated up the earth by more than 1°C. The least is according to HadCRUT4, only 1.05°C, while the most is from the Berkeley data, 1.19°C. If our goal is to avoid going above 1.5°C, then the Berkeley estimate gives us a third less room to work with than the HadCRUT4 data.

That’s not an insignificant difference. It means that the available carbon budget (how much we can still emit and stay within the threshold) is a third less; a 300 GtCO2 emission limit using HadCRUT4 data is only a 200 GtCO2 emission limit using Berkeley data.
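The “a third less” arithmetic can be checked directly. The only inputs are the 1.5°C target and the two warming estimates above; the 300 GtCO2 figure is the illustrative budget from the text, and the sketch assumes (as the text does) that the remaining budget scales in proportion to the remaining warming headroom:

```python
# Remaining headroom to the 1.5°C threshold under the two extreme estimates
target = 1.5
warmed_hadcrut4 = 1.05   # smoothed warming since 1900, HadCRUT4
warmed_berkeley = 1.19   # smoothed warming since 1900, Berkeley

room_hadcrut4 = target - warmed_hadcrut4   # 0.45 °C left
room_berkeley = target - warmed_berkeley   # 0.31 °C left

# Berkeley leaves roughly a third less room to work with...
shrink = 1.0 - room_berkeley / room_hadcrut4
print(f"headroom shrinks by {shrink:.0%}")

# ...so an illustrative 300 GtCO2 budget shrinks in proportion,
# landing near the round 200 GtCO2 quoted above
budget = 300.0 * room_berkeley / room_hadcrut4
print(f"equivalent budget: {budget:.0f} GtCO2")
```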

All of which emphasizes just how important it is to begin drastic emissions reduction now.

Even if we do decide on an aggressive emissions reduction plan and actually stick to it, there are other things to worry about. There really are feedbacks in the climate system and some of them might get ugly. Best-known (and probably most-worried-about) is permafrost melt, which threatens to add even more CO2 to the atmosphere than we already have. Perhaps the best approach to these is: don’t poke the bear.

Then there are man-made aerosols, which cool the planet; but if we stop emissions we’ll stop most aerosols too, and end their cooling influence. Estimates of how much man-made aerosols are cooling the world vary widely, but 0.5°C is a common value. If we eliminate 0.5°C of aerosol cooling, we’re over 1.5°C already, but again there are complications, and the rapid decay of short-lived greenhouse gases may offset most of that, making the net long-term effect nearly zero. It’s … complicated.

There’s also the fact that most near-term forecasts are based on a linear approximation of the climate system’s response to perturbations. We all know that a linear response is usually close and often useful, but we also know that the system isn’t linear, and when the nonlinearity kicks in things can get very ugly.

All of which emphasizes just how important it is to begin drastic emissions reduction now. Most of the world seems to be waking up to the fact of just how important and how urgent this is. Unfortunately, the United States is going the wrong way at just the wrong time.

There’s sure to be disagreement about many of my choices and results. But only those in serious denial disagree with how important it is to begin drastic emissions reduction now.

This blog is made possible by readers like you; join others by donating at My Wee Dragon.

  • 25 responses to “Global Warming: How Far to 1.5°C?”

    1. I like the fixed baseline data set, but I have a question.

      It was stated that 1.5C is an important number. However, 1.5 above what? If that was 1.5 above the early 20th century, that would mean one thing. If that was 1.5C above an estimate of pre-industrial (i.e., the 18th century), that is another thing.

      It seems to me that what you did to the best available data here needs to also be done to the concept of 1.5C.

      Putting this another way, perhaps we should take the original argument, that there is a target temperature we should be identifying as a goal (to not pass), and apply that to the combined/parallel data you have here, and re-ask the question: what is the temperature we don’t want to go past, as measured specifically on the above-charted Tamino baseline?

      I have yet to see a conversation about this that does not quickly shift topic to the meaning of the target temperature, whether there are tipping points, etc., and never circle back to the original question. Maybe we can do it differently here. Given the graph above, where is the line that is equivalent to the IPCC’s 1.5C, regardless of the actual meaning or utility of that 1.5C?

      It is a little like giving directions and saying, “go 100 miles west, you’ll find the thing you are looking for” but then you move 20 miles and say the same exact thing again. You can’t. You have to adjust it to 80, or 120 miles.

      If we are truly concerned with the temperature increase SINCE pre-industrial, i.e., an average over a period of time that includes known major variations such as the MWP and LIA (so, say, the average across 1500-1700), then that 1.5 should really be something like 2.1. If we are going to use 1900 as the baseline for “can’t get hotter than X”, then maybe we have 0.5C left.

      • Timothy (likes zebras)

        The 1.5C limit will be derived from climate model predictions. These simulations generally start in the late 19th century (say 1860) to try to reproduce observed temperature change before moving on to the future.

        So the models would use the same baseline – the late 19th century. The fact that this is already warmer than the true pre-industrial period is actually fairly irrelevant to our purpose since the same baseline would be used for observations and predictions.

        However, the differences between the observation datasets are potentially more important. It’s actually worse than tamino states: it’s not just that, if Berkeley is accurate, we have less warming to go until we hit 1.5C; it also implies the climate is more sensitive to the emissions we have already made (about one eighth more sensitive).

        Therefore a 300GtC budget would have to be reduced not to 200GtC, but by a further ninth, to 178GtC.

        It really is important that we do as much as we can as soon as we can – which is drastic cuts right now.

        • JessieHenshaw

          Timothy, “However, the differences between the observation datasets are potentially more important. It’s actually worse than tamino states: it’s not just that, if Berkeley is accurate, we have less warming to go until we hit 1.5C; it also implies the climate is more sensitive to the emissions we have already made (about one eighth more sensitive).”

          What matters is not really the temperature, but whether the predicted severe consequences are coming in ahead of or behind schedule. It’s a gambling decision. I think we all recognize that both perceptually and scientifically the climate changes seem surprisingly more severe than anticipated. That trend might possibly reverse naturally, by some totally unanticipated means, but the question here is what should we bet the habitability of the earth on.

          We now have 50 years of clear data saying that the combined national, business and community efforts on sustainability seem not to have had any global effect at all, for another example. Does that mean we should continue betting the earth on the idea that a few profitable efficiencies are all we need to turn around the economy’s ever-faster-growing environmental impact? No, of course not. Still, it’s very hard to get people to turn away from milking their sacred cows.

    2. Thanks Tamino, for the best, most succinct summary I have read. Thanks for all that you do in these ferociously interesting times.

    3. Thank you for a clear presentation of the heat already generated. I like your starting point at 1900 and how you calculated it. I am surprised to hear that you think that that additional heating from aerosols might actually zero out, but that’s good news and I generally trust your analysis, so I am going to just savor a happy surprise with the aerosol issue. Thank you, well done.

    4. JessieHenshaw

      I did the math for climate warming by using constrained exponentials, projecting from the start of growing CO2 pollution in 1780. It’s a really clear point and place in time in the data, and, constrained by modern data, the projection produces very close to the same 2040 figure the IPCC arrived at.

      What everyone seems to be really overlooking is that it’s the investor decisions that built our whole economy around the ever faster consumption of fossil fuels, so we won’t get ourselves out of this mess till we change their formula. The data also shows that the considerable efforts to slow CO2 pollution growth, governmental, industrial and social movements, have had precisely zero effect over the last 45 years.

    5. We investigated what the different datasets could mean for how close we are to 1.5 or 2 C.

      We only looked at HadCRUT, Cowtan & Way and Berkeley because they all use the same ocean temperature record, so differences are purely because of how they handle unsampled regions and sea ice.

      HadCRUT has got a bit less biased over recent decades because coverage is better since the 1970s. We think that this century it’ll show about 10 % less warming than Berkeley. If you use HadCRUT4 to track global warming, then your carbon budgets go up by quite a bit, about 70 billion tonnes of carbon (250 billion tonnes of CO2).

    6. Ralph Feltens

      “If we eliminate 0.5°C aerosol cooling, we’re over 1.5°C already, but again there are complications and the rapid decay of short-lived greenhouse gases may offset most of that, making the net long-term effect nearly zero. It’s … complicated.”

      I’m still wondering …

      According to this EPA chart, short-lived greenhouse gases (excluding N2O, with a persistence of more than 100 years) currently account for not more than one third of radiative forcing (the energy imbalance that heats up the planet). So, with an immediate stop to all anthropogenic emissions, the current rate of temperature increase should drop by about one third (to 1.1°C/century) within the first decade or so (the lifetime of most short-lived greenhouse gases), but temperature would continue to rise for a long time after that (due to persistent CO2).

      However, the jump by 0.5°C from the loss of aerosols will be felt more or less immediately (i.e. within the first decade).

      The more I think about these longer term developments, the more depressed I am becoming …

    7. I agree we should have started yesterday. If you want to be part of the solution we just published a book “Driving to Net 0 – Stories of Hope for a Carbon Free Future”.

      The book has 15 case studies of households around the USA and Canada who have cut their carbon emissions by 75% or more. Learn how you can become carbon neutral and save money as well.

    8. Here’s a newly-quantified carbon feedback that will push us over 1.5C pretty much all by itself: (Unlike permafrost e.g., note this is a direct CO2 feedback, so there’s no convenient lag and solar geoengineering won’t help.) My personal assessment is that by the time all of these known unknown feedbacks are accounted for we’ll be looking at a commitment to at least +3C. Some scientists have noticed, although their views didn’t make it into the SPM. Here’s Mario Molina quoted in the Guardian this week: “But even with its description of the increasing impacts that lie ahead, the IPCC understates a key risk: that self-reinforcing feedback loops could push the climate system into chaos before we have time to tame our energy system, and the other sources of climate pollution.”

      • Steve, the main carbon feedback is still the use of profits to multiply processes, the core driver of growth and all its impacts. “The growing rate of warming” shows how warming is part of the economy built for the use of fossil fuels, and accelerating their use as a direct effect of its growth.
        So to change that you’d need to change the way investors and businesses make decisions.

      • nielsengammon

        Steve –
        No it won’t. This effect (and all other linear fast feedbacks) has already been fully accounted for in historical temperature changes, observed ocean heat uptake, and estimates of climate sensitivity derived from them, because they’re already happening. There may be hundreds of other fast feedbacks we don’t know about, but on balance they produce the temperature sensitivities that we actually observe.
        The things you need to be scared about are feedbacks that are nonlinear or threshold-dependent and poorly constrained by data. Permafrost fits into that category, but optimists may note that “poorly constrained by data” works both ways.

        • Hmm. But the authors say:

          “In addition to a weakening plant carbon sink, the simulations (…) indicated that global temperatures could rise an extra 0.3 to 1.4 degrees Celsius beyond what has already been projected to occur by scientists studying climate change.”

          On the face of it that would seem to contradict your assertion. Any explanation is appreciated.

      • Jumping in, probably presumptuously. (But sometimes you learn stuff that way.)

        I would guess that the quote from the paper refers to a pure modeling situation. If the feedback isn’t modeled, its effects are going to be additive with respect to other modeling exercises.

        By contrast, (and if I understand correctly) Dr. Nielsen-Gammon is talking about projecting warming based upon “historical temperature changes, observed ocean heat uptake, and estimates of climate sensitivity derived from them”. As he points out, just because we didn’t previously know that leaf thickness was increasing doesn’t mean that it wasn’t already affecting climate. Since we were observing the net result of all real-world feedbacks, known or unknown, when we observed actual temperature rises, the leaf thickness feedback was ‘baked in.’

        Which perhaps raises a somewhat Tamino-esque question: do climate sensitivity estimates based upon observations and paleoclimate history tend to differ systematically from those based upon modeling? If the latter tend to be biased low, that might suggest that we’re still missing significant feedbacks such as this novel leaf-thickening thing. If not, then not a lot is missing from the models, relative at least to sources of error. (We already know those errors are significant, since despite strenuous and prolonged effort, the range of sensitivity estimates still isn’t narrowing that much.)

        • JessieHenshaw

          I agree that the real systemic feedback is not via carbon pollution, though that does display systemic positive feedback statistically. The real feedback is via fossil fuel energy, still growing as fast as ever today (see links). What the discussion most seems to skip over is how deeply embedded that energy-multiplying-energy explosion is in everything we do: our economy turns every productive innovation into new ways to multiply energy use, and has followed an effectively constant rate of feedback all the way from its first emergence as coal-driven industry in 1780.

        • If he’s referring to Earth System Sensitivity, that resolves the apparent contradiction. But the authors weren’t referring to it, nor do the model results featured up front in the new SPM reflect it. The trick here is that while we know ESS incorporates everything (but see below), that knowledge says nothing about the timing of a particular feedback.

          Note that ESS is an equilibrium sensitivity estimated from past warm steady-state climates. So, does ESS tell us much about the potential peak of an unnaturally fast climate transient like the one we’re in now? It would seem that it couldn’t. Indeed, the way we would get a temp peak substantially exceeding ESS would be for a number of large components like this leaf feedback to have much of their effect early rather than spread out over centuries or millennia.

        • Doc, yes, that’s what I had in mind.

          In response to your Tamino-esque question, it really is Tamino-esque because it depends on what you mean by ‘differ systematically’. For Transient Climate Response (TCR), the observed and model estimates are rather similar. For Equilibrium Climate Sensitivity (ECS), there are larger differences in the means, but many observed estimates match many single-model-based values, and apart from a possible feedback gap there are other known causes of differences. These include modeled subtropical low cloud behavior and not-really-correct assumptions inherent in the observation-based approach, and nobody thinks that those known unknowns are small enough that we could see evidence of unknown unknowns in the differences.

          (Lewis and Curry 2018 spend an entire section arguing with a previous paper about whether the model-observation differences can be accounted for by known knowns. This quasi-exchange is useful here for understanding why we’re not ready to assume we’re seeing evidence of unknown unknowns in the differences.)

          Even for ECS, paleo data does little more than exclude some of the extreme outliers on both tails. With Earth System Sensitivity (ESS), the comparison is model vs paleo since observations aren’t long enough, and they’re both in the same ballpark but with such large uncertainties that about all you can say is that we’re not missing something really huge. ESS is not very relevant for the 1.5C vs 2.0C discussion because both scenarios would probably turn global temperatures around fast enough to avoid substantial ESS-type feedbacks. But we’re actually likely to reach temperatures well above 2.0C, and then we’ll find out whether we can stop an ESS-type feedback once it really gets going. Or maybe your kids will find out.

    9. Excellent. I’ve bookmarked this. Most of those datasets show a far scarier picture than the oft quoted 1C that we’ve warmed already. I discount the HADCRUT4 data, given the lack of Arctic coverage, so am fairly certain that we’re at least 1.2C above pre-industrial.

    10. JessieHenshaw

      If you take “pre-industrial” to mean prior to the beginning of industrial CO2 pollution, that would be before 1780. My analysis seems to fit the IPCC’s so closely for where we’ll be in 2040 that I think I can fairly claim to have a good estimate of how far above the 1780 average temp we are now. Reading from my graph, that is now 1.28 deg C of warming since 1780, or 0.94 deg C since ~1917, when the average trend crossed the 1850-1900 baseline average.

    11. Richard Treadgold

      “The first thing to note is that for all five data sets, we’ve already heated up the earth by more than 1°C.”

      I see no mention of adjusting for natural variability, which means you’re asserting there was zero natural warming during this time, though we were emerging from the LIA. What happened to that warming?

      [Response: Tell ya what, bro … I think you’re full of shit, but when you’re ready to come back to sanity we’ll welcome you.]