Mike Mann has written a piece for the Huffington Post about the “fat tail” of likely climate sensitivity, what it means for risk, and how discussion about it is being misused. Worth a read.
I really don’t understand how risk is used in this discussion…I hope you can address it further.
If a risk includes infinite consequences (extinction), why isn’t that risk high enough to be decisive on its own? Isn’t any infinite risk a high risk? Isn’t a 10% chance of extinction still an infinite consequence? (If not, then neither should 99% be.) It is as if there were different kinds of ultimate consequences, or different infinite risks. Is that just a way of rationalizing and avoiding the change that risk analysis would demand? How much infinite risk does it take to fully define a risky situation?
It really seems to me that that ship sailed long ago. From what I read about the inevitability of tipping points, eventual heating beyond survivable levels (for most of the area habitable by our species) is inevitable. The only discussion is about when. I would call it a moderate risk of human extinction in this century, and a higher risk in the next. Why is any such risk acceptable? Is it because we quibble about how fast?
The fire insurance analogy is good, but in the real world it should be extended so that the singular policy becomes a universal one covering everybody. I may buy a policy for my house, but if my house burns down, all of humanity is not affected. For global heating, all of humanity would have to buy a policy insuring all of humanity. And since with catastrophic extinction (the house burning down) the payoff is moot, because everyone is gone, any risk, even the slightest, should be unacceptable and not tolerated. There is no minimal risk of extinction, so insurance coverage against our universal extinction is a waste of resources. We seem to have boxed ourselves in. Quite a conundrum.
The fire insurance analogy is good but has limits. With insurance there is a larger organisation that can cope with the costs. Without that, would you not only ban smoking in your home but also anything that gets warmer than 30 °C, and of course any use of electricity? Maybe rpauli never crosses a street, because there is a low risk of losing his life, which he views as an infinite risk. OTOH maybe it isn’t an infinite risk, because his lifetime is limited, not unlimited.
So it seems pertinent to ask: why are you getting to infinite loss values? Are you assuming an indefinite future for the human race? Are you attaching a rising value to each life lost, until the last viable life has infinite value?
Also why doesn’t discounting things back to present value result in finite values? It is possible to go either way but is growth really greater than the discount rate? Maybe discounting is just seen as an abhorrent value judgement to prefer present to future despite people doing this all the time.
Sorry, I can’t see how to answer without checking these basics first.
Crandles says it well. There is no infinite risk. You are going to die one day, as am I. For young people, global warming may bring that day closer.
We do many things that might cause us to die, perhaps because living without those things would rather negate the benefit of living.
Extinction seems unlikely, although a collapse of civilisation somewhat less so.
At first, I wondered, what is she thinking, but then I decided that idea should be amended to what, is she thinking?
Risk is nominally the product of the cost of an adverse event and its probability of occurrence. Usually, however, we don’t have a definite idea of either; we are usually dealing with an upper bound on both. This means we are dealing with subjective probabilities: they depend on the state of our knowledge. Moreover, even defining cost can be problematic. Some costs are very difficult to monetize (e.g. loss of life), some are difficult to estimate, let alone monetize (e.g. damage to a brand name due to a well-publicized failure), and so on. And unfortunately, the only metric we usually have for estimating a cost is $$.
The other thing is that risk analysis is a comparative exercise. This of course opens the process up to all sorts of mischief, whereby dishonest brokers can compute risk with vastly different levels of conservatism to stack the deck and try to avoid dealing with legitimate risks. Some folks have made careers out of this.
Some really extreme rainfall. And, unfortunately, tragedy.
The tail would be even fatter when considering the societal costs of warming. The societal cost as a function of global surface temperature change is nonlinear. For instance, the simple Nordhaus model uses a societal cost that is proportional to the square of the increase in temperature. So the graph of the likelihood of a given cost due to a doubling of greenhouse gas concentrations would be even more skewed.
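A minimal numerical sketch of this point (the lognormal distribution and all parameter values here are illustrative assumptions, not taken from Nordhaus): squaring a right-skewed warming distribution yields an even more skewed cost distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative right-skewed distribution of warming per CO2 doubling (deg C):
# lognormal with median 3, giving a fat upper tail.
warming = rng.lognormal(mean=np.log(3.0), sigma=0.3, size=100_000)

# Nordhaus-style damage: societal cost proportional to the square of the
# temperature increase (cost units are arbitrary here).
cost = warming ** 2

def skewness(x):
    """Sample skewness (third standardized moment)."""
    x = np.asarray(x)
    return float(np.mean((x - x.mean()) ** 3) / x.std() ** 3)

# The nonlinear damage function makes the cost distribution more skewed
# than the warming distribution it is derived from.
print(skewness(warming), skewness(cost))
```

Any convex damage function produces the same qualitative result; the quadratic is just the simplest choice.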
The idea of insurance is of limited use. With the climate, there’s no external body to step in and make us whole the way there is with car or fire insurance. We’re really just on our own. What we’re faced with is a slow-moving crisis of indeterminate dimensions. Taking reasonable steps to avoid the worst of that crisis means convincing a consistent majority of people to limit their economic well-being and comfort. To do this requires them to be somewhat literate, understand a modicum of math, and possess a capacity to envision the ends of arcane processes, all while fending off misinformation distributed by unscrupulous men with tremendous financial and political resources. And they must do this consistently until technological advances make the sane, reasonable path economically advantageous. Something that could take 100 years.
It sounds like the mission assignment portion of an episode of Mission Impossible.
“…until technological advances make the sane, reasonable path economically advantageous. Something that could take 100 years…”
This is, I suppose, strictly OT. But no: renewable energy sources are now reaching the grid parity milestone around the world:
“By [the end of 2016], analysts predict solar will reach grid parity in most electricity markets, helping to create a level playing field among energy producers, diversifying our power sources, offering more consumer choices and boosting the state and national economy. ”
Battery storage is coming, too:
None of which is to say that the millennium (metaphorically speaking, of course!) has arrived, or that all is well in Whoville, or that there are no longer any problems associated with our energy economy. But I think there is abundant evidence indicating that 100 years is a far longer time frame than we need in order to “make the sane, reasonable path economically advantageous.”
The comments above nicely complement one another, even those that are not direct responses to the others. Risk is a tricky (vapid?) subject, mainly because different fields use the word differently. The confusion over terminology may or may not have been helped by the ISO 31000:2009 standard and its associated terminology explanations, where the definition of “risk” is not “chance or probability of loss” (as it has often been used) but “the effect of uncertainty on objectives”.
I have worked with this concept in the natural hazards field, and there Risk is typically defined using Venn diagrams as the intersection between the Hazard (which may often have an associated probability, such as the odds of a flood above such-and-such a limit) and the Exposure (think of “value at risk”; see Snarkrates’ comment above). Very often a third set is added, the Vulnerability, and Risk is then the intersection of all three. WG2 of the IPCC uses such a definition of risk in the AR5 report (see image on http://ipcc-wg2.gov/AR5/images/uploads/WGII_AR5_FigSPM-1.jpg), and adds stuff to the image that may not be helpful in clarifying the concept.
However, a simple way to conceptualize this three-set diagram is to think of a flood exceeding a certain level (Hazard) hitting a city and exposing a significant amount of economic value (Exposure). Some parts of the city are not well prepared (Vulnerability) and will suffer economic damage. In other parts the flood defences were adequate, and although these parts have similar economic value exposed, the risk applies only to the parts that were vulnerable.
This definition (and example) is not always satisfactory, for instance both the level of exposure and vulnerability will be connected to the magnitude of the hazard (think of a higher flood, making more of the city vulnerable). However, if the hazard has a probability distribution and the exposure, vulnerability are some functions of the hazard, the risk can be estimated for each level of hazard and you will be able to get the risk distribution. And you can calculate expected values.
How does this factor into climate change? It is discussed in some detail in the 2006 Stern Review, where it was pointed out that there is a fundamental asymmetry at play here. Uncertainty about warming has a skewed distribution with a fat upper tail. The exposure grows with warming: the economic consequences of 2 degrees are nothing compared to the consequences of 4 degrees (or higher). And the same goes for the vulnerability. This means you have to seriously examine the damage associated with warming levels that are unlikely but possible.
A nice desktop exercise (you can do it using Excel or R) is to start with a slightly asymmetric distribution of warming (the uncertainty about the hazard), and assume some relationship between exposure and hazard (let the exposure grow linearly, quadratically, or whatever with the magnitude of warming). For this “gedanken experiment” it does not matter too much what you do with the vulnerability (keep it fixed or let it increase with warming too). This allows you to estimate the risk for different warming values and map the distribution thereof. The point is to examine the initial hazard distribution and contrast it with the risk distribution.
The crux of the discussion in the Stern report is that the asymmetries in the relationship between hazard and exposure (and vulnerability) are such that the risk will always be very skewed. Even in the case of a symmetric hazard (no fat tail in warming) the risk will be skewed to higher values. Indeed, if the relationship is sufficiently nonlinear and you calculate expected values from the distributions, you will find that the expected risk does not correspond to the risk calculated for the expected warming: it is higher.
In plain language: if we expect the warming to be 2 degrees, but with a small chance it might exceed that and reach 3 degrees, we have to pay attention to the damage that may occur if the 2-degree warming is surpassed; preparing only for the damage at 2 degrees is insufficient.
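The desktop exercise described above can be sketched in a few lines (Python rather than Excel or R; the lognormal hazard, quadratic exposure, and fixed vulnerability are arbitrary illustrative choices). It also demonstrates the final point numerically: the expected risk exceeds the risk computed at the expected warming.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hazard: slightly asymmetric distribution of warming (deg C).
hazard = rng.lognormal(mean=np.log(2.0), sigma=0.25, size=200_000)

# Exposure grows quadratically with warming; vulnerability held fixed.
vulnerability = 0.1
risk = vulnerability * hazard ** 2  # risk at each sampled warming level

expected_warming = hazard.mean()
risk_at_expected_warming = vulnerability * expected_warming ** 2
expected_risk = risk.mean()

# Jensen's inequality: for a convex damage relationship,
# E[risk(T)] > risk(E[T]).
print(expected_risk > risk_at_expected_warming)  # True
```

The inequality holds for any warming distribution with nonzero variance, not just this one, because the damage relationship is convex.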
I wonder why Mann ignores the high-confidence paleo-derivation of ECS?
From a Hansen poster at the 2008 AGU
Now we can look at 800,000 years. The same sensitivity fits for the earlier times, even better.
Bottom line: The fast feedback climate sensitivity is nailed. It is 3 C for doubled CO2, plus or minus half a degree.
Coincidentally, Hansen also has a Huffington Post piece concerning the risk inherent in a climate-metric uncertainty:
Doubling Down on Our Faustian Bargain
OT, but of interest: the most blatantly climate-change denying leader in the western world is out on his ear, as Tony Abbott is tossed by the Australian Liberal party.
Interestingly, his successor, Malcolm Turnbull, apparently has rather different views:
I recall Malcolm’s original demise at the hands of Tony quite well. Malcolm had his party negotiate an emissions trading scheme with the then Rudd government in good faith. There were many people against this in his own party, and several within the government who weren’t wholehearted about tackling climate change. They liked coal mines.
Abbott became leader and opposed absolutely everything. His first act on assuming power after the 2013 election was to get rid of the carbon tax. He did not, however, get rid of the tax cuts that had been given in compensation for the carbon tax. So he was both environmentally and financially incompetent.
It will be really interesting to see what Malcolm Turnbull decides to do.
A bit of good news out of Oz this morning.
Please see:
As an Australian I take exception to any description of Turnbull as “left of centre.” He is clearly conservative (with some socially progressive leanings), whereas most of his party are reactionaries. Turnbull is more in the mode of the European right than the utter craziness of the US right.
The main interest to those outside Oz is that under his leadership Australia will likely play a more positive role in the upcoming climate negotiations than it would have under the ousted Abbott.
I suspect that this will only be of interest to Aussies, but the climate debate here is interesting. This government has committed to a 26% emissions reduction by 2030, which is on the low side; but given that they repealed the best method of achieving it (a cap-and-trade emissions trading scheme), they don’t have the policies to achieve even that modest goal.
Tamino – I am a lurker here and have learned a great deal from your posts and the comments.
I am something of a newcomer to the climate discussion, but it seems to me that Mann’s reasoning does not follow, unless there is something missing in the article.
Can anybody explain why models predict non-normal distributions with a “fat tail” in the high range, if day-to-day temperature has a normal distribution and the greenhouse effect of CO2 (°C increase in T) tends to saturate with increasing CO2 concentration (logarithmic relationship)?
I would expect a negative skewness with a positive kurtosis.
I’ll give it a try, but remember I’m just an amateur.
A doubling of the carbon dioxide in the atmosphere raises the temperature by a known amount, but the feedbacks are only known to within a factor.
To put numbers to this, carbon dioxide raises the temperature 1.2 K, and estimated feedbacks raise that to 3.0 K, with the feedbacks providing 1.8 K of that. If we’re off by a factor of two on the feedbacks the temperature could run from 2.1 K (1.2 + 1.8/2) to 4.8 K (1.2 + 1.8*2), still with a center estimate of 3.0 K. There are other effects, and actual numbers will differ somewhat.
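The factor-of-two arithmetic in that example can be checked directly:

```python
no_feedback = 1.2             # warming from doubled CO2 alone (K)
feedback = 3.0 - no_feedback  # central feedback contribution: 1.8 K

# If the feedback estimate is off by a factor of two in either direction:
low = no_feedback + feedback / 2   # 1.2 + 0.9 = 2.1 K
high = no_feedback + feedback * 2  # 1.2 + 3.6 = 4.8 K

print(low, high)
```

Note the range is asymmetric about 3.0 K: halving the feedback removes 0.9 K, but doubling it adds 1.8 K.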
Tamino actually had a post on this topic some years back (probably one of the first things I read here) describing the results of a paper by Roe and Baker. The gist of it is that if our estimate of feedback is normally distributed, the modeled climate sensitivity will necessarily be skewed to the right. Small changes in feedback will result in large changes in temperature.
So it is a bit more subtle than Greg suggests, if I am reading his post correctly. It’s not the case that we can only know feedbacks to within “a factor of” some number, but rather that there is a feedback factor, which contributes to climate sensitivity in a non-linear way such that the resulting temperature distribution is skewed.
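A sketch of that mechanism (the no-feedback sensitivity and the mean and spread of the feedback factor below are illustrative guesses, not values from Roe and Baker): a symmetric, normally distributed feedback factor f maps through ΔT = ΔT0/(1 − f) into a right-skewed sensitivity.

```python
import numpy as np

rng = np.random.default_rng(1)

dT0 = 1.2  # no-feedback sensitivity for doubled CO2 (K)

# Feedback factor f, assumed normally distributed and symmetric.
f = rng.normal(loc=0.6, scale=0.1, size=100_000)
f = f[f < 0.95]  # drop the few unphysical draws near the singularity

# Sensitivity: small increases in f produce large increases in dT.
dT = dT0 / (1.0 - f)

# The symmetric spread in f becomes an asymmetric, fat upper tail in dT:
lower_spread = np.median(dT) - np.percentile(dT, 5)
upper_spread = np.percentile(dT, 95) - np.median(dT)
print(lower_spread, upper_spread)  # upper spread is much larger
```

The skew comes entirely from the 1/(1 − f) nonlinearity; no fat tail was assumed for f itself.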
The thing I wonder about (which is fundamental, so I figure this must have been worked out somewhere already) is whether the fat tail is largely a consequence of the algorithm used. You get a fat tail if you assume feedback is normally distributed, but that is an assumption involving an entirely artificial separation of the climate system into ‘direct response’ and feedbacks.
The estimation of climate sensitivity using paleoclimate techniques (example: Hargreaves et al. 2012) often yields little to no fat tail. From a Bayesian perspective, if you have two equally valid techniques, and one yields a fat tail and one doesn’t, the combined knowledge has very little fat tail.
I heard an article on the fires in the Western USA on Public Radio today. Only Governor Brown mentioned that AGW causes drought and makes the fires worse. In the Los Angeles Times articles about the fires once again only Gov. Brown mentioned AGW making the fires worse. The reporter minimized Brown’s comments as not proven. A fire scientist was quoted as saying there is not enough data to prove that AGW made the fires worse (even though they are having the worst drought in 1000 years).
I think informed people, like the readers of this blog, should ask reporters why they do not discuss AGW with these effects. If the general public begins to associate the new weather with AGW they might want more action taken to stop it.
cn, the logarithmic saturation is compensated by the exponential increase in absorber amount, yielding a steady increase in forcing.
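That compensation can be illustrated with the standard simplified forcing expression F = 5.35 ln(C/C0) (Myhre et al. 1998); the 0.5%/yr growth rate below is an arbitrary illustrative choice:

```python
import math

C0 = 280.0      # reference CO2 concentration (ppm)
growth = 0.005  # assumed exponential growth rate of concentration per year

def forcing(concentration):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998)."""
    return 5.35 * math.log(concentration / C0)

# With C(t) = C0 * exp(growth * t), the logarithm cancels the exponential,
# so the forcing grows linearly in time despite the saturating curve:
for year in (0, 50, 100, 150):
    concentration = C0 * math.exp(growth * year)
    print(year, forcing(concentration))
```

The printed forcings rise by the same increment every 50 years, which is the "steady increase" being described.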
Tamino, this is OT, but I saw this paper relating to censored data and analysis, and since you have been thinking about this stuff, I will post the link for you here:
Also OT, but about the non-hiatus:
To quote the relevant part of the abstract:
“Our analysis reveals that there is no hiatus in the increase in the global mean temperature, no statistically significant difference in trends, no stalling of the global mean temperature, and no change in year-to-year temperature increases.”