Not long ago I posted a graph, from NOAA, of the number of billion-dollar weather/climate disasters in the U.S. since 1980.
It is adjusted for inflation, although some may argue about how that was done (using the consumer price index), some may point out that it doesn’t account for the increase in population or in total value at risk, and others that it doesn’t account for improvements in building codes and protective technology. And of course, the United States is not the entire world.
Let’s set all that aside, and consider whether the data as given support claims of an increase in the number of such extreme disasters, not just total but for different classes.
To test for trends we should use something more appropriate than least-squares regression. These are counts, and if the mean number (the expected value) changes then the variance will also change, so the data won’t follow the constant-variance model inherent in least-squares regression. Instead we’ll use Poisson regression, which is tailor-made for the purpose. It confirms right away (and overwhelmingly) that the trend in total disasters is real and strongly “statistically significant.”
However, when we look at the counts for individual types of disaster only one of them gives a statistically significant result: severe storms:
The other types show a range of responses, but the possible error in estimated rates is just too large to draw conclusions. For instance, there’s clearly no noticeable change in the number of billion-dollar freeze events (which is why I’ve plotted the estimated trend as a dashed line rather than a solid line):
Even if the trend estimate seemed to be changing, with only 6 total events in the last 36 years we shouldn’t expect the statistics to support strong conclusions.
There might seem to be an increase in the number of billion-dollar wildfires since 1980; after all, the present estimated number per year is seven times larger than the 1980 estimate. But the uncertainty is still too large to put confidence in that conclusion.
The p-value for the trend is close to the standard 5% cutoff, but at 0.057 it doesn’t make the cut.
One might be tempted to think, therefore, that the increasing trend in total billion-dollar disasters is entirely due to the increasing trend in billion-dollar severe storms. But that’s not the case; if we tally the number of “other” billion-dollar disasters, i.e. those which do not fall into the “severe storm” category, again we see a statistically significant rise:
The salient point is that even when a trend is present, if we’re looking at rare events there may be too few for trends to reach statistical significance. This problem plagues the detection of trends in disasters, and in extreme weather generally. By definition, extreme events are rare — we won’t see very many of them, so we need data for a long time to have enough for conclusions to be reliable.
In fact, some people go out of their way to limit the number of cases just when it’s most necessary to do the opposite. For example, instead of looking at tropical cyclones they may count only those in the Atlantic ocean basin, or only those which reach hurricane strength, or only Atlantic hurricanes that make landfall in the U.S., or only Atlantic hurricanes which were still at hurricane strength when making landfall in the U.S. and had over a billion dollars of insured losses. There are lots of ways to exclude the events that tell a story you don’t want to hear (more to the point, that you don’t want others to hear).
If you like what you see, feel free to donate at Peaseblossom’s Closet.