The “Heartland Institute” is hosting their 14th annual “ICCC” convention this weekend in Las Vegas, Nevada, to carry on their mission as one of the world’s leading organizations of climate deniers. One of the sessions listed on their schedule for this very morning is about the latest global temperature trends, hosted by none other than Anthony Watts, Roy Spencer, and Ross McKitrick. I’m familiar with their work.
I thought it would be a good idea to present an honest appraisal of the subject at hand.
Let’s study the four best-known records of global temperature, namely the data from: NASA (NASA’s Goddard Institute for Space Studies), Berkeley (Berkeley Earth Surface Temperature Project), NOAA (National Oceanic and Atmospheric Administration in the U.S.), and HadCRUT4 (Hadley Centre/Climatic Research Unit in the U.K.). They are estimates of global surface temperature, based on thermometer readings from surface stations and measurements of sea-surface temperature from ocean vessels as well as satellites. All of them provide monthly averages of global temperature anomaly, which is the difference between global temperature during that month and the average for the same month during some “baseline” period of reference. If there’s a trend in temperature, it will show in temperature anomaly.
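As a concrete illustration of the anomaly computation, here’s a minimal sketch with synthetic stand-in data (not any of the actual records): for each calendar month, subtract that month’s average over the baseline period.

```python
import numpy as np

# Synthetic stand-in for a monthly temperature series, 1951-1990,
# with a seasonal cycle plus a small warming trend.
years = np.repeat(np.arange(1951, 1991), 12)
months = np.tile(np.arange(12), 40)
temp = 14.0 + 3.0 * np.sin(2 * np.pi * months / 12) + 0.01 * (years - 1951)

# Climatology: the average of each calendar month over a 1951-1980 baseline.
baseline = (years >= 1951) & (years <= 1980)
clim = np.array([temp[baseline & (months == m)].mean() for m in range(12)])

# Anomaly: each month's value minus that calendar month's baseline average.
anomaly = temp - clim[months]
```

Because the seasonal cycle is removed along with the baseline, the anomaly series exposes the trend without the much larger month-to-month swings of raw temperature.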
The zero point of temperature anomaly is arbitrary, and differs between different data sets, so I’ll re-set it for each so that they all have the same pre-1900 average value, equal to zero. Without further ado, here they are:
It’s hard to separate the different data sets because they’re so close to each other that they nearly plot on top of one another; for the color-blind, the effort is futile. But perhaps you can see that the red line (Berkeley) ends up highest while the blue line (HadCRUT4) ends up lowest.
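The re-baselining step described above, setting each series’ pre-1900 average to zero, can be sketched like so; the numbers here are made-up stand-ins, not real data:

```python
import numpy as np

# Stand-in time axis and anomaly values spanning the 1900 cutoff.
years = np.arange(1890, 1910)
nasa = 0.005 * (years - 1890) - 0.1   # hypothetical anomaly series

def rebaseline(t, anomaly, cutoff=1900.0):
    """Shift a series so its average before `cutoff` is zero."""
    pre = anomaly[t < cutoff]
    return anomaly - pre.mean()

nasa0 = rebaseline(years, nasa)
```

Applying the same shift to every data set puts them all on a common zero point, so their endpoints can be compared directly even though each source chose a different reference period.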
It’s much easier to tell the different data sets apart if I plot yearly averages rather than monthly, and use different symbols (as well as different colors) for the different data sources:
Now it’s rather plain to see that the Berkeley data (red triangles) end up highest while HadCRUT4 (blue x’s) is lowest.
I fit two statistical models to each data set, in order to estimate how fast it’s rising or falling: the trend we seek. One is my favorite lowess smooth, which I’ve programmed to include calculation of the rate of change and its uncertainty (accounting for autocorrelation). The other is a “linear spline,” a function which is piece-wise linear but whose segments meet at their endpoints, so the fit is continuous, and chosen to best fit the data. The endpoints are the “knots” of the spline, which I placed at 20-year intervals. I programmed that to compute the slope estimate (and its uncertainty), which is constant throughout the 20-year span of each segment; that doesn’t mean the actual slope is constant, just that the model’s slope is, so of course its estimate is too.
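A minimal sketch of the linear-spline fit, using a truncated-power (“hinge”) basis with knots every 20 years; the data are synthetic, this is not the author’s actual code, and it omits the uncertainty and autocorrelation calculations:

```python
import numpy as np

# Synthetic series: flat until 1960, then warming at 0.01 degrees C/year.
rng = np.random.default_rng(0)
t = np.arange(1900, 2021, dtype=float)
y = 0.01 * np.maximum(t - 1960, 0) + rng.normal(0, 0.05, t.size)

# Design matrix: intercept, time, and one hinge term per knot.
knots = np.arange(1920, 2001, 20, dtype=float)
X = np.column_stack([np.ones_like(t), t] +
                    [np.maximum(t - k, 0.0) for k in knots])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The slope within each segment is the base slope plus the hinge
# coefficients of all knots to its left.
segment_slopes = beta[1] + np.concatenate([[0.0], np.cumsum(beta[2:])])
```

The hinge terms max(t − k, 0) guarantee continuity at each knot, since each segment’s line starts exactly where the previous one ends.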
Here’s a sample of the two fits, in red for the lowess smooth and in blue for the linear spline, when applied to the data from NASA:
We can notice right away that according to both models, global temperature from NASA has trended up (not just fluctuated up) to about 1.2°C above its pre-1900 level.
Both models return estimates of the trend, the rate of increase or decrease, which I will show for the lowess smooth as a thick red line with pink shading around it for its 95% confidence interval. Do bear in mind that we expect the real value to stray outside the 95% confidence interval about 5% of the time. I’ll show the linear spline’s as a thick blue line, with dashed blue lines above and below for its 95% confidence interval. Here they are for the NASA data:
Before 1920 we can’t conclude with confidence that there’s any trend at all; more properly, the trend is indistinguishable from flat. But from 1920 to 1940 temperature rose at about 0.018°C/year, or about 1.8°C per century. Then from 1940 to 1960 the trend is probably negative (cooling), but not by much, and we can’t be sure. From 1960 to now, earth is getting hotter.
Of special interest is the most recent behavior: the data suggest that warming happened faster during the post-2000 period than during the 1980-2000 span, but the “error bars,” the uncertainty range, are big enough that we can’t really be sure.
That’s according to the data from NASA. The Berkeley and NOAA data tell very much the same story, but that of the HadCRUT4 data is different, especially since 1980, when the other three indicate the warming rate is rising but HadCRUT4 suggests it hasn’t really changed much at all. Here’s the warming rate over time, according to all four data sets, estimated using the lowess smooth:
As with the NASA data, the continued increase in the global warming rate suggested by the Berkeley and NOAA data doesn’t reach “statistical significance” when comparing the 1980-2000 rate to the post-2000 rate. HadCRUT4 actually suggests a slight decrease, but that too doesn’t achieve statistical significance (not even close).
Bottom line: according to three of the four data sets, there is suggestive but inconclusive evidence that the trend rate itself was higher post-2000 than during the two decades from 1980 to 2000; according to HadCRUT4, there’s no such evidence; and according to all four, it is possible the trend rate has been constant (at about 0.019°C/year) since 1980.
The reason we can’t be sure of its continued increase is that the noise level is so big; big enough to blur our view of the trend. It would certainly help if we could remove the noise, but if the noise is happening for no apparent reason then it’s futile to try to tell what it is, and what to remove.
Much of the fluctuation appears to happen “for no apparent reason,” but some of it is due to known causes like volcanic eruptions, the El Niño fluctuation, and changes in solar output. We can estimate what those fluctuations are, not by guessing but by multiple regression (if you’re interested in more detail, see this). If we can estimate those fluctuations, we can remove them and get a better picture of how temperature is changing apart from those known causes.
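The removal-by-regression idea can be sketched as follows. The predictor series here are random stand-ins for the real El Niño, volcanic, and solar indices (e.g. MEI, aerosol optical depth, total solar irradiance), and the regression is ordinary least squares rather than the author’s full analysis:

```python
import numpy as np

# Synthetic monthly data: 40 years of trend plus known-cause fluctuations.
rng = np.random.default_rng(1)
n = 480
t = np.arange(n) / 12.0                      # time in years
enso = rng.normal(size=n)                    # stand-in El Nino index
aerosol = rng.exponential(0.05, size=n)      # stand-in volcanic aerosol
tsi = rng.normal(size=n)                     # stand-in solar output
temp = (0.018 * t + 0.1 * enso - 0.5 * aerosol + 0.05 * tsi
        + rng.normal(0, 0.05, n))

# Regress temperature on time plus the known-cause predictors.
X = np.column_stack([np.ones(n), t, enso, aerosol, tsi])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

# Subtract the fitted exogenous parts; keep the intercept and trend.
adjusted = temp - X[:, 2:] @ beta[2:]
```

The adjusted series fluctuates less around its trend than the raw one, which is exactly why the trend estimates that follow become more precise.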
For the most recent period, since 1980, we have two additional often-referenced data sets to use. They don’t estimate surface temperature, but “TLT,” atmospheric temperature in the lower troposphere, based on satellite observations of microwave brightness: RSS (from Remote Sensing Systems) and UAH (from the University of Alabama in Huntsville). I’ll re-set the zero point so they all have average value zero after 1980. I’ll plot yearly averages with different symbols and colors for different data sources, so you have a fair chance to actually tell them apart:
For analysis purposes I’ll use the monthly data, and here they are for all six sources, which shows you just how much they fluctuate naturally:
That’s a lot of fluctuation! But when we remove the changes due to known causes, we’re left with the changes due to other causes (including global warming), and it looks like this (plotted on the same scale):
Indeed, the fluctuation is reduced greatly and we expect our trend estimates to be more precise, i.e. the uncertainty will be reduced.
Repeating the comparison of the global warming rate over 20-year time spans that I used for the long-term data, this time using the adjusted data for NASA, I get this:
This time, the difference is statistically significant; earth warmed faster since 2000 than it did from 1980 to 2000. We get the same result with other statistical tests. Reducing the noise level by removing fluctuations of known cause, has improved our precision enough to establish a change with the much-sought-after “statistically significant” label.
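A simplified version of the two-span comparison can be sketched like this, fitting an ordinary least-squares slope to each 20-year span and testing the difference with a z statistic. The data are synthetic, and unlike the real analysis this sketch ignores autocorrelation:

```python
import numpy as np

def slope_and_se(t, y):
    """OLS slope of y on t, with its standard error."""
    tc = t - t.mean()
    slope = (tc @ y) / (tc @ tc)
    resid = y - y.mean() - slope * tc
    se = np.sqrt(resid @ resid / (len(t) - 2) / (tc @ tc))
    return slope, se

# Synthetic monthly anomalies: slower warming 1980-2000, faster after.
rng = np.random.default_rng(2)
t1 = np.arange(1980, 2000, 1 / 12)
y1 = 0.018 * (t1 - 1980) + rng.normal(0, 0.05, t1.size)
t2 = np.arange(2000, 2020, 1 / 12)
y2 = 0.025 * (t2 - 2000) + rng.normal(0, 0.05, t2.size)

b1, se1 = slope_and_se(t1, y1)
b2, se2 = slope_and_se(t2, y2)
z = (b2 - b1) / np.hypot(se1, se2)   # z well above 2 means significant
```

With noisy raw data the same difference in slopes yields larger standard errors and a smaller z; shrinking the noise by removing known fluctuations is what pushes the comparison over the significance threshold.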
That’s true for 3 of the 4 surface-temperature data sets: NASA, Berkeley, and NOAA. It’s not the case for HadCRUT4, nor for the lower-troposphere data sets RSS and UAH, which indicate no real rate change between the two periods. The 1980-2000 rate (on the left side) and the post-2000 rate (on the right side) compare thus for the six data sets:
Both TLT data sets, RSS and UAH, indicate a constant global warming rate according to the “compare 20-year time spans” test. For RSS this seems to be a reasonable conclusion, but the UAH data show genuine changes in the global warming rate which don’t show up in this test.
In fact the UAH data are odd (compared to the others) in several ways. If we estimate the average rate of global warming from 1980 to now, the data sets say this:
They don’t agree; in fact the RSS data have definitely been warming faster than the NOAA and HadCRUT4 data. But it’s the UAH rate that sticks out like a sore thumb. It is so much lower than all the other estimates that it casts doubt on the validity of the UAH data.
I used changepoint analysis on the UAH data since 1980 to look for changes in its warming rate. It identified this linear-spline model, with changepoints at 2001.625 and 2014.625:
Here’s what that model says about the global warming rate in the adjusted UAH data, shown as a solid blue line with dashed lines above and below for the uncertainty range, together with a solid red line and pink shaded area for the rate and its uncertainty according to a lowess smooth:
I find it highly implausible that the rate dropped as much as indicated during the 2001.625-2014.625 span, and especially that it reached such low levels. For that span, we can’t even say with confidence that the rate was above zero! No other data set allows for that possibility.
It’s yet another reason I have serious doubt about the validity of the UAH data.
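For readers who want to experiment, a bare-bones changepoint search can be sketched as a grid search over candidate knots, keeping whichever continuous two-segment fit minimizes the residual sum of squares. Real changepoint analysis also penalizes extra parameters and assesses significance; the data below are synthetic:

```python
import numpy as np

def best_knot(t, y, candidates):
    """Return the candidate knot whose two-segment spline fit has least SSE."""
    best_sse, best_k = np.inf, None
    for k in candidates:
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - k, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if sse < best_sse:
            best_sse, best_k = sse, k
    return best_k

# Synthetic monthly series with a genuine slope change at 2000.
rng = np.random.default_rng(3)
t = np.arange(1980, 2020, 1 / 12)
y = (np.where(t < 2000, 0.01 * (t - 1980), 0.2 + 0.03 * (t - 2000))
     + rng.normal(0, 0.05, t.size))

knot = best_knot(t, y, np.arange(1985.0, 2015.0, 0.5))
```

With a real slope change and modest noise, the search lands near the true break; whether a found changepoint is meaningful is a separate significance question.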
This blog is made possible by readers like you; join others by donating at My Wee Dragon.