You probably recall that not too long ago, Kevin Cowtan and Robert Way re-processed the data used in forming the HadCRUT4 global temperature data set. Their goal was to interpolate across unobserved areas in the best way available, by Kriging. They also used satellite data to supplement the interpolation.
As I’ve said before, since the Berkeley team released their “methods” paper I’ve believed that Kriging is the best way to approach the interpolation issue. It was one of those “Doh! — Why didn’t I think of that!” moments. Therefore, despite its relative newness, I think we should consider treating this data set as one of the “main” global temperature data sets. Only time will tell whether that comes to pass.
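For readers who haven't met Kriging before: it interpolates by treating the field as a spatially correlated random process, so the estimate at an unobserved location is a weighted average of nearby observations, with weights derived from the covariance structure. Here's a toy one-dimensional sketch of the idea. To be clear, this is my own illustration, not the Cowtan & Way implementation (which works on the full global grid and estimates its covariance from the data); the exponential covariance, length scale, and nugget below are made-up values for demonstration.

```python
import numpy as np

def krige_1d(x_obs, y_obs, x_new, length_scale=2.0, nugget=1e-6):
    """Toy 1-D kriging: interpolate by treating the series as a
    correlated random process with an exponential covariance.
    Illustration only; length_scale and nugget are made-up values."""
    def cov(a, b):
        # correlation decays with distance on the chosen length scale
        return np.exp(-np.abs(a[:, None] - b[None, :]) / length_scale)

    mean = y_obs.mean()                      # krige the residuals from the mean
    K = cov(x_obs, x_obs) + nugget * np.eye(len(x_obs))
    k = cov(x_obs, x_new)
    weights = np.linalg.solve(K, k)          # kriging weights
    return mean + weights.T @ (y_obs - mean)
```

The key property, and the reason it beats leaving grid cells blank (which implicitly assigns them the global average), is that the estimate in a data gap is informed by the nearest observations, weighted according to how strongly correlated they are with the unobserved location.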
Of course, for that to happen they’ll have to update their data set regularly to reflect new input data. They have indeed updated their data to extend through December of 2013. The updated version utilizes Kriging, but does not supplement data with satellite observations. Let’s see what the new “Kriging” version has to say about recent global temperature.
Here are annual averages of global temperature anomaly according to their data:
The most important difference is for very recent temperatures (since about 2000). Kriging better interpolates sparsely-observed regions, and one of those regions is the Arctic, which has warmed so much recently that even the IPCC reports can't keep up. The net effect of better interpolation across the Arctic is to raise the most recent temperature estimates.
The net result is to cast serious doubt on the much-discussed “pause” in global surface temperature. Let’s try an experiment: Start with the data after 1979 and before 2000. Fit a straight line by linear regression. Extrapolate that line into the future, and surround it with lines extending 2 standard deviations (of the residuals) above and below the extrapolation to define a “range of expectation.” I’ve done this exercise on a number of occasions. I’ve even chosen 1998 as the “boundary” between calculation and prediction for the specific purpose of showing the foolishness of the “no global warming since 1998” claim, but the year 2000 is chosen for no other reason than it’s a nice round number. Consider whether the data since 2000 have wandered outside that range of expectation.
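The recipe above is simple enough to sketch in a few lines of code. This is my own illustration of the procedure, not the exact script behind the figures:

```python
import numpy as np

def expectation_range(years, anoms, split=2000):
    """Fit a straight line to the data before `split`, extrapolate it,
    and flag any points falling outside a +/- 2-standard-deviation band
    (standard deviation of the pre-`split` residuals)."""
    pre = years < split
    slope, intercept = np.polyfit(years[pre], anoms[pre], 1)
    fit = slope * years + intercept
    sd = np.std(anoms[pre] - fit[pre], ddof=2)   # two fitted parameters
    outside = (anoms < fit - 2 * sd) | (anoms > fit + 2 * sd)
    return fit, sd, outside
```

A point lands "outside" only if it departs from the pre-2000 trend by more than twice the typical year-to-year scatter, which is the relevant test for whether recent data contradict continued warming.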
Clearly, the data since 2000 have not wandered outside the range of expectation. If anything, they have on the whole been warmer than the projection, but the difference is nowhere near statistically significant.
Why did I pick 1979 as my starting point? Because that’s the start of the period used to estimate “adjusted” data, which compensate for the influence of El Niño, volcanic aerosols, and solar variations, as was done here. I computed the same adjustment for the Cowtan & Way data. That results in this model of temperature variation (black is monthly data from Cowtan & Way, red is the model):
This enables us to remove the estimated influence of El Niño, volcanic aerosols, and solar variations, computing “adjusted” data, which gives this (annual averages):
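For those curious about the mechanics: the adjustment amounts to a multiple regression of monthly temperature on the exogenous factors together with a trend, after which the fitted El Niño, volcanic, and solar contributions are subtracted. Here's a bare-bones sketch of that idea; the full analysis also fits a time lag for each factor, which this omits, and the variable names are placeholders for whatever index series one uses (e.g. an ENSO index, aerosol optical depth, total solar irradiance):

```python
import numpy as np

def adjust(temps, t, enso, aerosol, solar):
    """Estimate exogenous influences by multiple regression
    (design matrix: constant + linear trend + ENSO + aerosol + solar)
    and subtract them, leaving the trend plus unexplained variation."""
    X = np.column_stack([np.ones_like(t), t, enso, aerosol, solar])
    beta, *_ = np.linalg.lstsq(X, temps, rcond=None)
    exogenous = X[:, 2:] @ beta[2:]     # fitted ENSO + aerosol + solar terms
    return temps - exogenous
```

Note that the trend itself is deliberately left in the adjusted series; only the estimated exogenous wiggles are removed.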
Once again, values since 2000 have not wandered outside the range of expectation. The plain truth is that neither the adjusted nor the unadjusted data support the claim that there has been a “pause” which is statistically significant. Yes, there are still fluctuations, so any one year isn’t necessarily hotter than its predecessor, but the evidence for a “pause” just isn’t strong enough to declare that such a thing has even happened. The unadjusted data from 1979 to the present indicate warming at an average rate of 0.017 +/- 0.003 deg.C/yr, while the adjusted data indicate warming at an average rate of 0.018 +/- 0.002 deg.C/yr.
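Rates of that form come from an ordinary least-squares fit, with the quoted "+/-" being twice the standard error of the slope. A minimal version is below; note it assumes white-noise residuals, whereas autocorrelated residuals (which temperature data certainly have) make the true uncertainty larger, so a serious estimate would correct for that:

```python
import numpy as np

def trend_rate(years, anoms):
    """Ordinary least-squares warming rate and its 2-sigma uncertainty,
    assuming white-noise residuals (autocorrelation would make the
    true uncertainty larger)."""
    x = years - years.mean()                 # center time for a simple formula
    slope = (x @ anoms) / (x @ x)
    resid = anoms - anoms.mean() - slope * x
    se = np.sqrt(resid @ resid / (len(years) - 2) / (x @ x))
    return slope, 2.0 * se
```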
Here’s hoping that Cowtan & Way continue to update their data on a regular basis, so it can at least be among the “main” global temperature data sets. I’m also glad that they’ve provided their gridded data so we can study temperature in specific geographical regions.