You probably recall that not too long ago, Kevin Cowtan and Robert Way re-processed the data used in forming the HadCRUT4 global temperature data set. Their goal was to interpolate across unobserved areas in the best way available, by Kriging. They also used satellite data to supplement the interpolation.
As I’ve said before, since the Berkeley team released their “methods” paper I’ve believed that Kriging is the best way to approach the interpolation issue. It was one of those “Doh! — Why didn’t I think of that!” moments. Therefore, despite its relative newness, I think we should consider treating this data set as one of the “main” global temperature data sets. Only time will tell whether that comes to pass.
Of course, for that to happen they’ll have to update their data set regularly to reflect new input data. They have indeed updated their data to extend through December of 2013. The updated version utilizes Kriging, but does not supplement data with satellite observations. Let’s see what the new “Kriging” version has to say about recent global temperature.
Here are annual averages of global temperature anomaly according to their data:
The most important difference is for very recent temperatures (since about 2000). Because the Kriging better interpolates sparsely-observed regions, and because one of those sparsely-observed regions is the Arctic (which has warmed so much recently that even the IPCC reports can’t keep up), the impact of allowing a better interpolation across this region is to increase the most recent temperatures.
The net result is to cast serious doubt on the much-discussed “pause” in global surface temperature. Let’s try an experiment: Start with the data after 1979 and before 2000. Fit a straight line by linear regression. Extrapolate that line into the future, and surround it with lines extending 2 standard deviations (of the residuals) above and below the extrapolation to define a “range of expectation.” I’ve done this exercise on a number of occasions. I’ve even chosen 1998 as the “boundary” between calculation and prediction for the specific purpose of showing the foolishness of the “no global warming since 1998” claim, but the year 2000 is chosen for no other reason than it’s a nice round number. Consider whether the data since 2000 have wandered outside that range of expectation.
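The experiment described above is easy to reproduce. Here's a minimal sketch using synthetic annual anomalies as a stand-in (the real Cowtan & Way series would be loaded from their data file; the trend and noise level below are made-up but roughly realistic):

```python
import random
from statistics import mean

random.seed(0)

# Synthetic stand-in for annual anomalies: steady warming plus noise.
years = list(range(1979, 2014))
temps = [0.017 * (y - 1979) + random.gauss(0, 0.08) for y in years]

# Fit a straight line by ordinary least squares to 1979-1999 only.
fit = [(y, t) for y, t in zip(years, temps) if y < 2000]
xs, ys = zip(*fit)
xbar, ybar = mean(xs), mean(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in fit)
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

# Standard deviation of the residuals defines the "range of expectation".
resid = [y - (intercept + slope * x) for x, y in fit]
sd = (sum(r * r for r in resid) / (len(resid) - 2)) ** 0.5

# Extrapolate the line and flag any post-2000 value outside the 2-sd band.
outside = [y for y, t in zip(years, temps)
           if y >= 2000 and abs(t - (intercept + slope * y)) > 2 * sd]
print("years outside the range of expectation:", outside)
```

With data generated from a single unbroken trend, the post-2000 values rarely stray outside the band, which is the point of the exercise: staying inside the band is exactly what "no detectable pause" looks like.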
Clearly, the data since 2000 have not wandered outside the range of expectation. If anything, they have on the whole been warmer than the projection, but the difference is nowhere near statistically significant.
Why did I pick 1979 as my starting point? Because that’s the starting time for the estimation of “adjusted” data which are compensated for the influence of el Nino, volcanic aerosols, and solar variations as was done here. I computed the same adjustment for the Cowtan & Way data. That results in this model of temperature variation (black is monthly data from Cowtan & Way, red is the model):
This enables us to remove the estimated influence of el Nino, volcanic aerosols, and solar variations, to compute “adjusted” data, giving this (annual averages):
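The adjustment amounts to a multiple regression of temperature on time plus the exogenous factors, then subtracting the fitted exogenous contributions. Here's a rough sketch with made-up stand-in series (the actual analysis uses the MEI for el Nino, aerosol optical depth for volcanoes, and TSI for the sun, with fitted lags):

```python
import random

random.seed(1)

# Hypothetical monthly series, 1979-2013: a trend contaminated by an
# ENSO-like factor and a volcanic spike (both invented for illustration).
n = 420
t = [i / 12.0 for i in range(n)]                      # years since 1979
enso = [random.gauss(0, 1) for _ in range(n)]         # stand-in ENSO index
aod = [0.15 if 144 <= i <= 168 else 0.0 for i in range(n)]  # stand-in eruption
temp = [0.018 * t[i] + 0.10 * enso[i] - 1.5 * aod[i]
        + random.gauss(0, 0.05) for i in range(n)]

# Design matrix: intercept, trend, ENSO, volcanic.
k = 4
X = [[1.0, t[i], enso[i], aod[i]] for i in range(n)]

# Normal equations X'X beta = X'y, solved by Gaussian elimination.
XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
       for a in range(k)]
Xty = [sum(X[i][a] * temp[i] for i in range(n)) for a in range(k)]
for col in range(k):                                  # forward elimination
    piv = max(range(col, k), key=lambda r: abs(XtX[r][col]))
    XtX[col], XtX[piv] = XtX[piv], XtX[col]
    Xty[col], Xty[piv] = Xty[piv], Xty[col]
    for r in range(col + 1, k):
        f = XtX[r][col] / XtX[col][col]
        XtX[r] = [XtX[r][c] - f * XtX[col][c] for c in range(k)]
        Xty[r] -= f * Xty[col]
beta = [0.0] * k
for r in range(k - 1, -1, -1):                        # back substitution
    beta[r] = (Xty[r] - sum(XtX[r][c] * beta[c]
                            for c in range(r + 1, k))) / XtX[r][r]

# "Adjusted" data: subtract the fitted ENSO and volcanic contributions.
adjusted = [temp[i] - beta[2] * enso[i] - beta[3] * aod[i] for i in range(n)]
print("recovered trend: %.4f deg C/yr" % beta[1])
```

Because the exogenous factors are uncorrelated with the trend, removing them tightens the residual scatter without changing the underlying warming rate.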
Once again, values since 2000 have not wandered outside the range of expectation. The plain truth is that neither the adjusted nor the unadjusted data support the claim that there has been a “pause” which is statistically significant. Yes, there are still fluctuations so any one year isn’t necessarily hotter than its predecessor, but the evidence for a “pause” just isn’t strong enough to declare that such a thing has even happened. The unadjusted data from 1979 to the present indicate warming at an average rate of 0.017 +/- 0.003 deg.C/yr, while the adjusted data indicate warming at an average rate of 0.018 +/- 0.002 deg.C/yr.
Here’s hoping that Cowtan & Way continue to update their data on a regular basis, so it can at least be among the “main” global temperature data sets. I’m also glad that they’ve provided their gridded data so we can study temperature in specific geographical regions.
Reblogged this on Gra Machree and commented:
There is no pause in surface warming.
The oceans warm, rise and acidify.
Global ice retreats.
Species shift polewards and upwards.
There’s an excellent post over at SkS featuring Cowtan and Way’s data set as well. Thanks for this Tamino. I’ve been impressed with their work and wondered what you’d thought of it.
Probably referring to this post today where I broke out the data from Cowtan & Way into El Nino/Neutral/La Nina years (last year I took a similar approach using NOAA data).
Their result has been put in doubt, with their relative inexperience in the climate field given as the "reason." I believe these arguments are not viable, since anyone with a good understanding of the basic processes and good numerical skills can do it right, and the result is entirely plausible. Still, it is important for the broader public that their result be confirmed by the big shots, the old hands of climate science. Did that happen?
Wait, didn’t the deniers always insist that climatologists were a) biased and b) not statisticians, and therefore what was really needed was some independent experts to set things straight? And now C&W are too new?
It’s almost as if this is some sort of paper-thin excuse to ignore inconvenient results, or something.
There’s always an excuse. But that’s the fun part, the excuses are typically contradictory. It’s not warming. There’s a pause. If it’s warming, then it is cosmic rays. If it’s warming, then it is the sun. If it’s greenhouse gases, it’s not CO2. If it’s CO2, then the sensitivity is low.
Hence the term “contrarian.”
If you read Cowtan and Way’s paper, or even just their FAQ
you’ll see there are indeed a lot of issues where they must make judgements about datasets, reconstructions, possible biases, etc. So, with all due respect to their work and to their skills, which are clearly impressive, it might indeed be useful to have some climatically experienced eyes try to replicate their work.
I heard a comment at the AMS Annual Meeting suggesting that a new study led by Ben Santer revisits the Cowtan and Way methodology. If I understood the comment right, there indeed are issues with extrapolating the satellite data to the surface, and the “pause” returns if the procedure is done correctly. However, I may be misinterpreting part of the comment. In any case, it sounds like the Santer study will be out in Nature Geoscience at some point, so keep an eye out for it.
[Response: It will definitely deserve careful study.]
There was a positive article about the work by Stefan Rahmstorf at RealClimate.
I would imagine that some of the organizations that compile global temperature records are looking hard at this work.
You wrote :
“As I’ve said before, since the Berkeley team released their “methods” paper I’ve believed that Kriging is the best way to approach the interpolation issue.”
Could you explain your reasoning?
[Response: Kriging is *designed* for exactly that purpose: to interpolate from known data which show correlation.]
I’m working on a lecture on why kriging is good, explained at an introductory level – watch SkS around April/May hopefully. But here’s an outline of part of it:
Suppose you want to know how tall people are on average. So you take a sample of 30,000 men and 3,000 women and measure them. If you take a simple average, you get a biased result, because women are on average shorter. So you need to upweight the women by a factor of 10. The weighted (stratified) average is less biased.
Now suppose you just have 30 men and 3 women. Can you do the same thing? Yes, but with only 3 women there’s a good chance they may not be representative – all shorter or taller than average. If you upweight them by a factor of 10 the noise due to the unrepresentative sample is inflated as well.
An unweighted average introduces least noise, a fully weighted average is least biased. In practice the best estimate (i.e. with the least error) lies somewhere in between.
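The bias/noise trade-off in the small-sample case is easy to see numerically. A quick sketch with invented height distributions (the numbers are assumptions for illustration only):

```python
import random
from statistics import mean

random.seed(2)

# Hypothetical heights in cm: men ~ N(178, 7), women ~ N(165, 7).
men = [random.gauss(178, 7) for _ in range(30)]
women = [random.gauss(165, 7) for _ in range(3)]

# Simple average: biased toward men, who dominate the sample.
simple = mean(men + women)

# Upweight the 3 women by a factor of 10, as in the text: each woman
# now counts for 10 people, so the sexes carry equal total weight.
stratified = (sum(men) + 10 * sum(women)) / (30 + 10 * 3)

print("simple: %.1f  stratified: %.1f" % (simple, stratified))
```

The stratified average lands near the true population mean (171.5 cm under these assumptions), but because it rests on only 3 women, its sampling noise is inflated by the same factor of 10; an optimal weight would sit somewhere between 1 and 10.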
When it comes to temperature averages, we could weight all cells (or stations) equally – but that leads to a biased estimate. Or we could inflate each estimate to cover the whole area nearer to that station than any other. But that might inflate a single isolated station to cover a whole hemisphere in an extreme case. Kriging naturally produces an ideal compromise – weighting stations by the area closest to that station in dense regions, and by the area over which they are informative in sparse regions.
That’s not the only benefit – there’s also a covariance effect by which kriging takes into account the amount of independent information in each observation, but that’s a bit more complex.
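A toy one-dimensional example shows how the weights fall out of the covariance structure. This is simple kriging with an exponential covariance of unit range (both choices are assumptions made for illustration; real temperature kriging fits the covariance to the data):

```python
import math

# Assumed covariance model: correlation decays exponentially with distance.
def cov(x1, x2):
    return math.exp(-abs(x1 - x2))

# Simple-kriging weights for estimating the value at x0 from two stations:
# solve C w = c, where C holds station-station covariances and c the
# station-target covariances. For two stations the 2x2 inverse is explicit.
def kriging_weights(xs, x0):
    a, b = xs
    C = [[cov(a, a), cov(a, b)], [cov(b, a), cov(b, b)]]
    c = [cov(a, x0), cov(b, x0)]
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    w0 = (C[1][1] * c[0] - C[0][1] * c[1]) / det
    w1 = (C[0][0] * c[1] - C[1][0] * c[0]) / det
    return w0, w1

# Two nearby stations share the work of estimating a point between them...
print(kriging_weights((0.0, 1.0), 0.5))
# ...while a station far from the target contributes almost nothing.
print(kriging_weights((0.0, 10.0), 0.5))
```

In the first call the two stations get equal weight; in the second, the distant station's weight collapses toward zero – the "area over which a station is informative" emerging automatically from the covariances.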
I’ll be looking forward to your SkS post.
I hope the main players adjust their data sets so that deniers can’t manipulate them so easily any more… but I’ve always wondered why satellites show reduced warming since 1998 (for example, this cherry pick).
Can someone explain why? I vaguely remember a reason: satellites don’t scan all areas, especially the polar regions, but then satellites were used to interpolate the polar regions in the Cowtan & Way study… or something about satellites not measuring surface temperatures but temperatures just above the surface…
I remember a story about a paper on a correction to satellite data that closed the gap as compared to surface thermometer records but did something come of that?
[Response: One of the results of Foster & Rahmstorf 2011 was that lower-troposphere temperature (which is what the satellite data represent) is affected much more strongly by temporary factors (el Nino southern oscillation and volcanic aerosols) than surface temperature. Therefore the masking of warming by these factors is much stronger in the satellite data than in the surface data.]
Got it… this temporary dimming of the sun happened at the worst possible time, just when we need climate action… but then again, what the hell am I saying… as if anyone who makes decisions listens to WUWT lunacies, especially Monckton for God’s sake… note to self: really have to stop checking what the inmates at WUWT are on about on any particular day
[Response: I think it’s the recent tendency for la Nina conditions that has really had the main impact. As for decision-makers listening to Monckton, hasn’t he testified as a witness before the U.S. Congress?]
Yes, but the ENSO conditions will turn around soon enough, while the sun’s dormancy looks like it will stick around for many more years, right at the worst time… so I hope the sun’s influence is on the low side
As for the decision-makers, what I meant was I don’t think clowns like Monckton or WUWT convince anyone on the fence; sceptical congressmen are sceptical because they are being lobbied by, or have a stake in, vested fossil-fuel interests… if you read the comments it’s clear the majority of the inmates at WUWT work or have worked in the fossil fuel industry …
[Response: I’m skeptical about that last claim. Not sayin’ it ain’t so … but I’m skeptical.]
There is a discrepancy between the UAH and RSS satellite data because UAH switched to using data from a newer satellite (Aqua AMSU) and RSS still use data from the older NOAA-15 satellite. The older satellite seemingly has problems with diurnal drift which is not properly corrected. Thus the RSS data for the past few years are lower than the UAH data.
This is explained on Roy Spencer’s blog:
Tamino wrote, “The unadjusted data from 1979 to the present indicate warming at an average rate of 0.017 +/- 0.003 deg.C/yr, while the adjusted data indicate warming at an average rate of 0.018 +/- 0.002 deg.C/yr.”
Wow, that seems awfully close to the IPCC prediction of around 0.2 deg C per decade.
“For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios.”
Another pseudo skeptic myth that the IPCC and its models got it all wrong bites the dust.
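The comparison is just a units conversion, but it's worth making explicit (rates and uncertainties are the ones quoted in the post):

```python
# Convert the quoted per-year trends to per-decade, to compare with the
# IPCC's "about 0.2 deg C per decade" projection.
unadjusted = (0.017, 0.003)   # (rate, uncertainty) in deg C/yr
adjusted = (0.018, 0.002)

for name, (rate, err) in [("unadjusted", unadjusted), ("adjusted", adjusted)]:
    print("%s: %.2f +/- %.2f deg C/decade" % (name, 10 * rate, 10 * err))
```

That gives 0.17 +/- 0.03 and 0.18 +/- 0.02 deg C/decade, both consistent with the projection within their uncertainties.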
I found it interesting that kriging was first used to predict the location of ore deposits; I guess everything is related.
Thanks for doing those comparisons!
Yes, we certainly intend to continue updating our reconstructions, at least until the problems we have raised have been dealt with by the major record providers. We’ve also got the version 2 hybrid records coming out soon, probably next week. And the most important update will be a detailed investigation into the differences between our results and GISTEMP. We initially attributed it to the different choice of SST dataset, but it looks like it’s going to be much more complex and more interesting.
One word of warning with respect to the gridded data – we don’t apply any sort of coverage mask to the output. For the long run dataset that means temperatures are provided even for cells which are very distant from any observations – these revert to the estimated global land or ocean mean as appropriate. We’re still thinking about the most convenient way to incorporate this into the released grids.
Just to clarify – some people think we’ve explained the ‘hiatus’. But our work is only one part of the explanation, and only applies to global surface air temperature estimation. El Nino is another, and there is some very important new work coming out shortly. Watch for the Nature Geoscience special issue.
Thanks for the update Kevin, and Robert. I, and I’m sure many of the others who visit Tamino’s site really appreciate the work you’re doing.
The way to really disarm the skeptics would be to overlay the temperature projections that climate models made in each of the past 30 years or so on the actual temperatures, to show how consistent past predictions have been with observations. The problem I’ve encountered trying to convince skeptics is that they claim the slope of the projected increase has continued to decline since 1998. Any help here?