Anthony Watts seems proud of himself, having posted his presentation at the recent “anti-climate” conference of the “Heartland Institute.” He talks mainly about the fact that temperature data are often adjusted before including them in forming a global (or regional) average.
He says that the adjustments are the reason for the apparent rise in temperature that has people so worried. He often implies that un-adjusted data are “truth,” that any adjustment is a violation of their sanctity, and that those who make adjustments are perpetrating a fraud. It’s standard climate denier talk.
Fortunately, people aren’t buying that brand of snake-oil any more. But the subject at hand — making adjustments to temperature data before including them in global/regional averages — deserves interest, mainly because it’s actually interesting.
I took a sample of data from the USHCN (the U.S. Historical Climatology Network), daily temperature for the 31 stations nearest to the town of Chelsea, Vermont (which of course includes Chelsea, VT itself). These are raw data, un-adjusted. I computed temperature anomaly for each of the 31 stations. Then I aligned them (not adjusted; if you’re puzzled by the term “aligned” there’s lots about that on this blog) and formed a regional average temperature anomaly. Here’s what I got:
Not much seems to be changing, mainly because the noise level in daily data is so large that it tends to obscure whatever trend is present. Here are yearly averages of temperature anomaly:
Now we can see that a trend is present, and it’s not too different from what is estimated using “adjusted” data. Remember, this is all raw data, no homogenization, no adjustments, no nothin’.
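For readers who want the mechanics, here’s a minimal sketch of that raw-data pipeline: anomalies from each station’s monthly climatology, alignment, regional averaging, and yearly means. This is my own illustration, not the exact code behind the plots; the USHCN file handling is omitted, and arrays like `temps`, `months`, and `years` are assumed to already hold the daily data.

```python
import numpy as np

def anomalies(temps, months):
    """Temperature anomaly: subtract each calendar month's long-term mean
    (the station's climatology) from the raw readings."""
    out = temps.astype(float).copy()
    for m in range(1, 13):
        sel = months == m
        out[sel] -= np.nanmean(temps[sel])
    return out

def align(anoms, n_iter=25):
    """Align stations (not adjust them): estimate one constant offset per
    station so each best matches the regional mean, then average.
    anoms: 2-D array, stations x time, NaN where a station has no data."""
    offsets = np.zeros(anoms.shape[0])
    for _ in range(n_iter):
        regional = np.nanmean(anoms - offsets[:, None], axis=0)
        offsets = np.nanmean(anoms - regional[None, :], axis=1)
    return np.nanmean(anoms - offsets[:, None], axis=0)

def yearly_means(years, daily_anoms):
    """Average the daily anomalies within each calendar year; averaging
    ~365 noisy days shrinks the scatter by roughly a factor of 19."""
    yrs = np.unique(years)
    return yrs, np.array([np.nanmean(daily_anoms[years == y]) for y in yrs])
```

The alignment step matters because stations cover different time spans; without it, stations entering or leaving the network would masquerade as regional temperature changes.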
Of course I could also plot yearly average temperature anomaly for just Chelsea, VT itself:
That’s … weird. Something is different between the earlier and later data, something significant. It’s possible that the town actually suddenly warmed by 2.5 °C, remaining warmer forever — but as we say in Maine, “‘Tain’t likely.”
We can even compare the Chelsea, VT yearly averages to the regional average; I’ll leave Chelsea as red triangles and put in the regional average as black dots:
It’s easy to see how well the year-to-year fluctuations at Chelsea, VT match those in the regional average. It’s also easy to see that the earliest Chelsea data are well below the regional average, then they suddenly shift to above it. That certainly seems unlikely.
Unless … something happened to the recording system. Maybe they installed a new thermometer, which gave different readings (not all thermometers are created equal). Maybe they moved the station to a different (in this case, hotter) location. There are many things that can cause this behavior, and absolutely none of them have anything to do with actual temperature change in the town of Chelsea, VT or anywhere else. They all have to do with temperature differences between different locations, different instruments, different methods.
Let’s try one final comparison. Let’s look at the difference between the anomalies at Chelsea, VT and those of the regional average. Here are yearly averages:
That early data sticks out like a sore thumb. That’s because it’s the right temperature for the old conditions, but wrong for the new conditions. If you want a temperature record which covers the whole time span at Chelsea, you have to adjust the data to compensate for the difference between those conditions.
The usual way to do so is to calculate the adjustment that will bring the “different” segment into best alignment with the rest of the record. These are the adjustments. When we include them, we get a much better record of how temperature changes at Chelsea, VT, not at the old location which is 2.5 °C cooler. If you want the best regional average, this is what you do to all the records.
The surprise to many people, and the bane of climate deniers, is that when it comes to global/regional averages, adjustment doesn’t have that much effect on the final result. We’ve been through this before.
The first step of course is to identify when something about the recording situation changed. There are excellent mathematical methods to find such discontinuities between a station’s data and its neighbors’, and in the case of Chelsea, VT they identify more than just the early one. In fact there are at least five different intervals of different behavior:
If we adjust the data to bring those intervals into alignment, we’ll get a better set of data. We’ll be able to form a better regional/global average. When we do so, we’ll find the final result has smaller (i.e. better) uncertainty levels. It’s a win-win-win situation.
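Those detection methods come in many flavors (SNHT, pairwise comparisons, Bayesian changepoint models). Here is the crudest possible version, just to show the idea: scan the station-minus-neighbors difference series for the split that maximizes a two-sample t statistic. Real homogenization software tests for significance and recurses to find multiple breaks; this toy finds only one.

```python
import numpy as np

def find_break(diff, min_seg=10):
    """Single-changepoint scan on a station-minus-neighbors difference
    series: return the split index with the largest two-sample t statistic,
    along with that statistic."""
    best_t, best_i = 0.0, None
    for i in range(min_seg, len(diff) - min_seg):
        a, b = diff[:i], diff[i:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        t = abs(a.mean() - b.mean()) / se if se > 0 else 0.0
        if t > best_t:
            best_t, best_i = t, i
    return best_i, best_t
```

A station that truly tracks its neighbors gives a difference series that is flat noise; a station move like Chelsea’s leaves a step that this scan locates easily.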
The best job of adjustments I’ve seen comes from the Bureau of Meteorology in Australia. They don’t just compute a constant offset between conditions, i.e. they don’t assume that the change is a constant rise or fall all the time. Maybe the new conditions give hotter readings during summer and colder readings during winter. They compute a complex transformation from “before” to “after” based on quantile matching, which I regard as an excellent approach.
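The core of quantile matching can be sketched in a few lines. This is an illustration of the idea only, not the Bureau’s actual ACORN-SAT procedure (which matches quantiles using overlapping neighbor records): each “before” value is replaced by the “after” value at the same quantile, so hot days and cold days can receive different corrections.

```python
import numpy as np

def quantile_match(before, after):
    """Map each value in `before` to the value at the same quantile of the
    `after` distribution, so the correction can differ across the range
    (e.g. larger for summer heat than for winter cold)."""
    ranks = np.argsort(np.argsort(before))   # rank of each value, 0 .. n-1
    q = (ranks + 0.5) / len(before)          # plotting-position quantiles
    return np.quantile(after, q)
```

A constant offset is the special case where the “before” and “after” distributions differ only by a shift; quantile matching also handles changes in spread or shape.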
Another way to handle the issue, which I think is brilliant, is used by the Berkeley Earth Surface Temperature project. Instead of “adjusting” the early data because early observing conditions were different, simply split the record into two records: “early Chelsea” and “late Chelsea.” Then include both, independently, in your global/regional average, and you don’t have to apply any adjustments.
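Berkeley Earth calls this the “scalpel,” and it is almost trivially simple to express (a sketch; their actual pipeline then feeds every fragment into a weighted least-squares fit): cut the record at each detected break and treat the pieces as separate stations, so no datum is ever altered.

```python
import numpy as np

def scalpel(series, breakpoints):
    """Split one station record into independent fragments at the breaks.
    Each fragment then enters the regional average as if it were its own
    station; the alignment step absorbs the offsets between fragments,
    and no value in the data is ever modified."""
    edges = [0] + sorted(breakpoints) + [len(series)]
    return [series[a:b] for a, b in zip(edges[:-1], edges[1:])]
```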
Of course, one of the first results of the Berkeley Earth Surface Temperature project was that they got the same answer as all the other guys. The ones who used “adjustments.” The numbers weren’t exactly the same of course (different data, different methods) but the essentials were identical.
Thanks to the kind readers who donated to the blog. If you’d like to help, please visit the donation link below.
This blog is made possible by readers like you; join others by donating at My Wee Dragon.