Making “Adjustments” to Temperature Data

Anthony Watts seems proud of himself, having posted his presentation at the recent “anti-climate” conference of the “Heartland Institute.” He talks mainly about the fact that temperature data are often adjusted before including them in forming a global (or regional) average.

He says that the adjustments are the reason for the apparent rise in temperature that has people so worried. He often implies that un-adjusted data are “truth” and that any adjustment is a violation of its sanctity, together with the implication that those who do so are perpetrating a fraud. It’s standard climate denier talk.

Fortunately, people aren’t buying their brand of snake-oil any more. But the subject at hand — making adjustments to temperature data before including it in global/regional averages — deserves interest, mainly because it’s actually interesting.

I took a sample of data from the USHCN (the U.S. Historical Climatology Network): daily temperature for the 31 stations nearest to the town of Chelsea, Vermont (which of course includes Chelsea, VT itself). These are raw data, un-adjusted. I computed temperature anomaly for each of the 31 stations. Then I aligned them (not adjusted; if you’re puzzled by the term “aligned” there’s lots about that on this blog) and formed a regional average temperature anomaly. Here’s what I got:
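As a rough illustration of the recipe just described (per-station anomalies, then averaging), here is a minimal Python sketch on invented data. Everything below is made up for illustration, and the simple shared-baseline subtraction is only a stand-in for the more careful alignment discussed elsewhere on this blog:

```python
import numpy as np

def station_anomaly(temps, baseline_mask):
    # Anomaly = temperature minus this station's own baseline-period mean.
    return temps - temps[baseline_mask].mean()

# Hypothetical toy data: two stations sharing one climate signal but with
# different absolute temperature levels (e.g. different elevations).
years = np.arange(1900, 2000)
signal = 0.01 * (years - 1900)        # shared warming trend, deg C
st_a = 5.0 + signal                   # cooler site
st_b = 9.0 + signal                   # warmer site

baseline = (years >= 1951) & (years <= 1980)
anom_a = station_anomaly(st_a, baseline)
anom_b = station_anomaly(st_b, baseline)

# In anomaly form the 4-degree offset between the sites disappears,
# so a plain average gives a sensible regional series.
regional = (anom_a + anom_b) / 2
```

Because anomalies remove each station’s own absolute level, stations at different elevations or exposures can be averaged without the offsets between them contaminating the result.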

Not much seems to be changing, mainly because the noise level in daily data is so large that it tends to obscure whatever trend is present. Here are yearly averages of temperature anomaly:

Now we can see that a trend is present, and it’s not too different from what is estimated using “adjusted” data. Remember, this is all raw data, no homogenization, no adjustments, no nothin’.
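The noise-shrinking effect of yearly averaging is easy to demonstrate. In this toy sketch the daily noise is independent, which real weather is not (autocorrelation makes the reduction weaker), so the numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_years = 20
# A tiny warming trend buried under large day-to-day weather noise.
daily = (0.00005 * np.arange(n_years * 365)
         + rng.normal(0.0, 3.0, n_years * 365))

# Averaging 365 independent days shrinks the noise by roughly sqrt(365),
# about a factor of 19, which is why the trend emerges in yearly means
# even though it is invisible in the daily values.
yearly = daily.reshape(n_years, 365).mean(axis=1)
```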

Of course I could also plot yearly average temperature anomaly for just Chelsea, VT itself:

That’s … weird. Something is different between the earlier and later data, something significant. It’s possible that the town actually suddenly warmed by 2.5 °C, remaining warmer forever — but as we say in Maine, “‘Tain’t likely.”

We can even compare the Chelsea, VT yearly averages to the regional average; I’ll leave Chelsea as red triangles and put in the regional average as black dots:

It’s easy to see how well the year-to-year fluctuations at Chelsea, VT match those in the regional average. It’s also easy to see that the earliest Chelsea data are well below the regional average, then suddenly shift to higher than regional. That certainly seems unlikely.

Unless … something happened to the recording system. Maybe they installed a new thermometer, which gave different readings (not all thermometers are created equal). Maybe they moved the station to a different (in this case, hotter) location. There are many things that can cause this behavior, and absolutely none of them have anything to do with actual temperature change in the town of Chelsea, VT or anywhere else. They all have to do with temperature differences between different locations, different instruments, different methods.

Let’s try one final comparison. Let’s look at the difference between the anomalies at Chelsea, VT and those of the regional average. Here are yearly averages:

That early data sticks out like a sore thumb. That’s because it’s the right temperature for the old conditions, but wrong for the new conditions. If you want a temperature record which covers the whole time span at Chelsea, you have to adjust the data to compensate for the difference between those conditions.

The usual way to do so is to calculate the adjustment that will bring the “different” segment into best alignment with the rest of the record. These are the adjustments. When we include them, we get a much better record of how temperature changes at Chelsea, VT, rather than at the old location which is 2.5 °C cooler. If you want the best regional average, this is what you do to all the records.
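A constant-offset adjustment of the kind just described can be sketched in a few lines. The data here are synthetic, and real homogenization weighs many neighbors rather than one regional series, so treat this purely as an illustration of the idea:

```python
import numpy as np

def adjust_segment(station, regional, break_idx):
    # Shift the pre-break part of the station record so that its mean
    # offset from the regional average matches the post-break offset.
    diff = station - regional
    offset = diff[break_idx:].mean() - diff[:break_idx].mean()
    adjusted = station.copy()
    adjusted[:break_idx] += offset
    return adjusted, offset

# Toy example: the station tracks the regional anomaly, but its first
# 40 years were recorded at a site reading about 2.5 deg C cooler.
rng = np.random.default_rng(0)
regional = np.cumsum(rng.normal(0.01, 0.1, 120))
station = regional + rng.normal(0.0, 0.05, 120)
station[:40] -= 2.5

adjusted, offset = adjust_segment(station, regional, 40)
```

With the synthetic 2.5 °C station move above, the recovered offset comes out very close to 2.5, and the adjusted record tracks the regional series over its whole length.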

The surprise to many people, and the bane of climate deniers, is that when it comes to global/regional averages it doesn’t have that much effect on the final result. We’ve been through this before.

The first step of course is to identify when something about the recording situation changed. There are excellent mathematical methods to find such discontinuities between a station’s data and its neighbors’, and in the case of Chelsea, VT they identify more than just the early one. In fact there are at least five different intervals of different behavior:
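A toy version of such a breakpoint search: scan every candidate break in the station-minus-neighbors difference series and keep the one that best splits it into two constant-mean segments. Real homogenization algorithms (pairwise comparisons, multiple breaks, significance testing) are far more elaborate; this only shows the core idea, on invented data:

```python
import numpy as np

def best_break(diff):
    # Find the single break that minimizes the summed squared deviation
    # of each segment from its own mean (a one-break changepoint fit).
    n = len(diff)
    best_k, best_cost = None, np.inf
    for k in range(5, n - 5):  # keep both segments non-trivial
        a, b = diff[:k], diff[k:]
        cost = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Synthetic difference series with a 2.5 deg C shift after 30 "years".
rng = np.random.default_rng(1)
diff = rng.normal(0.0, 0.2, 100)
diff[:30] -= 2.5
k = best_break(diff)
```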

If we adjust the data to bring those intervals into alignment, we’ll get a better set of data. We’ll be able to form a better regional/global average. When we do so, we’ll find the final result has smaller (i.e. better) uncertainty levels. It’s a win-win-win situation.

The best job of adjustments I’ve seen comes from the Bureau of Meteorology in Australia. They don’t just compute a constant offset between conditions, i.e. they don’t assume that the change is a constant rise or fall all the time. Maybe the new conditions give hotter readings during summer and colder readings during winter. They compute a complex transformation from “before” to “after” based on quantile matching, which I regard as an excellent approach.
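The flavor of quantile matching can be sketched with generic quantile mapping. To be clear, this is not the Bureau’s actual procedure, just a toy stand-in on invented data that shows why hot days can receive a different correction than cold ones:

```python
import numpy as np

def quantile_map(x, before_sample, after_sample):
    # Map each value of x to its empirical quantile in the "before"
    # distribution, then read off the matching quantile of "after".
    # Unlike a constant offset, the correction can vary with temperature.
    q = np.searchsorted(np.sort(before_sample), x) / len(before_sample)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(after_sample, q)

rng = np.random.default_rng(2)
before = rng.normal(10.0, 5.0, 2000)   # old instrument/site
after = 1.2 * before + 1.0             # new one reads warmer, more so when hot
mapped = quantile_map(before, before, after)
```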

Another way to handle the issue, which I think is brilliant, is used by the Berkeley Earth Surface Temperature project. Instead of “adjusting” the early data because early observing conditions were different, simply split the record into two records: “early Chelsea” and “late Chelsea.” Then include both, independently, in your global/regional average, and you don’t have to apply any adjustments.
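The record-splitting idea can be sketched very simply. The breakpoint position and data below are invented; the point is only that each fragment gets its own baseline, so no cross-break adjustment is ever applied:

```python
import numpy as np

def split_and_anomalize(temps, breaks):
    # Cut the record at the suspected breakpoints and express each
    # fragment as anomalies from its own mean; the fragments then enter
    # the large-scale average as independent short records.
    edges = [0] + list(breaks) + [len(temps)]
    return [temps[a:b] - temps[a:b].mean()
            for a, b in zip(edges[:-1], edges[1:])]

# Toy record with a 2.5 deg C station move halfway through.
temps = np.concatenate([np.full(50, 5.0), np.full(50, 7.5)])
frags = split_and_anomalize(temps, [50])
```

After splitting, the 2.5 °C jump is absorbed into the two fragments’ separate baselines and never has to be estimated explicitly.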

Of course, one of the first results of the Berkeley Earth Surface Temperature project was that they got the same answer as all the other guys. The ones who used “adjustments.” The numbers weren’t exactly the same of course (different data, different methods) but the essentials were identical.

Thanks to the kind readers who donated to the blog. If you’d like to help, please visit the donation link below.

This blog is made possible by readers like you; join others by donating at My Wee Dragon.


18 responses to “Making ‘Adjustments’ to Temperature Data”

  1. There is some danger in being too aggressive with adjustments, because the trend can be adjusted away along with the offsets. There was trouble with this in trying to adjust tree-ring records; see Esper et al. (2002), Science 295, 22 March 2002.

    • The methods used to reconstruct climate from proxy data are not the same as the methods used to reduce the influence of inhomogeneities in station data by comparison with neighbouring stations.

      If you do a bad job, because the methods are not state-of-the-art or because there are few neighbouring stations, you will on average not change the trend of the raw data (although it may be wrong). If you do a perfect job you will on average get the actual climatic trend. In reality you will be somewhere in the middle and on average undercorrect any problems.

  2. Mitch

    Here is another example of an ‘adjustment’ made within the GHCN V3 data set:

    What you see here is a weather station in Prague, Czechia. It had a negative trend until the moment when it was discovered that, for decades, the station had been reporting about 1.9 °C too much.

    That is the reason why there was, for that station, a sudden trend difference between the unadjusted and adjusted variants of GHCN V3.

    Is that too ‘aggressive’ in your mind?

    Should the people managing the temperature data for these stations rather have ignored all that?

  3. Esper et al. showed that adjusting tree-ring records too much could get rid of signal as well as noise. They produced a 1000-year northern hemisphere temperature record with more amplitude than the Mann et al. reconstruction.

    In your example, there was only one adjustment in a 140-year record, and it was at a major apparent temperature change around 1938. I would expect it is justified. However, if you made 10 or 20 baseline adjustments over that same time period, the warming would go away, because you keep moving the mean.

  4. Here is my boilerplate response to denier “NASA cooled the past by tampering with the data” claims:

  5. My boilerplate response to deniers who claim that NASA “manipulates data to cool the past”.

    [Response: Very well done.]

    • Thank you, Tamino, for the post, and thank you, caer bannog. These two pieces will enable me to whack many moles.

  6. Apologies for the twitter-linked duplicate post (@caerbannog666 December 6, 2019 at 1:25 am).

    I thought it got “lost in the ether”, so I tried reposting the material via a direct WordPress login. Feel free to delete that @caerbannog666 duplicate post (and this one as well).

  7. Jeffrey Davis

    Watts should tell Siberian permafrost that it shouldn’t be thawing.

  8. Michael Sweet

    The largest adjustments were made to increase old sea surface temperatures, which lowers the overall slope. The old buckets were canvas, and the water cooled while the temperature was being measured. If you use unadjusted global temperatures, the slope is lower than what scientists report. Watts uses only land temperatures to hide this.

  9. Zeke Hausfather provides a very useful account on the Carbon Brief website of the adjustments made to temperature records, here:
    The take home messages from this account are as follows:
    i) adjustments to land surface temperature data on a global scale slightly increase the rate of warming since 1880.
    ii) adjustments to sea surface temperature data on a global scale increase the rate of warming since 1880 to a far greater extent than land data reduces it.
    iii) the overall effect of adjustments to the global land/ocean record is to REDUCE the rate of global warming since 1880; specifically,
    iv) adjustments to the temperature series since 1970 leave the GLOBAL warming trend just 4% higher than in the raw data, and
    v) adjustments to the temperature series since 1880 have DECREASED the GLOBAL warming trend (20% slower warming than the unadjusted figures), as shown in the graph posted by caer bannog above, which is also shown in the Carbon Brief article. This graph is also available here:

    Serial deceivers and misinformers, such as Anthony Watts in the presentation mentioned by Tamino above, and Tony Heller in numerous YouTube videos, bang on endlessly about adjustments to LAND surface temperatures (which generally do result in an increase in warming), but they NEVER mention the SEA surface temperature adjustments, which do the opposite, or that the GLOBAL adjustments result in a LOWER rate of warming. This, of course, means that the whole “fraud” accusation falls flat on its face.
    I have been making this point on numerous occasions (under a different moniker) to counter the endless repetition of misinformation by the likes of Watts and Heller and others. It would be helpful if more folk did likewise, otherwise the endless (but baseless) accusation that adjustments fraudulently exaggerate global warming will persist.

  10. Woops. Sorry, ii) above should be:
    ii) adjustments to sea surface temperature data on a global scale REDUCE the rate of warming since 1880 to a far greater extent than land data increases it.

  11. OK, so I watched the Watts video so that the rest of you don’t have to. What really struck me was his lack of professional growth over the decade+ that he has been “analyzing” temperature data.

    To see what I mean, watch about 30 seconds of the video starting here:

    He *still* clearly doesn’t understand the concept of baseline/anomaly processing. Clearly doesn’t understand why the temperature *trend* is more important than absolute temperature values.

  12. caerbannog’s superb comments remind me a lot of things.

    This year, Anthony Watts raged, in a typical head post, about a poor weather station at Anchorage’s international airport, which in his view read an impudent 2 °C too high compared with the other stations around it.

    Luckily for Anthony’s victim, help came from a station nearby: Kenai, located in the middle of nowhere. Kenai is moreover one of these super-pristine Climate Reference Network stations, certainly above any suspicion concerning UHI.
    What about comparing the two?

    I did that using only NOAA’s GHCN daily data set, in order to avoid a possible comparison bias due to different CRN processing of the stations’ raw data.

    Using absolute temperatures lets you understand what Anthony misunderstands all the time:

    Using anomalies might help him, but unluckily they are, for him, ‘Teufelswerk’ (the devil’s work):
    Interesting comparison, however. As all the other USCRN stations are in GHCN daily as well, one can extend this comparison to the whole CRN set vs. all those supposedly horribly UHI-infested GHCN daily stations:

    Hmmh. Interesting comparison again: where is that bloody UHI?
    But… alas! CRN starts in 2004. We need ‘good old data’.
    Years ago, NOAA published a list of 71 ‘well-sited’ USHCN stations, selected by volunteers for Mr Watts’ hobby corner ‘’:

    All these USHCN stations can be found within GHCN daily too, and thus you may compare the average of this selection with that of all CONUS GHCN daily stations (nearly 19,000 for the entire period):

    Wow! I had expected the entire set to show a much higher trend (you know: ‘due to …’), but the result shows the inverse: the 71 ‘well-sited’ stations show on average the higher trend, in °C / decade:
    – 1900-2019: 0.07 > 0.03 ( ± 0.007)
    – 1979-2019: 0.25 > 0.20 ( ± 0.034)
    Now coming back to commenter caerbannog: GHCN daily has no airport flag for stations, but it was easy to select (a vast majority of) the airport stations (about 800), and to generate a monthly time series:

    This is now a really amazing result: even the difference between the ‘well-sited’ stations and those ‘worst-sited’ ones located at airports is incredibly small.

    Ha. We have all known that for quite a long time, but it’s worth a recall :-)