Global Temperature in the Air Up There

I’m talking about the temperature in the atmosphere, and specifically the lower troposphere. The troposphere is where most of our weather happens, and for the lower troposphere we have temperature estimates inferred from satellites which measure “microwave brightness,” as well as thermometer measurements from balloons which carry instruments to high altitude and radio their data back to earth (radiosonde data).

By the way: the idea that transforming satellite data into temperature estimates is as simple as sticking a thermometer in your ear (like they do in hospitals these days) is extremely stupid. It’s a lot more complicated than that, which is why different teams that process the satellite data get different results, and those teams keep updating their data with new versions that differ noticeably from previous versions. But it’s the kind of story that sounds convincing to idiots like Ted Cruz.


There are two main teams processing the satellite data: RSS (Remote Sensing Systems) and UAH (Univ. of Alabama at Huntsville). Let’s compare their global average TLT (Temperature in the Lower Troposphere) to what we get from radiosondes (balloon-borne instruments), for which I’ll use the global average data from RATPAC (Radiosonde Atmospheric Temperature Product for Assessing Climate).

First let’s figure out where the “lower troposphere” really is. Here’s a graph from RSS showing the “weighting functions” of the various products they produce:

The lowest of all (nearest the ground) is TLT (temperature of the lower troposphere), and it’s almost entirely below 10km altitude. That’s a pressure level of about 300 mb. That means that if we want to compare this satellite product with balloon data, we should use the RATPAC data for the range from 850 mb to 300 mb; that’s the one closest to the region the satellites cover (and it’s quite close, actually). Fortunately, RATPAC provides a global estimate for the 850-300 mb level, so that’s what we’ll compare the satellite data to.
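If you want to build such a layer average yourself, here's a minimal sketch of a pressure-weighted mean over the 850-300 mb levels. The pressure levels and anomaly values below are made-up placeholders, not actual RATPAC numbers (and since RATPAC already supplies the 850-300 mb average, in practice you'd just use theirs):

```python
import numpy as np

# Hypothetical radiosonde temperature anomalies (deg C) at standard pressure
# levels; placeholder values, NOT actual RATPAC data.
levels_mb = np.array([850, 700, 500, 400, 300])
anomalies = np.array([0.42, 0.38, 0.35, 0.31, 0.25])

# Weight each level by the pressure thickness it represents, using midpoints
# between adjacent levels as the layer boundaries.
edges = np.concatenate(([levels_mb[0]],
                        (levels_mb[:-1] + levels_mb[1:]) / 2,
                        [levels_mb[-1]]))
thickness = np.abs(np.diff(edges))

layer_mean = np.sum(anomalies * thickness) / np.sum(thickness)
print(f"850-300 mb layer-mean anomaly: {layer_mean:.2f} °C")
```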

Here are all three products, showing annual averages for each:

The data from RSS and RATPAC are pretty close. The “odd man out” is the UAH data, which shows a distinctly lower trend than either the RSS satellite data or the RATPAC balloon data.

We can also compare the differences between RSS and RATPAC, and between UAH and RATPAC:

I’ve included linear trend lines (from least squares regression). Note that the RSS minus RATPAC data show very little trend, while the UAH minus RATPAC data show a strong downward trend; UAH isn’t rising nearly as fast as either RSS or RATPAC.

The trend rates for individual products are very close for RSS and RATPAC. The estimated trend (since 1979, when the satellite data begin) for RATPAC data is warming at 1.98 ± 0.5 °C/century, for RSS it’s 1.97 ± 0.42 °C/century. But for UAH it’s merely 1.27 ± 0.4 °C/century. Odd man out, indeed.
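If you want to check numbers like these yourself, the recipe is straightforward. Here's a minimal sketch of the trend calculation (ordinary least squares, with the standard error of the slope); the input below is a random placeholder series, not the actual RATPAC/RSS/UAH data, and a serious analysis would also account for autocorrelation in the residuals:

```python
import numpy as np

def ols_trend(years, anomalies):
    """Return (slope, stderr of slope) in deg C per year, by least squares."""
    x = np.asarray(years, dtype=float)
    y = np.asarray(anomalies, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    # Naive standard error, assuming uncorrelated residuals.
    se = np.sqrt(np.sum(resid**2) / (len(x) - 2) / np.sum((x - x.mean())**2))
    return slope, se

# Placeholder annual anomalies, NOT actual data:
years = np.arange(1979, 2018)
rng = np.random.default_rng(0)
series = 0.02 * (years - 1979) + rng.normal(0.0, 0.1, len(years))

slope, se = ols_trend(years, series)
print(f"trend: {100 * slope:.2f} ± {2 * 100 * se:.2f} °C/century (2σ)")
```

The same function applied to a difference series (UAH minus RATPAC, say) gives the trend of the disagreement directly.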

Yet for some reason Roy Spencer (one of the scientists who produces the UAH data) keeps saying that his product (UAH) agrees with balloon data better than the RSS product. I have grave doubts about how he pieces together the balloon data to make this comparison.


This blog is made possible by readers like you; join others by donating at My Wee Dragon.


15 responses to “Global Temperature in the Air Up There”

  1. The methodology for v6 of UAH was published in the Asia-Pacific Journal of Atmospheric Sciences, a Korean journal by the looks of it. Does anyone know what the reputation of that journal is? Mears and Wentz (2017) did reference that UAH article, but only for comparison (observing that UAH v5.6 fits other data better, especially after 2000).

    The UAH article has been cited 3 times (once by Spencer himself) and the RSS article has been cited 6 times. I’m not sure if this is a useful comparison, though.

    Mears has been quoted as saying that the satellite data are the most uncertain and that the surface data have the least error, and so should be used for policy decisions. The deniers cling to the UAH TLT data, as that’s all they have left.

  2. Tamino,

    I’m curious: are those RSS weighting functions truncated Gaussians, or something else? They don’t look like Gammas, which is what I’d have thought would be the natural first choice. But they may well have good physical reasons for what they do, or the alternatives may not matter.

    Thanks!
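    (To make the two candidate shapes concrete, here’s a quick sketch with made-up parameters; these are not the actual RSS weighting functions:)

```python
import numpy as np

# Quick sketch of two candidate weighting-function shapes vs. altitude.
# All parameters are made up for illustration; these are NOT the RSS functions.
z = np.linspace(0.0, 15.0, 301)   # altitude (km)

# Truncated Gaussian: a Gaussian in altitude, cut off at the surface.
trunc_gauss = np.exp(-0.5 * ((z - 2.5) / 2.0) ** 2)
trunc_gauss /= trunc_gauss.max()

# Gamma-like shape: zero at the surface, rapid rise, long upper tail.
gamma_like = z ** 1.5 * np.exp(-z / 2.0)
gamma_like /= gamma_like.max()

for zi in [0, 2, 4, 6, 8, 10, 12]:
    i = int(np.argmin(np.abs(z - zi)))
    print(f"z = {zi:2d} km   truncated-Gaussian: {trunc_gauss[i]:.2f}   "
          f"gamma-like: {gamma_like[i]:.2f}")
```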

  3. You might look into the pseudoscience that’s being advertised along with your update email. Scamvertising like:
    “The five foods that are killing your brain”

    [edit: let’s not repeat those links]

    and
    “Drink this before bed, watch your body melt fat like crazy”

    [edit: let’s not repeat those links]

    and such. I don’t know where that dreck is coming from. But they’re clearly searching for credulous readers. Bad company to end up in.

    [Response: I have no control over the ads wordpress puts in. Pity.]

    • I think the links “recommended by powerinbox” might differ for different email recipients. The “links” I got for this post (without URLs) are:

      “Can a person remember being born?” (How Stuff Works)

      “What It’s Like to Fly—And Stall—In the Icon A5 Plane” (Wired)

      “What people in 1900 thought the year 2000 would look like” (The Washington Post)

      • I don’t see any ads at all, probably because I have a WordPress account myself. Perhaps being a ‘fake blogger’ could be a countermeasure to get rid of the dreck?

      • An ad blocker for your browser will prevent a lot of ad material from showing up (such as the intermittent ads on YouTube vids).

  4. RSS and UAH align well up to 1998 and again after around 2008; they diverge in the period 1998 – 2008. My guess is that the NOAA-15 satellite is being overcorrected for diurnal drift in the UAH series.

    So there might be a rather simple explanation for the divergence between UAH and the other series, including radiosondes and water vapor.

  5. Good post.
    It’s quite obvious that UAH’s choice of methods and data isn’t supported by independent evidence in the AMSU era (starting around the year 2000).

    I have made a similar comparison using an average of all third-generation reanalyses. I also took the trouble to make TLT-weighted data, although the difference from the 850-300 mbar layer is very small.

    The picture is similar to the RATPAC comparison. The reanalysis TLT almost nails the RSS trend, but UAH is 0.06 °C/decade too low.

    Both of these graphs also shed some light on the main issue of the AMSU era: the NOAA-15 vs. NOAA-14 controversy.
    The last MSU satellite, NOAA-14, and the first AMSU satellite, NOAA-15, largely disagree over their overlap (1999-2005).
    UAH simply “believes” that NOAA-15 is right, discards all NOAA-14 data from 2001 onward, and adjusts the early NOAA-14 data down “by hand” to fit what they believe is right.
    RSS acts in a scientifically defensible way: they can’t find an error in either satellite, so they keep both and split the error. RSS wants their dataset to be independent of reanalyses and radiosondes, so they don’t use such data for guidance when it comes to significant choices.
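    (As a toy illustration of what “split the error” means: when two records disagree over their overlap, move each one halfway toward the other. The numbers below are invented, not actual NOAA-14/NOAA-15 data, and the real RSS merging procedure is of course far more involved:)

```python
import numpy as np

# Toy illustration of handling two overlapping records that disagree.
# Invented numbers, NOT actual NOAA-14 / NOAA-15 data.
sat_old = np.array([0.10, 0.15, 0.12, 0.18, 0.14])  # e.g. last MSU satellite
sat_new = np.array([0.20, 0.26, 0.21, 0.30, 0.24])  # e.g. first AMSU satellite

offset = np.mean(sat_new - sat_old)
print(f"mean disagreement over the overlap: {offset:.3f} °C")

# "Trust the new one": shift the old record entirely onto the new one.
old_trust_new = sat_old + offset

# "Split the error": no reason to prefer either, so each moves halfway.
old_split = sat_old + offset / 2
new_split = sat_new - offset / 2

print("old record, fully adjusted:  ", np.round(old_trust_new, 3))
print("old record, split-the-error: ", np.round(old_split, 3))
print("new record, split-the-error: ", np.round(new_split, 3))
```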

    However, look at the period 1999-2005 in the comparisons with reanalyses and radiosondes above. UAH drops like a rock, but RSS is only half wrong since they have split the error. Radiosonde and reanalysis data clearly suggest that NOAA-14 is right and NOAA-15 wrong.

    I believe that the divergence between TMT/TLT satellite data and other upper-air data could be more or less reconciled if the scientific community accepts that NOAA-15 is wrong.
    It’s that simple: the AMSU channel 5 (TMT) sensor aboard NOAA-15 versus everything else.
    Spencer and Christy believe that this particular sensor is right and all other upper-air and surface data are wrong, but I guess their thinking is clouded by confirmation bias.

    PS. Satellite data also demonstrate that the AMSU 5 sensor on NOAA-15 is wrong. In the AMSU era we have the neighbouring channels, AMSU 4 and 6, to compare with:
    https://drive.google.com/open?id=0B_dL1shkWewaSkpnOUxBVGNpWm8

    • I analyzed the UAH data in comparison with the RSS and NOAA STAR data in a presentation at the 2017 AGU meeting. I found that they tend to agree after about 2004, but the UAH data were warmer before that, i.e., UAH had a cooling trend relative to the others up to 2004. See my Figures 6a, 6b and 6c in the paper HERE. In Figure 6d, I also compared the earlier UAH v5 with the later v6 and found a similar pattern, in which UAH v6 was warmer than v5 before 2004. Note that I used data for the North Polar region and the TMT data, not the TLT (aka LT).

      I also analyzed the Lower Stratosphere (LS) data and found what appears to be evidence of a bias or shift in the UAH data compared with the RSS and the NOAA STAR data. See my Figures 4a and 4b, along with the cross plots in 3a and 3b.

  6. Roy Spencer is trying to claim that global warming (not sure if he believes there is much of that) is not having much effect on hurricanes, or at least on those that make landfall in the US. In the first edition, he looked only at major hurricanes making landfall in Florida, based on their number and wind strength. Florida is a small part of the globe, of course, but he did improve slightly by charting major landfalling hurricanes for all of the US.

    He missed the other major feature of hurricanes: rainfall. Perhaps that was too scary. He also didn’t include non-major hurricanes and tropical storms, both of which can dump enormous amounts of rain, nor did he include those that didn’t make landfall in the US (there is no reason why making landfall in the US is some critical measure).

    He’s also saying that climate change hasn’t significantly warmed the waters of the Gulf, with 7 of the top 10 warmest years (for Gulf temperatures) occurring before 1970. I don’t think he’s used the correct coordinates, but it does appear that the Gulf was warmer prior to 1880 (though doubtless with much greater uncertainty); I don’t know what hurricane activity was like back then.

    He’s certainly a hardened anthropogenic climate change denier! A darling of the so-called skeptic crowd.

    • @Mike Roberts,

      Limiting looks to land-falling hurricanes is a kind of unhelpful censoring that can destroy patterns of importance, in the same way that the fateful meeting before the launch of the Space Shuttle Challenger featured a slide which plotted a crude histogram of O-ring burn-throughs versus air temperature at launch. (Had they plotted the degree of O-ring burn-through versus temperature, with severing at the top of the scale, they would have gotten more events and seen a clear relationship between extreme low and high temperatures and burn-throughs.) It is also common, I have found, for people to limit looks to the classical definition of hurricane season. There is some evidence that hurricane season is both extending in duration and shifting later, but you’d need to check the literature on that.

      I took a look at this back in 2012 in support of a discussion about the land-falling U.S. hurricanes question, and came up with:

      Why the restriction to the off season? My hunch was that in-season occurrences were buffeted by all kinds of things, like ENSO and storm-storm interactions, leading to over- or underdispersion. So I wanted to sample from a piece of the annual cycle that was more nearly Poisson. That’s the off season.
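      (For the curious, a quick dispersion check looks like this; the yearly counts are invented, not real hurricane data:)

```python
import numpy as np
from scipy import stats

# Invented yearly off-season storm counts; NOT real hurricane data.
counts = np.array([0, 1, 0, 2, 1, 0, 0, 1, 1, 0, 2, 0, 1, 0, 1])

mean = counts.mean()
var = counts.var(ddof=1)
dispersion = var / mean          # ~1 if the counts are Poisson
print(f"dispersion index (variance/mean): {dispersion:.2f}")

# Chi-square dispersion test: sum((x - xbar)^2) / xbar is approximately
# chi-square with n-1 degrees of freedom under the Poisson hypothesis.
n = len(counts)
statistic = (n - 1) * dispersion
p = stats.chi2.sf(statistic, df=n - 1)   # one-sided, testing overdispersion
print(f"p-value: {p:.2f}")
```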

      • I later found this:

        But looking at just major hurricanes that hit the United States is not the right way to gauge their activity. That’s because the U.S. coastline is such a small fraction of the overall Atlantic, Caribbean and Gulf of Mexico, where hurricanes brew and at times hit other countries, scientists said. Looking at just those hurricanes “is like using how much it rained in your region on a given week as a measure of how much it rained across the entire country,” said Texas Tech climate scientist Katharine Hayhoe.

        An Associated Press examination in 2017 of how many major hurricanes formed found that the past 30 years had 90 major hurricanes, an average of three a year from 1988 to 2017. That’s 48 percent more than during the previous 30 years. Scientists use 30-year time periods to take natural cycles into account.

        Read more here: https://www.thenewstribune.com/news/business/article220171985.html#storylink=cpy

  7. rhymeswithgoalie

    By the way: the idea that transforming satellite data into temperature estimates is as simple as sticking a thermometer in your ear (like they do in hospitals these days) is extremely stupid.

    I think Peter Sinclair summarized this nonsense pretty well in 2016:

  8. I found this plot, which I thought I’d lost, giving a fascinating insight into the TLT sausage making. It shows the difference in monthly anomalies for the NH Extratropical land average between UAH beta 5 and beta 4. That beta update was the result of some manual tuning to achieve target trends over Greenland and the Himalayas, described by Roy Spencer here. I believe beta 5 is basically the current final UAH v6 version.

    So we can see that the tuning substantially changed the seasonality, with distinct temporal differences. The period from 1979 up to the late 90s is the MSU era; from the early 2000s to the present is the AMSU era; and in between, the MSU and AMSU data overlap. We can see that the change in seasonality was especially strong in the AMSU era and, interestingly, flipped (I marked the Summer months in orange to make that clear). We can also see that the result will be a reduction in the annual average trend, due to a “step down” at the MSU-to-AMSU transition.

    Having seen that, I thought it would be interesting to compare Summer NH extratropical land averages between the final UAH v6 and Berkeley Earth’s land dataset. You can see there is some reasonable correlation in the high-frequency variation, but the surface trend is much greater. But then if we make a difference plot between the two… suddenly we see a familiar temporal structure. The difference in trend is primarily caused by a rapid step change over the MSU-to-AMSU transition. The timing of the discrepancy clearly demonstrates that it is the UAH data which is in error, and it’s a big error.

    In the global annual average the MSU/AMSU step change is less clear, but I think it’s still there, perhaps reducing the trend by a couple of hundredths of a degree. On a wider point, though: this substantial regional error was introduced as a consequence of trying to correct a different substantial regional error. That really suggests a quite fundamental problem with how they’re doing things.
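    For anyone who wants to poke at this at home, here’s a minimal sketch of the difference-plot-plus-calendar-month recipe. The two series are random placeholders standing in for, say, UAH v6 and Berkeley Earth monthly anomalies; with real data, the by-month averages would expose the seasonal structure described above:

```python
import numpy as np

# Placeholder monthly anomaly series standing in for two real datasets
# (e.g. UAH v6 and Berkeley Earth); random numbers, NOT actual data.
rng = np.random.default_rng(42)
months = np.arange(1979 * 12, 2018 * 12)     # month index since year 0
sat = rng.normal(0.0, 0.2, months.size)
sfc = rng.normal(0.0, 0.2, months.size)

diff = sat - sfc
calendar_month = months % 12                 # 0 = January here

# Average the difference by calendar month, split at a candidate break
# point (a hypothetical MSU-to-AMSU changeover, taken as 2000 here).
split = np.searchsorted(months, 2000 * 12)
for era, sel in [("pre-2000 ", slice(None, split)),
                 ("post-2000", slice(split, None))]:
    by_month = [diff[sel][calendar_month[sel] == m].mean() for m in range(12)]
    print(era, np.round(by_month, 3))
```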

    [Response: Good work.]

    • Paulski0:
      The thing that I find most interesting in your first plot (beta 4 vs beta 5) is that there is a very clear annual cycle in the differences. What’s more, between the two periods (MSU, AMSU) the seasonality flips completely: for MSU, the highlighted JJA values are positive differences, whereas for AMSU the JJA differences are the most negative. The amplitude of the annual cycle is also much larger during the AMSU era. Definitely something odd going on.