Comparing Temperature Data Sets

In light of Anthony Watts’ latest idiocy comparing GISS and UAH temperature data without bothering to put them on the same scale, I thought it might be interesting to compare different temperature records … but let’s do it right, eh?

There are 5 major sources of global temperature data which are most often referred to. Three of them are estimates of surface temperature, from NASA GISS (Goddard Institute for Space Studies), HadCRU (Hadley Centre/Climatic Research Unit in the U.K.), and NCDC (National Climatic Data Center). The other two are estimates of lower-troposphere temperature, from RSS (Remote Sensing Systems) and UAH (Univ. of Alabama at Huntsville). All are anomaly data, i.e., the difference between temperature at a given time and that during a baseline period. They tend not to be on the same baseline; for GISS the baseline is 1951 to 1980, for HadCRU it’s 1961 to 1990, for NCDC it’s the 20th century, and for satellite data the baseline is 1979 to 1999. Since they use different baselines, they’re on different scales, i.e., each has its own zero point for temperature. To compare them, we need to use the same zero point for all.

They also don’t cover the same time span. HadCRU starts first, beginning in 1850. GISS and NCDC both start in 1880. And the satellite data don’t start until December 1978 (for UAH) or January 1979 (for RSS). You can download the data yourself; links to data sources are found here. The RSS and UAH data therefore cannot be put on the baseline used by any of the surface-temperature data sets, because the satellite record doesn’t cover those periods. Of course we can only compare them for those times when they all have data. And to put them all on the same scale, we’ll have to use a baseline period which is covered by all.
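Putting anomaly series on a common zero point amounts to subtracting each series’ mean over the chosen baseline period. A minimal sketch of the idea (using numpy; the series below are made up for illustration, not the actual data):

```python
import numpy as np

def rebaseline(anom, years, start=1980.0, end=2000.0):
    """Shift an anomaly series so its mean over [start, end) is zero."""
    mask = (years >= start) & (years < end)
    return anom - anom[mask].mean()

# Two made-up series measuring the same thing on different baselines
years = 1979.0 + np.arange(382) / 12.0     # Jan 1979 .. Oct 2010, monthly
signal = 0.016 * (years - years[0])        # a simple warming trend
series_a = signal - 0.10                   # zero point from one baseline
series_b = signal + 0.25                   # same signal, different zero point

# After re-baselining, the constant offsets disappear
a = rebaseline(series_a, years)
b = rebaseline(series_b, years)
```

Once both are re-baselined, the two series coincide: the only difference between them was the choice of zero point.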

All 5 data sets cover the period 1979 to the present, although HadCRU hasn’t yet published their results for November 2010, so the period of common coverage is January 1979 to October 2010. Here’s the raw data (each with its own baseline period):

We can smooth the month-to-month fluctuations by using a 12-month moving average filter, giving this:

Now we can plainly see that they all tell much the same story, in terms of the temperature changes over time. Which is what anomalies are meant to reveal.
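For reference, the 12-month moving average is just a flat 12-point convolution. A quick sketch with synthetic monthly anomalies (not the real data):

```python
import numpy as np

def running_mean(x, window=12):
    """Moving average: each output is the mean of `window` consecutive months."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

# Synthetic monthly anomalies: a linear trend plus month-to-month noise
rng = np.random.default_rng(0)
months = np.arange(382)                    # Jan 1979 .. Oct 2010
anom = 0.0015 * months + rng.normal(0.0, 0.1, months.size)

smooth = running_mean(anom)                # 382 - 12 + 1 = 371 values
```

The smoothed series retains the trend but suppresses the monthly fluctuations, which is exactly what the filter is for.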

But we can also see the result of using different baselines. GISS and NCDC are nearly the same, because the average for the GISS baseline period (1951-1980) is nearly the same as that for the NCDC baseline (20th century). HadCRUT3v is lower because its baseline period (1961-1990) is warmer (so it’s compared to a warmer reference). Finally, the satellite data sets are lowest because their baseline period is warmest.

For proper comparison we should choose a common baseline for all five data sets. I chose the period 1980.0 to 2000.0, which gives this for the monthly data:

and this for the 12-month running means:

Note that now the different data sets are in much closer numerical agreement. They all show warming during the coverage period, and they all show fluctuations superimposed on the warming trend. But the satellite data sets show greater fluctuations, especially during el Nino events (e.g. 1998) and la Nina events (2008), and during the coolings associated with volcanic eruptions (El Chichón in the early 1980s and Mt. Pinatubo in the early 1990s).

Therefore the most prominent pattern in the data appears to be that which is shared by all: an overall warming trend, and warming in response to el Nino, cooling in response to la Nina and volcanic eruptions. The 2nd-most prominent pattern appears to be the difference between the satellite data sets (RSS and UAH) and the surface-temperature data sets (GISS, HadCRUT3v, and NCDC).

We can test that idea by performing a principal components analysis of these data sets. The 1st principal component accounts for 90% of the variance of the data, so it dominates the fluctuations. It turns out to be nearly equal to the average of all five data sets, and the signal associated with it (the 1st empirical orthogonal function or EOF) is, just as we expected, the warming-with-fluctuations which is common to all (I’ve scaled it so that it’s on a “temperature” scale):

All 5 data sets agree: the globe is warming.
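The PCA itself can be sketched in a few lines: center each series (but don’t normalize, since they all measure temperature), take the SVD, and read off loadings, PC time series, and variance fractions. This is a generic sketch with synthetic stand-in series, not the actual analysis code:

```python
import numpy as np

def pca(data):
    """PCA of a (n_times, n_series) matrix via SVD.
    Columns are centered but not normalized.
    Returns loadings (unit vectors, one row per PC), the PC time
    series (scores), and the fraction of variance per component."""
    X = data - data.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt, U * s, s**2 / np.sum(s**2)

# Five synthetic series sharing one dominant signal plus small independent noise
rng = np.random.default_rng(1)
t = np.arange(382) / 12.0
common = 0.016 * t + 0.15 * np.sin(2 * np.pi * t / 4.0)   # trend + fluctuations
data = np.column_stack([common + rng.normal(0, 0.03, t.size) for _ in range(5)])

loadings, scores, var_frac = pca(data)
```

With a strong shared signal, the first PC dominates the variance and its loadings all have the same sign, just as with the five temperature records.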

The 2nd principal component accounts for 7% of the total variance, which is most of the remainder after accounting for the 1st principal component, and confirms our intuition that the 2nd-most prominent pattern is the difference between satellite and surface-temperature data. Here’s the actual 2nd principal component vector (the “loadings”):

GISS: -0.413334
HadCRUT3v: -0.339689
NCDC: -0.420948
RSS: +0.457027
UAH: +0.572448

Note that the satellite data sets have positive coefficients while the surface-temperature data sets have negative coefficients. Hence the EOF associated with this PC is very similar to the difference between the satellite average and the surface-temperature average, and looks like this:

We can compare that to what results from subtracting the average of surface temperature estimates from the average of satellite measurements:

The biggest difference between the satellite-minus-surface data and PC#2 is that PC#2 shows an additional downward trend. This is mainly because one of the satellite data sets (UAH) shows an overall trend which is decidedly less than that of the other data sets.

We can plainly see the highs during the 1998 and 2010 el Ninos, and the lows during the 2008 la Nina as well as the volcanic coolings in the early 1980s and early 1990s. This indicates that the satellite data (i.e., the lower-troposphere temperature) respond more strongly to the influence of el Nino/la Nina and to volcanic eruptions than the surface temperature does.

An interesting result is that for PC#5:

Although it accounts for the least total variance of the data (a mere 0.3%), it shows fluctuations which suggest an annual cycle. Its presence is confirmed by a Fourier analysis of PC#5:

We see a peak at frequency 1 cycle/yr (period 1 yr) together with its harmonics at 2, 3, and 4 cycles/yr. So, not only is there an annual cycle in PC#5, its form is not simply sinusoidal. We can see the cycle shape by making a folded plot (a.k.a. “phase diagram”), graphing temperature not as a function of time but as a function of phase, i.e., time of year (as is customary, I’ve plotted two full cycles of phase):

Here is the actual principal components vector (the “loadings”):

GISS: -0.087689
HadCRUT3v: -0.694439
NCDC: +0.713031
RSS: +0.013106
UAH: +0.038469

All but 2 of the coefficients are very small, so PC#5 turns out to be mainly the difference between NCDC and HadCRUT3v. Hence we see that their difference shows an annual cycle, because during this time span NCDC is warmer in winter and cooler in summer than HadCRUT3v, although there’s also a “dip” in January-February compared to December and March.
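Folding a monthly series by phase, as in the phase diagram above, can be sketched like this (generic code, synthetic input):

```python
import numpy as np

def fold(values, years, nbins=12):
    """Average values by phase of year: the 'folded' annual cycle.
    The small epsilon guards against floating-point edge effects in binning."""
    phase = years % 1.0
    bins = np.floor(phase * nbins + 1e-6).astype(int) % nbins
    return np.array([values[bins == b].mean() for b in range(nbins)])

# A pure annual sine: folding should recover one cycle of the sine
years = np.arange(240) / 12.0                 # 20 years of monthly data
cycle = np.sin(2 * np.pi * years)
folded = fold(cycle, years)
```

For a pure annual sine the folded curve is just one cycle of the sine; for PC#5 the folded curve reveals the non-sinusoidal shape, including the January-February “dip.”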

This illustrates that although the choice of baseline period makes no difference when computing the trend (i.e., the rate of global warming), it does make a difference when estimating the annual (seasonal) cycle. Computing anomalies not only sets the “zero point” of temperature to the baseline average, it also removes the annual cycle from the data. But it removes the average annual cycle during the baseline period. If the annual cycle changes, then the difference between “present” and “baseline” annual cycles will remain — a “residual” annual cycle. PC#5 shows that the residual annual cycles in NCDC and HadCRUT3v are different — hence a difference in annual-cycle “remnants” is found in PC#5.

A point of much interest is the trend, i.e., the warming rate, shown by each series. We can compute them for each data series separately, and also compute uncertainty levels for those estimates (corrected for the influence of autocorrelation; confidence intervals are 2-sigma):

They’re all close, all within each other’s confidence intervals, and they’re all definitely positive (warming). However, the UAH trend estimate is visibly lower than that of the others — if any of the series should be called the “odd man out,” it’s the UAH data.
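One standard autocorrelation correction (a sketch only; the post doesn’t specify its exact method) inflates the ordinary least-squares standard error by the AR(1) factor sqrt((1+r)/(1-r)), where r is the lag-1 autocorrelation of the residuals:

```python
import numpy as np

def trend_with_ci(y, t):
    """OLS slope and a 2-sigma confidence half-width inflated for
    lag-1 autocorrelation of the residuals (the common AR(1) correction;
    other, stricter corrections exist)."""
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n = y.size
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
    return slope, 2.0 * se * np.sqrt((1.0 + r) / (1.0 - r))

# Synthetic monthly series: known trend of 0.017 deg/yr plus noise
rng = np.random.default_rng(2)
t = np.arange(382) / 12.0
y = 0.017 * t + rng.normal(0.0, 0.1, t.size)

slope, ci = trend_with_ci(y, t)
```

With ~30 years of monthly data the confidence interval is narrow enough to establish a positive trend; with only 10 years it balloons, which is exactly the point made below about decade-long trends.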

For some reason “the Blackboard” has an obsession with trends over the most recent 10-year period. Here they are (plotted in blue), compared to the trend over the entire time span common to all data sets (plotted in red):

None of the 10-year trends is “statistically significant” but that’s only because the uncertainties are so large — 10 years isn’t long enough to determine the warming trend with sufficient precision. Note that for each data set, the full-sample (about 30 years) trend is within the confidence interval of the 10-year trend — so there’s no evidence, from any of the data sets, that the trend over the last decade is different from the modern global warming trend.

When one compares the different global temperature data sets correctly, one result emerges more strongly than any other: that they agree. This puts the lie (yes, lie) to claims of “fraud” by climate scientists to rig the surface temperature data.

And what do all the data sets agree on? Mainly this: global warming.


Here are the data, for their period of overlap, as an Excel file:



49 responses to “Comparing Temperature Data Sets”

  1. Very interesting, indeed.

  2. Here’s a dream come true for those who have unending problems with the GISS analysis: behold, the world without GISS!

    Oh wait, it’s the same world…

  3. Thank you–this will be linked, as this covers ground that I have to go over and over and over and. . . OK, we all get the picture, I’m sure. (And that’s just with one guy. . . !)

    And the point about the “residual annual cycle” is (for me at least) intriguingly unexpected, though logical.

    Off-topic, but I’ve a new article out today, sort of a thematic science summary piece. Comments, suggestions and corrections are particularly solicited as this is a little more scientifically ambitious than previous pieces I’ve done, and chances for significant scientific screw-ups by this layman are presumably enhanced accordingly. So let me know, before I join “the Dark Side” to misinform, too–albeit inadvertently. . .

  4. A general question regarding these data sets and the buoy ship bias, discussed here:

    Does this bias affect HadCRU, GISS, and NCDC evenly?

    [Response: I don’t think so, but I’m not sure.]

  5. Cool post Tamino. A couple of questions. Should the variance of the data sets be standardized? Would this make any difference?

    [Response: Since they’re all measuring the same thing (temperature), I think not. That’s why I ran the PCA without normalizing the data (but yes, I did center them). I also did PCA including normalization, and I detrended the series before PCA, neither had much effect (the order of the low-variance PCs was changed but the PCs themselves were nearly identical except of course for trend).]

  6. If you weight the 5 different means equally (and assume the sets are independent) and perform a t-test, you probably do get 2-sigma significance. Guessing the data from the image gives p = .03. Obviously, this isn’t going to convince anyone who doesn’t understand the baseline effect. The samples also aren’t independent; the significant noise is natural variation and not measurement error. So, you can say with confidence that it’s warmed over the past decade.

    Also, why compute the trend with a linear fit? You have plenty of data to do so with a high degree of certainty. Fitting the GISS series, estimating error by bootstrapping, the linear component is 0.003 +/- 0.08 degrees/century and the quadratic component is 0.44+/- 0.07 degrees/century^2.

    From the GISS data, you can say “the earth is currently warming at 1 degree/century, and it’ll be warming twice that fast a year from now if we don’t do anything” with a good deal of confidence. I think there’s some value in getting beyond the slam-your-head-on-the-wall “debate” over whether warming exists and communicating that it’s getting faster each year we do nothing. Note that this is only using the yearly GISS data through 2007 so the numbers are too low.

    [Response: I find that a quadratic trend is not statistically significant.]

  7. Tamino,

    The focus on 10 year trends in this particular case was simply used to highlight the irony that UAH and GISTemp have roughly the same trend over the period, given the focus of Watts and others on comparing the two every month.

    I agree with you that the last decade really doesn’t tell you that much about the long term trends, given the size of the error bars, but it does allow for some interesting analysis of the difference between individual temperature records during that period (e.g. ENSO responses of satellites vs. surface measurements, effects of different ways of treating arctic temperatures, etc.).

    Regardless, there is a good reason why climate science tends to focus on long-term trends!

  8. Horatio Algeranon

    As shown on the above graphic, you can’t be very certain about short term trends but, by all indications, you can be certain that someone (somewhere) will continue the obsession with them.

    …just as you can be certain that someone somewhere will continue the confusion over different baselines.

    …and continue to insist that arctic sea ice is making a recovery

    …and continue to claim that the atmospheric CO2 increase is not due to humans.

    It appears that Benjamin Franklin was actually wrong: taxes and death are not the only things in life that are certain.

    [Response: Reminds me of the Einstein quote, that only two things are infinite — the universe and human stupidity — and we’re not sure about the universe.

    But as Zeke Hausfather has pointed out, the computation of 10-year trends in the latest “Blackboard” post is to illustrate their irrelevance.]

    • Horatio Algeranon

      [the computation of 10-year trends in the latest “Blackboard” post is to illustrate their irrelevance.]

      To illustrate their “irrelevance” (with regard to long-term trends specifically and climate generally) it would seem important to show (graphically, as above) or at the very least specify the uncertainty associated with each calculated trend — as opposed to just giving the “bare” trends.

      When the uncertainties are large (relative to the size of the trends and the differences between them), it’s not clear how one can draw any meaningful comparisons between the (apparent) trends — other than to point out, as you have, that they are “all within each others’ confidence intervals” and that “there’s no evidence, from any of the data sets, that the trend over the last decade is different from the modern global warming trend.”

  9. What are the loadings for the first principal component?

    [Response: They are:

    GISS: 0.453319
    HadCRUT3v: 0.399349
    NCDC: 0.410204
    RSS: 0.495564
    UAH: 0.470288

    The loadings are “normalized” in the sense that their sum-of-squares equals 1 (so the PC is a unit vector). But for plotting the 1st EOF, I re-normalized them so the sum equals 1 — this doesn’t affect the *shape* of the EOF but does affect its variance (which is what I meant by “I’ve scaled it so that it’s on a “temperature” scale”).]

  10. Does anyone know of some decent articles investigating or reviewing why the microwave sounders are so sensitive to ENSO and the volcanic forcing?
    Is there something specific about some of those channels that amplify the changes? Or maybe the surface record is biased somehow? My gut would lead me toward the former…but I’d rather learn from the experts than listen to my gut ;)

    • I was starting to look into this semi-formally a while ago – I suspect the real problem is that the microwave units are still not fully accounting for long-term drift/calibration issues properly. ENSO is short-term, and so they’re seeing the predicted mid-tropospheric amplification there (if you do a tropical cut you’ll see a greatly enhanced amplification for the ENSO/volcanic short-term peaks and troughs in the satellite data vs surface).

      There’s no reason I can think of that long-term and short-term response of the troposphere to warming/cooling events (at least between the scale of a few years to a few decades that’s the issue here) would be any different. So it seems to me highly likely that the satellite data is still not properly calibrated to determine long-term trends. It was corrected a few years back with the introduction of the RSS analysis, but I suspect even that is not properly accounting for things. There are other satellite analyses that have found different numbers but I’ve not had a chance to look into that recently at all…

      • Gavin's Pussycat

        Arthur, do you mean that we’re looking at the combination of a spurious trend contribution and a wrong scale factor?

      • Well, not necessarily a “wrong” scale factor – the satellites (at least MSU-based) measure something different from surface temperature – they measure emissions from throughout the atmosphere, and use various techniques to figure out the contributions from different altitudes. The trends usually shown as UAH or RSS are nominally from the “lower troposphere”, but that’s still not the surface. Because lapse rate (particularly in the tropics) should decrease under warming conditions (this is actually the most important negative feedback), the troposphere should warm faster than the surface.

        We see that in the short-term responses as the above graphs show. Why don’t we see that amplification over the long term in the UAH and RSS records? It seems almost certain this is a long-term calibration issue.

      • You might want to look into the new STAR analysis from NOAA. It’s probably the most sophisticated. Interestingly, its “TMT” channel is warming at the same rate as UAH’s synthetic “TLT” channel.

    • Take a look at Trenberth and Smith. The basic idea is that increased moist convection lifts a lot of warm water vapor in the tropics (T&S) up into the troposphere at levels where the MSU are sensitive to it (Eli’s ansatz). This can also be seen in a presentation from Wentz.

  11. The data files seem to be in an unfriendly form. Might it be possible to get the data from you in a friendly form so I can try some Python PCA code I have on the data?

    [Response: I’ve added an update (at the end of the post) with a link to an Excel file containing the data. It covers the period of common coverage, and the columns are labeled (with obvious names like “giss” and “rss”). Labels starting with “z” (like “zgiss” and “zrss”) are the data reset to baseline 1980.0 to 2000.0.]

    • GregH

      I regularly update a csv file with the 5 temperature anomalies as well as NINO34, SSTA, PDO and AMO each month at this link.

      The data file includes monthly values for each series since 1880. I also have a series of RClimate tools using R to help fellow citizen scientists do their own climate trend analysis (link).

  12. This doesn’t affect your results, but I think you have your terminology switched around: The term “eof” refers to the loading (which is usually a spatial pattern in these analyses), and “pc” can mean either the time-series or the combination of time-series + eof.

  13. Pete Dunkelberg

    Thanks for laying all this out. I hope I can learn a couple of even more basic things. Isn’t there another temperature database from NOAA, slightly different from NASA GISS for some reason? And isn’t there also a Japanese database? It would be very nice to have another country represented.

  14. PCA is confusing enough when you apply it to simple, low-dimensional data. When the “data points” are actually time series, it’s hard to get a mental picture of what’s going on :(

  15. Thank you Tamino! I find this analysis very educational, myself coming from a quite different speciality. I wonder if the Fourier method of harmonics breakdown would have any practical use in analyzing the annual Arctic sea ice extent cycle and its (future) deviations.
    I should also think a compact 3D presentation of the various climate datasets would be impressing. Make it rotatable even? Any takers?

  16. Tamino, great post, any chance of linking to the R code as well? (The graphs look R-ish to me).

  17. The last plot is definitely the best by far. It is particularly handy and puts the trends calculated from different time series into perfect context to understand why the “skeptics” are wrong.

  18. I’m trying to duplicate your results, and am having an issue. I’m using Python, and created a 5×382 array to hold my data. I stuff in the five temp series (after subtracting the mean), and compute a covariance matrix. I then call an SVD on the covariance matrix, and compute my PCs by v*matrix, where matrix is the 5×382 array. The scaling from the s vector says the first PC has 90 percent of the variance, so far so good, but the PC I calculate is basically off in scale. So that argues I forgot a step. Do you have an idea of what I may have forgotten? I could post my Python code if that would help.

    [Response: As I mentioned earlier, I re-scaled the 1st PC (i.e., I inserted an extra step) so that it would be on a “temperature” scale. Try dividing your 1st PC by the square root of 5, see whether it matches mine.]

  19. Michael Hauber

    Another point worth mentioning when comparing temperature series is that there was some sort of instrument change in the satellite data around 1992. If you plot RSS – UAH there is a clear step change in that year. The trend for UAH since 1992 is much closer to the others than for the period since 1980.

    [Response: There have been a dozen or so satellites which combine to make the lower-troposphere data record, and merging their data is not a trivial problem. RSS and UAH disagree on how to merge the data, esp. for the step change you mention.]

  20. Is there an active open thread for off topic questions? I have some questions based on a couple posts at Coby Beck’s blog.

    [Response: I’ll start a new one.]

  21. And what do all the data sets agree on? Mainly this: global warming.

    Oh really? If you take into account the two major volcanic eruptions in the beginning of the observed period which had an approximately five year cooling effect each you will find that there has actually been almost no warming since 1980.

    [Response: I HAVE taken into account the El Chichón and Mt. Pinatubo volcanic explosions, as well as the El Nino/Southern Oscillation, and the warming is still there and just as strong.

    Here’s my guess: you haven’t even analyzed the data, and you probably don’t have a clue how to. You just repeated some bullshit you heard from some other bullshitter.]

    • Oh, that’s funny. Something caused cooling, so the warming is just…. our eyes playing tricks? Grumpy must be grumpy because he can’t think logically.

      • Or to quote the traditional British Civil Service school of Obstructionism, ‘Based on your conclusions, I disagree with your premises’.

  22. Thanks for the analysis, Tamino. I’m curious as to how you corrected the CIs for autocorrelation. Or could you point me toward a particular resource?

  23. I’m trying to find out how Spencer defines the different regions that are listed in his lower troposphere dataset. Does anyone know?

    As an aside, I’d sure like to see these simple time series ASCII files made available as netCDF with full metadata. Heck of a lot more useful that way.

  24. I find this comparison of data sets very interesting as I have been doing the same thing myself for a few years now. I find it amazing that some people still don’t understand that the various sets are all relative to different base periods. However I disagree that it is not possible to put UAH and RSS on the same base line as the older series. All you have to do is find the differential between the older set and the satellite series for the same period. For example I do my comparison between HadCRUT3 and UAH by finding the average HadCRUT3 anomaly for the UAH base period and adding that to the UAH anomaly, and so on. I do all of my comparisons relative to HadCRUT3 and it produces identical relative anomalies to your own. What is needed of course, to avoid future confusion, is to agree on a common base period for all of the datasets.
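    A minimal sketch of the offset method described above (using numpy; the series names are hypothetical stand-ins, not the actual data):

```python
import numpy as np

def match_baseline(target, ref, overlap):
    """Shift `target` so its mean over the overlap period equals the
    mean of `ref` over the same period, putting both on one zero point."""
    return target + (ref[overlap].mean() - target[overlap].mean())

# Made-up stand-ins for two anomaly series on different base periods
rng = np.random.default_rng(3)
signal = rng.normal(0.0, 0.2, 120)
hadcrut_like = signal - 0.10      # anomalies vs. one baseline
uah_like = signal + 0.25          # same signal, different zero point

overlap = np.ones(signal.size, dtype=bool)   # full overlap in this sketch
adjusted = match_baseline(uah_like, hadcrut_like, overlap)
```

After the shift, the two series share a zero point, which is all the "differential" adjustment does.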

  25. Yes, but they are relative to 1980 to 2000. My point was that it was possible to standardise them to 1961 to 1990.
    Also, what I meant was that the originators of the datasets should standardise their base periods to avoid the necessity of doing conversions, and confusion amongst those who are unaware of the differing base periods.

  26. “the originators of the datasets should standardise their base periods”

    But then there would have been ever so much less hilarity over at Anthony’s place, back in the ‘what’s an anomaly?’ days..

    Sorry, Ray – the people who do and report serious science, typically aren’t all that concerned about whether it will be understood by ‘auditors’ who can’t bother to understand the data and can’t add a constant to a dataset. Nor should they need to be.

  27. I don’t know if it signifies anything, but the current satellite readings are moving up quite rapidly compared to January to March timeframe. We seem to be coming out of this La Nina very quickly. The UAH report for June should be very interesting… we might see the second or third highest June anomaly for the satellite readings. Only the El Nino years of 2010 and 2007 would be higher than June 2011, leading to a surprising result for a year that started with a reasonably strong La Nina.

    As we get data on more ENSO cycles, it will be interesting to see the statistics for the impacts of these events. In particular, the satellite temperature models seem more sensitive to the ENSO cycles.

  28. Oops, forgot about 1998…. So it appears that June 2011 could be in line for the fourth highest satellite reading behind the El Nino years of 1998, 2010, and 2007.

  29. Zinfan94,
    What about 2002, 1991 and 2005, with June figures all above 2007:
    1998 6 0.52
    2010 6 0.39
    2002 6 0.32
    1991 6 0.28
    2005 6 0.22
    2007 6 0.16
    My estimate is somewhere between 2007 and 2005.

    • I don’t have the ability to track it the way you guys do. I’ve been following it this way on the NOAA website, year-to-date:

      January, 2011 – 17th warmest on record
      Jan thru Feb – 16th
      Jan thru Mar – 14th
      Jan thru Apr – 14th
      Jan thru May – 12th

      Can June move it in range of top 10?

  30. Tamino: I am impressed by your efforts to get an accurate comparison of major temperature curves in use. But I still would not have bothered to respond to you until I noticed your graph “Baseline 1980.0 to 2000.0“ that combines GISS, HadCRUT3v, NCDC, RSS and UAH all in common coordinates. I have used it as a baseline for interpreting the global temperature history during the satellite era. I find it fascinating that all five sources plot essentially identically when the scale and baseline are correctly adjusted. This surprised me because with some versions of the land-based data I have seen there are systematic deviations from satellite views. It does not seem to be the case with the versions that you have at your disposal. Unfortunately your own interpretation of what is going on is misled by some a priori ideas that prevent you from seeing the important facts contained in these data. You will see what I mean when I finish my analysis. The first thing you should know is that any further data manipulation, such as PC analysis or using a 12 month running mean, contributes nothing to an understanding of these data and actually destroys information. For this reason I started by using a magic marker just wide enough to incorporate the monthly variability that is more or less constant throughout this period. This variability must be considered real and lone spikes sticking out above and below it are most likely erroneous readings. I do not use the magic marker on the 1998 super El Nino because of the large random errors in ground based data. Once the trend is marked out with a magic marker ENSO peaks and valleys become recognizable. Peaks you see are El Ninos, valleys in between are La Ninas. There is a set of them in the eighties and nineties and a new set starting on the right side of the graph. But the super El Nino of 1998 is a free agent, not part of ENSO, and most likely caused by a storm surge near the Indo-Pacific Warm Pool. 
    ENSO itself is a physical oscillation of ocean water from side to side in the equatorial Pacific. An El Nino peak is formed when an El Nino wave crosses the Pacific along the equatorial countercurrent, runs ashore in South America, spreads out north and south, and warms the air. A La Nina is formed when an El Nino wave retreats, ocean level drops by half a meter behind it, and cold water from below wells up to fill the gap. Contrary to what you have been told ENSO oscillations are not influenced by any volcanic cooling. The myth of Pinatubo cooling started with Best in 1996. He found that Pinatubo eruption was followed by a global temperature drop, did not know anything about ENSO oscillations, and pronounced the observed cooling to be volcanic. Everyone has followed him. But he was also aware that the eruption of El Chichon was not followed by cooling, a contradictory behavior he could not explain. The explanation is actually simple: there is no such thing as volcanic cooling of the troposphere. Volcanoes eject their aerosol cloud directly into the stratosphere where they cause warming at first and then cooling a few years later. Pinatubo erupted when a La Nina cooling was about to start, and that particular La Nina that formed was pronounced to be Pinatubo cooling. But El Chichon erupted when an El Nino warming was about to start and he could not find any convenient La Nina that could simulate volcanic cooling. Knowing now that there is no contamination from volcanic cooling mixed in with ENSO oscillations we can go ahead and place colored dots in the middle of each line connecting an El Nino peak and its adjacent La Nina valley. Connecting the dots should theoretically give a horizontal straight line. In practice there is some irregularity and it is necessary to draw a straight horizontal line that best fits the dots. That would be the theoretical center line of a wave train that these oscillations are a part of.
    This procedure reveals two horizontal but disjoint straight lines to the left and to the right of the super El Nino. They must not be joined by any computer-fitted curve. Between them is a transition zone that lifts global temperature by a third of a degree. The warming itself is due to the huge amount of warm water the super El Nino brought across the ocean. This, and not some greenhouse effect, is responsible for the very warm first decade of our century. A third of a degree warming is half of what is allotted to the entire twentieth century. This is the only real warming within the last 31 years. Checking pre-satellite era temperatures we find that the first half of twentieth century warming took place between 1910 and the start of the Second World War. There was no warming from the end of that period until 1998, a stretch of more than fifty years, while carbon dioxide kept increasing. Anyone wishing to claim the existence of the greenhouse effect must explain the absence of warming during this fifty year stretch. Beyond that there is more trouble for global warming in the Arctic. It turns out that Arctic warming also is not caused by the greenhouse effect. That is because it had a sudden start at the beginning of the twentieth century but there was no concurrent increase of carbon dioxide in the air. The real cause of Arctic warming is a rearrangement of the North Atlantic current system at the turn of the century that brought warm currents like the Gulf Stream into the Arctic Ocean. Arctic warming has been a showcase of the existence of global warming. With the greenhouse effect eliminated as its cause none of these observations count as proving that anthropogenic global warming is real.

    [Response: Ordinarily I simply delete comments from the deluded. But yours is so deluded, it’s actually entertaining. Apparently you can make this stuff up.]