People of the Land

Climate deniers hate the surface temperature data sets, but they love to insult ’em. That’s because they show how much the globe has warmed … and that’s something deniers don’t want to admit, not even to themselves. They live in denial of it. They’ll do almost anything to minimize and/or discredit it.

Their favorite argument is to say that all adjustments made to surface temperatures from land-based thermometers are bad and wrong, and they usually throw in a thinly-veiled implication or outright accusation that the scientists who do that are perpetrating a fraud. Never mind that the whole purpose of adjustments is to improve things, that it’s a time-tested and proven procedure in many sciences, or that for most organizations the entire process is transparent (NASA, e.g., makes all the original data, the methodology, even the computer programs they use to do so available online for all to see).

Their 2nd-favorite argument is to lodge similar complaints about the sea surface temperature data. That’s because deniers are not “skeptics” — they assume the answer up front, and their assumption is: if it shows much warming, it must be discredited.

But there’s one surface temperature data set they have a very hard time accusing: the Berkeley Earth Surface Temperature data. The effort was organized by Richard Muller, a Berkeley physicist who had heard the arguments and accusations and was himself skeptical. So, he decided to go back to the original data and process it by the most advanced methodology possible, with complete transparency, and no allowance for procedures that even might be biased in favor of (or against) warming.

When the project was announced, climate deniers rejoiced. At last! They would be vindicated when a true skeptic did it right. The climate scientists who had been making global temperature estimates — the ones showing all that global warming — would be exposed as the incompetent boobs or outright frauds they were! That anticipation rested on Richard Muller’s public statements of mistrust in the existing estimates. Anthony Watts, wanting to declare his impartial objectivity, went so far as to proclaim that he would accept the result as reality, no matter how it turned out.

The admiration of climate deniers for the Berkeley Earth Surface Temperature project didn’t last long. It vanished into thin air as soon as the results were announced. That’s when the climate denier community turned on Richard Muller like a pack of wolves, because the result was: all those other guys (NASA, NOAA, HadCRU) had got it right all along.

Here’s what Richard Muller had to say in a 2012 op-ed in the New York Times:

CALL me a converted skeptic. Three years ago I identified problems in previous climate studies that, in my mind, threw doubt on the very existence of global warming. Last year, following an intensive research effort involving a dozen scientists, I concluded that global warming was real and that the prior estimates of the rate of warming were correct. I’m now going a step further: Humans are almost entirely the cause.

The opening sentence reveals the difference between a skeptic like Richard Muller, and a fake skeptic like the climate deniers. A skeptic can be converted — a denier cannot.

Berkeley Earth actually produces two principal data sets. One includes sea surface temperature data to make a truly global estimate. Of course, climate deniers won’t trust that one because somebody else produced the sea surface temperature data. But the other, the original Berkeley Earth data, estimates global land-only temperature using data from land stations. It’s a global land-area temperature estimate you can’t accuse of “fraudulent adjustments.”

So: what does the Berkeley Earth Surface Temperature project show for land-only regions since, say, 1850 (when other records, like HadCRU from the Hadley Centre/Climatic Research Unit in the U.K., begin)? This:

[Figure: Berkeley Earth land-only average temperature, 1850 to present]
One important thing to note is that the world’s land areas are warming faster than the oceans. While the land+ocean temperature has increased by about 1°C net since 1850, the land area has warmed much more, by about 1.9°C. Another important result is that the global land-area temperature is warming at a rate of 2.7 ± 0.4 °C/century.

The fact that just might irritate deniers more than any other is that the Berkeley data, the no-way-is-it-fraudulent data, show no evidence of anything even remotely like a “pause” or “hiatus” or “slowdown” in temperature since 1970. No pause. None.

Global land-area temperature is important for another reason: it’s where people live. But I expect we’ll continue to hear insults and accusations about unreliability and/or fraud in land-area temperature, be it from NASA or NOAA or HadCRU or JMA or Cowtan & Way, or even from the Berkeley Earth Surface Temperature project.

After all, deniers tend to be “people of the land, the common clay of the new west. You know …”

This blog is made possible by readers like you; join others by donating at Peaseblossom’s Closet.

32 responses to “People of the Land”

  1. >While the land+ocean temperature has increased by about a net 1°C since 1850, the land area has warmed much more, by about 1.9°C. Another important result is that the global land-area temperature is warming at a rate of 2.7 +/- 0.4 deg.C/century.

    I think this is an important point missed by “lukewarmers” as well. When we hear of 2°C, most folk sort of dismiss it in their mind, but that’s 2°C for land and oceans combined, with a much warmer land. When we hear 4°C (in the range of RCP8.5 by 2100) we might think, OK, a little warmer, but what are we talking about over land? 6-8°C or more? And extremes of 10-15°C or more? Or so Kevin Anderson tells me. That seems… unlivable for most.

    I guess most folk have decided this is a fair exchange for a western lifestyle with inherently profligate emissions. I must say I disagree with that (I live a very low-emissions lifestyle and vote for The Green) but I have to live with their choice as best I can.

  2. That’s quite a memorable scene from Blazing Saddles:

  3. skeptictmac57

    While I welcomed Muller’s change of opinion ( and found it helpful to show deniers) about surface temps, and his admission that he had been wrong, I still have a problem with his arrogant attitude that this wasn’t a settled question until his BEST group came to the same conclusion that had been held by research groups around the world for a couple of decades.
    This is much like Trump’s attitude that he settled the question about Obama’s birthplace, and should be thanked for his help.
    To my knowledge, Muller never apologized for his wrongheaded attacks on Michael Mann and other prominent scientists in his ‘Climategate’ rants, where he impugned their methods and data, and stopped short of calling them frauds while still insinuating it. If he has done so, I would welcome being corrected.

  4. When a ‘skeptic’ rails against the temp records, I point out that every single group or person, be they arch-‘skeptics’ or otherwise, who has taken the raw temperature data and done the hard work of constructing a global temperature record has corroborated the mainstream results. Skeptics even find higher trends than HadCRUT and NOAA.

    If they don’t like BEST, I point them to Roman M and Jeff Condon at the Air Vent, and usually quote them.

    First the obvious, a skeptic, denialist, anti-science blog published a greater trend than Phil Climategate Jones. What IS up with that?


    Several skeptics will dislike this post. They are wrong, in my humble opinion. While winning the public “policy” battle outright places pressure for a simple unified message, the data is the data and the math is the math. We’re stuck with it, and this result. In my opinion, it is a better method.

    Critics often shut up if I ask them to cite a similar effort with a markedly different result.

  5. Muller was not really a skeptic either, though, because his opening skepticism was based on nothing more than “where there’s smoke, there’s fire”. He thought it wrong because he didn’t like Michael Mann’s work (or him) much. He thought it wrong because so many people he knew as intelligent said it was wrong.

    What he was, was honest enough to change his mind about the validity of AGW, though he still doesn’t like Mann much and makes no bones about it.

  6. Deniers always complain.
    If you use all data they say that you include bad data.
    If you use a subroutine that removes bad data, they say that you cherry-pick.
    They say that GISS adjusts/tampers with data. Not true; GISS only applies a UHI adjustment.
    They say that CRU adjusts/tampers with data. Not true; CRU uses the data as it comes, as reported by national weather services. If there is a conspiracy, all national weather services must be involved.

    Judith Curry, Nic Lewis, Spencer & Christy, the Weatherbell folks, Lennart Bengtsson, and soulmates assert that the reanalyses are superior to observational datasets when it comes to surface temperatures.
    I wonder if they like this chart?

    People of the land, the common clay, will likely just dismiss it with a “reanalyses are just models” or something…

    • “Judith Curry, Nic Lewis, Spencer & Christy, the Weatherbell folks, Lennart Bengtsson, and soulmates…”

      I don’t think Lennart Bengtsson is a “soulmate” of Spencer & Curry and I guess his response to that chart would be neutral but interested.

    • If there is a conspiracy, all national weather services must be involved.

      Exactly. Just have a look at the temperature increase for the individual countries based on national datasets.

      MarkR, maybe you should learn Swedish.

      [Response: That’s a post well worth study, and reference to.]

      • Victor,

        I’d love to learn Swedish. Is he saying much different stuff than he did in English? I have a lot of time for Lennart. It’s possible he changed completely since we shared a department, and I think he made a mistake with the GWPF. But it’s understandable how they sell themselves as legitimate, honest actors to people who aren’t paying detailed attention to all the little climate blogs.

      • Yes, he used to be an excellent researcher.

        At least the bunnies say he speaks differently in Swedish. I also do not speak Swedish. Google translate is pretty good though.

      • MarkR, I don’t speak Swedish, but know enough Danish to be able to get by when reading Swedish, without having to use translation services.

        I think it should give you pause to consider that Bengtsson has attached himself to a group in Sweden (The Stockholm Initiative) that now uses the website name “klimatupplysningen” (somewhat freely translated as climate information), but which started as “theclimatescam”. He’s not nearly as radical as some of the others there, who still repeatedly claim it is all a scam.

      • The Swedish GWPF without aristocrats.

  7. Somewhere Muller said Michael Crichton made valid points. Just asinine.

  8. “I identified problems in previous climate studies that, in my mind, threw doubt on the very existence of global warming.”

    It’s all well and good that Muller and his group at Berkeley subsequently confirmed what other researchers had long shown, but how on earth was a physicist thinking the above in 2009? An experienced Ph.D. physicist should be able to understand the greenhouse effect, solar variability, sea level rise, loss of ice, and rise in ocean heat content, and put two and two together. All of this information was clear seven years ago, and Muller never had the excuse available to many fake skeptics, namely that the basic science was beyond him.

  9. Too many shallow criticisms of Richard Muller here. If we insult somebody who looks at the evidence and changes their mind, what is the point of using evidence to convince anybody else to change their mind?

    • Well, Paul, the criticisms of Muller are not at all shallow if you know his ‘skeptical’ history.
      And the point of using evidence to convince anybody to change their mind is that they were wrong, and you want to correct (in Muller’s case) the damaging misinformation and borderline slander he had been publicly pushing on an impressionable public, one that would expect correct information from a learned physics professor rather than a superficial diatribe from someone who had not fully immersed himself in the science of AGW before attempting to debunk it with ad hominem lectures.

    • But that’s not quite what happened, Paul. It wasn’t the evidence that convinced him; Muller outright rejected the evidence that was already available in the scientific literature and said so. What Muller and BEST did was take the raw data, process and analyze it themselves, only to discover that everyone in the field was in fact already doing it correctly and the evidence in the literature was factual and correct. He wasn’t convinced by the evidence until he had replicated it. That’s the way it should work, of course, but we’re talking about established science here, not a new discovery or hypothesis. He didn’t confirm anything that hadn’t already been confirmed by multiple researchers. And in the end the “skeptic” side rejected his effort anyway.

      • “…only to discover that everyone in the field was in fact already doing it correctly”

        No, they were — and most still are — ‘doing it’ using crude grid-based spatial averaging schemes. I’m attracted to the quick and dirty, but when much hangs on the result* there are better methods … which BEST and Cowtan and Way have now applied. That they make only slight differences in this case is hardly the point.

        * For example, when colleagues assess the ‘spatial average’ (hence total mineral content) of a billion dollar orebody, they’d risk being sued for negligence if they just used a simple average of the data points in each grid box (ore block). The industry abandoned that decades ago, largely due to Danie Krige.
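        A toy contrast makes the point about spatial averaging methods. The sketch below uses inverse-distance weighting, a much simpler stand-in for the kriging mentioned above (real kriging also models spatial correlation, which this does not); the sample coordinates and values are made up for illustration:

```python
import math

def idw_estimate(samples, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y).

    samples: list of (sx, sy, value) tuples. Unlike a simple grid-box
    average, nearby samples get much more weight than distant ones.
    """
    num = den = 0.0
    for sx, sy, v in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return v  # exactly on a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

# The two nearby samples dominate; the distant one barely matters.
samples = [(0, 0, 10.0), (1, 0, 12.0), (10, 10, 30.0)]
print(idw_estimate(samples, 0.5, 0.1))  # ~11.01
```

        A plain average of the three values would give about 17.3; distance weighting keeps the estimate close to the local data, which is the kind of difference that matters when much hangs on the result.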

      • gerg, that IS doing it correctly. You don’t use a micrometer to measure the trip across a city, even if that would be more accurate than your car’s odometer.

    • If Muller had expressed, even forcefully, a lack of confidence in the global surface temperature record and its homogenization necessitated by changes in observers, instrumentation, enclosures, times of observation, near-site and regional heat island effects, etc., or if he had expressed doubts regarding the blending of sea surface temperatures with 2 m air temperatures on land, or wondered whether annual temperature anomalies actually are spatially coherent over distances exceeding 1000 km, then his skepticism and new approach might well have been welcomed.

      But to question global warming because of, among other things, his second-hand interpretation of the hacked and selectively leaked CRU emails? The most charitable interpretation is that it was a deeply silly thing to do, and reflected an arrogance that still shone through in his 2012 New York Times op-ed. Perhaps it’s an occupational hazard:

    • Muller’s re-analysis of the land surface temperature record may have been motivated by arrogance and personal animus, but IMO he gave a big boost to the public credibility of climate science.

      As Tamino points out, he established his bona fides with the pseudo-skeptical community by taking their complaints about the thermometer record seriously, then exposed their hypocrisy when his results confirmed the reality of global warming. We might roll our eyes and say “well, duh!”, but we still encounter knee-jerk AGW deniers in public fora who claim the temperature record is unreliable. It’s gratifying to remind them, with links, how arch-denier Anthony Watts pledged to accept Muller’s results, then reneged when they didn’t turn out the way he expected.

      For the un-committed public, who may assume like Muller did that smoke about the temperature record means fire, his high-profile “no longer skeptical of AGW” op-eds in outlets like the New York Times and the Wall Street Journal ought to carry disproportionate weight.

      I think Muller did us a huge favor, for which we should thank him.

      • Thank him for doing work that wasn’t necessary and only done because he wanted to be heard and didn’t care to check? Why?

        Newton is still castigated for being a bit of an a-hole, and he’s done a lot more for science than Muller.

        Accept he’s changed his mind about the temperature trend? Sure. Doesn’t mean we can’t point to continued bile against some prominent scientists, or recall what it was that motivated him to do all this work: he just didn’t like the work or the emails, which he discarded without consideration and misinterpreted out of pique.

        But he did change his mind on the validity of the temperature record.

  10. PaulButler999,

    Thank you for the most useful comment of the lot.

  11. Tamino — First, let me say that I have learned gobs from your posts. Second, let me say, I admire your indefatigable debunking of denialism.

    Have I ingratiated myself enough? I hope so because I have a request that has nothing to do with denialism or statistics for the physical sciences.

    I live in the world of retail and I have a question about the statistics of customer satisfaction surveys.

    I’m sure you have seen such surveys. In one way or another they ask, “On a scale of 1 to 5 rate your satisfaction on X, Y or Z.”

    We are given, every week, a rating of some percent; but that percent is not simply an average of the customer ratings. Rather it is a percent of the customer ratings that equal or exceed a certain threshold. That is: suppose five responses (on a scale of 1 to 5 with a threshold of 5): 1, 5, 5, 4, 3. The average response is 3.6, but the percent meeting the threshold is 40%.

    What kind of statistic is this? How does one take account of the self-selection of such a survey?

    If you don’t want to get into all of this, can you point me to a place that can help me get a handle on it?

    Thanks in advance.

    [Response: The problem with taking a numerical average (to get 3.6) is that the data aren’t really numerical. They’re what’s called *ordinal* — it’s really just five categories (1, 2, 3, 4, and 5) but they do have an *order* (5 is bigger than 4 is bigger than 3 etc.) so it’s not simply categorical data. BUT (big but) the difference between 1 and 2 isn’t the same as that between 2 and 3, or 3 and 4, etc. Hence a “1 and a 3” isn’t really the same as a “2 and a 2”. Maybe it’s better to think of it as “E, A, A, B, C” rather than “1, 5, 5, 4, 3.”

    By taking the fraction above a given threshold you have a binomial statistic, which is fine to apply to categorical data. Its problem is that it loses information, taking no account of the fact that “1” and “2” and “3” and “4” aren’t the same. Categorical data with multiple categories can be treated as a “multinomial” experiment, but that ignores that the categories are ordered.

    A threshold approach can also take advantage of the fact that research shows some responses can be considered quite similar. There is, for example, something that has been called the “ultimate question” which is a “scale of 1 to 10” query, for which it’s been shown that 8, 9, and 10 are all generally good (and are called “promoters”) while 6 and 7 are “meh” with 5 and below being just not good enough. The most useful result is something like the “net promoter score.”

    In order to account for confounding variables, one would typically apply what’s called “ordinal regression.” It makes no assumptions about *how much* bigger 2 is than 1, or 3 is than 2, etc.

    To account for selection bias, I’m not sure, but I suspect you’d want at least *some* data about the people who don’t participate in the survey. This can be as simple as how many customers you had. For instance: suppose two weeks go by during which you get a dozen surveys and they’re all rated “5.” But the first week is a dozen returned surveys out of 20 customers, the second week it’s out of 200 customers. The first week represents a much larger fraction of customers sufficiently motivated to return the survey (and rank it high), so could be viewed as a better customer endorsement despite having an identical fraction over threshold. After all, there really are six categories: 1, 2, 3, 4, 5, and “didn’t return a survey.” The “didn’t-return-a-survey” category doesn’t fit in the ordinal scheme.

    There’s also the fact that most surveys include a “comment” option. This is often (maybe even usually) ignored, but I think it’s the most fertile part of the survey. The problem is that there gets to be a large volume to read and the data are what’s called “unstructured”, but there are ways (e.g. automated text categorization) to get terrific info from text-based feedback. If customers do it online, the added bonus is they’re doing the data entry for you.

    In all, I think there’s a tremendous wealth of information to be had from such surveys. A simple “40% above threshold” barely scratches the surface. But it is at least genuinely binomial, and doesn’t rely on a numerical average of non-numerical data.]
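    A minimal sketch of the two summaries contrasted in that response (the numeric average versus the fraction at or above a threshold), plus the response-rate idea; the function name and output layout here are illustrative, not from any survey package:

```python
from collections import Counter

def survey_summary(ratings, threshold=5, n_customers=None):
    """Summarize 1-5 survey ratings several ways.

    ratings: list of integer responses (1..5)
    threshold: minimum rating counted toward the "percent at threshold"
    n_customers: total customers (responders + non-responders), if known
    """
    n = len(ratings)
    summary = {
        "counts": dict(Counter(ratings)),  # the multinomial view
        "mean": sum(ratings) / n,          # numeric average (dubious for ordinal data)
        "pct_at_threshold": 100.0 * sum(1 for r in ratings if r >= threshold) / n,
    }
    if n_customers is not None:
        # treats "didn't return a survey" as its own (sixth) category
        summary["response_rate"] = n / n_customers
    return summary

# The example from the question: ratings 1, 5, 5, 4, 3 with threshold 5
s = survey_summary([1, 5, 5, 4, 3], threshold=5)
print(s["mean"])              # 3.6
print(s["pct_at_threshold"])  # 40.0
```

    With `n_customers` supplied, two weeks that both score “100% at threshold” can still be distinguished by response rate, as in the 20-customer versus 200-customer example.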

  12. Weekly CO2

    October 16 – 22, 2016 401.65 ppm
    October 16 – 22, 2015 398.49 ppm increase of 3.16 ppm in noisy number

    Daily CO2
    October 24, 2016: 402.27 ppm
    October 24, 2015: 398.65 ppm 3.62 ppm increase in really noisy number

    This is the ballgame. As long as this number continues to rise, we are in trouble and every ppm increase is deeper trouble. It goes up easy, it comes down hard when considered against the activities of our species.
    Warm regards,


  13. The deniers liked the surface temperature data sets well enough when they thought they showed global warming had come to a halt. Now that it’s obvious even to them that global warming continues apace, they’re back to slinging mud at the data.

    They’re ideologues. It’s extremely difficult to change an ideologue’s mind.

  14. Yep, I think Muller has done a fine job re-doing the temp work from first principles and accepting the results. And he’s written articles about it and appeared in the EdX @Denial101 course so people get to hear about it. That’s all good, and for people who _still_ don’t believe the numbers, well, there’s no helping them really.

    However, I do note that he is active on Quora, answering climate questions, and he’s still pretty much a ‘lukewarmer’ in most of those. As in (I paraphrase) “yes, there is a temp rise, and yes it’s humans, but all this stuff about catastrophe, that’s OTT – we’ll be OK with a few tweaks”. Now that surprises me – he’s clearly smart, _and_ capable of taking in new info, and these days that seems to me to be a rather high-risk/unhelpful standpoint. He does at least answer questions in a way that skeptics/deniers/many-Americans will read, and mostly gives them science, so maybe he’s doing more good than harm overall – it’s hard to tell.

  15. I know that I’ve posted this link previously here, but now is a good time to post it again:

    The post at the above link shows that even “simple-minded” processing of the unadjusted aka raw GHCN land temperature data reproduces the official NASA land temperature results very closely.

    I implemented a very simple anomaly gridding/averaging algorithm using large 20° × 20° grid-cells (at the Equator — longitude grid-cell dimensions were adjusted to keep the grid-cell areas nearly equal as one goes north/south to the poles). No fancy interpolation stuff — just straightforward averaging of really big grid-cells.

    Yet I still got results pretty close to NASA’s.
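    That kind of coarse anomaly gridding can be sketched in a few lines. This is an illustrative reconstruction of the described scheme (20° latitude bands, longitude cells widened toward the poles to keep areas roughly equal), not the commenter’s actual code:

```python
import math
from collections import defaultdict

def cell_of(lat, lon, dlat=20.0):
    """Assign a station to a roughly equal-area grid cell: fixed 20-degree
    latitude bands, with longitude cells widened by 1/cos(latitude) so
    each cell covers about the same area."""
    band = int((lat + 90.0) // dlat)
    mid_lat = -90.0 + (band + 0.5) * dlat
    dlon = min(360.0, dlat / max(math.cos(math.radians(mid_lat)), 1e-6))
    return band, int((lon + 180.0) // dlon)

def gridded_mean(stations):
    """stations: list of (lat, lon, anomaly). Average the stations within
    each cell, then average the cell means; since the cells are roughly
    equal in area, an unweighted mean of cell means approximates an
    area-weighted land average."""
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        cells[cell_of(lat, lon)].append(anom)
    cell_means = [sum(v) / len(v) for v in cells.values()]
    return sum(cell_means) / len(cell_means)

# Two stations crowded into one cell don't outvote a lone station elsewhere
print(gridded_mean([(5, 5, 1.0), (6, 6, 1.2), (55, 120, 0.4)]))  # ~0.75
```

    The crowded cell contributes a single mean (1.1), so dense station coverage in one region doesn’t dominate the overall figure.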

    When crunching the GHCN data, I noticed that hundreds of “airport” stations had data records going back prior to the Wright Brothers first flight. It is quite obvious that those “airport” stations, established before there was any such thing as an airport, must have been moved at some point in their history. (And it does turn out that many of those “airport” stations were originally located in city centers.)

    So I re-did the processing with all the “airport” stations excluded. And what happened? My results matched NASA’s even more closely!

    What I did tells a very simple story that even non-technical folks can readily understand (this makes it more effective against denier attempts to sow confusion).

    Key take-home points:

    1) The global warming signal is so strong that even the simplest processing of raw temperature data brings it out. The warming signal basically “jumps right out at you”. Raw-data results closely align with adjusted-data results right from the get-go: no fancy data adjustments are required like they are with satellite data.

    2) Much of the modest difference between the raw and adjusted data results can be attributed to corrections for station moves (corrections present in the adjusted, but not the raw, data). People may not understand TOB (Time of Observation Bias), but they do understand station moves.

    3) A simple “first cut” look at the impact of station moves can be performed simply by excluding “airport” stations. It’s definitely a flawed analysis of the effects of station-moves (simply excluding all “airport” stations doesn’t catch stations that were moved to non-airport locations and also excludes many stations that weren’t moved), but it’s an easy “first cut” analysis step. (The GHCN metadata includes an “airport” flag that makes excluding airport stations really easy). And it turns out that the simple brute-force exclusion of all “airport” stations visibly reduces the bias between raw data results and the official NASA adjusted data results.
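    The “exclude airports” first cut in point 3 amounts to a simple filter. The record layout below (dicts with an `is_airport` flag) is an assumption for illustration; the real GHCN inventory is a fixed-width text file whose parsing is omitted here:

```python
def exclude_airports(stations):
    """Drop stations flagged as airports; `stations` is assumed to be a
    list of dicts that each carry an 'id' and a boolean 'is_airport'
    field (parsed earlier from the GHCN inventory's airport flag)."""
    return [s for s in stations if not s["is_airport"]]

inventory = [
    {"id": "ST001", "is_airport": True},
    {"id": "ST002", "is_airport": False},
    {"id": "ST003", "is_airport": True},
]
print([s["id"] for s in exclude_airports(inventory)])  # ['ST002']
```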

    (OK, there’s no “edit” feature here — hopefully, there aren’t too many stupid thinkos/typos in this post!)

  16. Here’s my visual comment on the situation 2 days before the US election: a faint sun, struggling to break through wildfire smoke–generated in part, we may plausibly guess, by our fossil fuel-sourced carbon emissions, as manifested in a record-warm season near Franklin, NC–and a chief modality by which we just keep on adding to said emissions.

    A vote for Trump is a vote to just keep the smoke coming.