Anthony Watts noticed that the NOAA/NCDC global temperature for May 2013 was touted as the 3rd-hottest May on record. He didn’t like that, so he decided to call it into question. In fact he refers to the NCDC reported value as “irreconcilable.”
He compares the NCDC result to the UAH TLT (lower troposphere) temperature, mentioning that the UAH value is only 0.07 while the NCDC value is 0.66 — big difference! But this quite ignores the fact that the UAH TLT data and the NCDC data use a different baseline, i.e. a different zero point. You shouldn’t compare them without accounting for that. By the way, the UAH people may have revised their May 2013 figure ever so slightly, since the data now lists 0.08. No big deal really.
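As an aside for readers unfamiliar with the baseline issue, here is a minimal sketch, with made-up numbers rather than the real NCDC or UAH data, of how differing baselines manufacture an apparent disagreement between two series that actually agree:

```python
import numpy as np

# Toy illustration (synthetic numbers, NOT the real datasets): two anomaly
# series that track each other exactly but are expressed relative to
# different baselines, like NCDC (1901-2000) vs. UAH TLT (1981-2010).
rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(0.0, 0.02, 160))  # shared underlying wiggle

ncdc = signal + 0.40  # anomalies vs. a cooler (earlier) baseline
uah = signal + 0.00   # anomalies vs. a warmer (recent) baseline

# Comparing the raw numbers exaggerates the disagreement:
naive_gap = ncdc[-1] - uah[-1]  # ~0.40, purely a baseline artifact

# Re-baseline both series to a common reference window before comparing:
ref = slice(0, 30)
real_gap = (ncdc[-1] - ncdc[ref].mean()) - (uah[-1] - uah[ref].mean())
print(round(naive_gap, 2), round(real_gap, 2))  # the series actually agree
```

The 0.40 here is arbitrary; as shown later in the post, the empirical average of NCDC minus UAH is 0.413 deg.C.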
He also compares to the RSS TTT data set. It’s more customary to compare surface temperature to their “TLT” data set (lower troposphere) rather than “TTT” (total troposphere), since TLT is, well, closer to the surface. No big deal really, except that Watts seems unaware of what he’s doing. And of course, once again the data sets use a different baseline which must be accounted for when comparing.
He also compares the NCDC figure to the “WeatherBell 2 meter global temperature reanalysis,” but this is quite odd because not only is there a different baseline which Watts doesn’t account for, the graph he shows and the figure he quotes seem not to be a monthly average for May — they look like figures for 6:00 P.M. Greenwich Mean Time on May 31st. You really can’t compare a momentary value to a monthly average if you want to give an honest portrayal.
He even compares the NCDC figure to the report from NASA GISS, having this to say:
One thing is clear, since GISS almost always reads higher than other datasets, including NOAA, and in this case NCDC’s claim is higher than any comparable dataset, it doesn’t seem believable. Perhaps a correction will be forthcoming.
Besides not accounting for baseline differences and trying to compare a momentary value to a monthly average, Watts seems not to have investigated at all what kind of differences are typical for these data sets. Even if NCDC is higher than the others this month, is the difference really notable, or is it typical of differences that have occurred often? I suspect that Watts simply wanted to repudiate the NCDC report, and in his eagerness he jumped the gun.
It’s easy to check whether the latest NCDC figure is “out of whack” with other data. Here’s the difference between the NCDC data and the UAH TLT data for each month since 2000:
The May 2013 difference is on the high side, but so are a lot of other months. Certainly this month’s difference is not “irreconcilable.” And by the way, the average difference is 0.413 deg.C, so you really can’t ignore that baseline difference if you want to give an honest portrayal.
How about RSS? Let’s look at the difference between NCDC and RSS TLT:
Again, the May 2013 difference is on the high side but so are a lot of other months. Certainly this month’s difference is not “irreconcilable.” And by the way, the average difference is 0.341 deg.C, so you really can’t ignore that baseline difference if you want to give an honest portrayal.
What about the RSS TTT data?
Yet again, the May 2013 difference is on the high side but so are a lot of other months. Certainly this month’s difference is not “irreconcilable.” And by the way, the average difference is 0.365 deg.C, so you really can’t ignore that baseline difference if you want to give an honest portrayal.
As for GISS, here you go:
One more time: the May 2013 difference is on the high side, but so are a lot of other months. Certainly this month’s difference is not “irreconcilable.”
And by the way, the average difference is only 0.006 deg.C, but on average NCDC is higher than GISS, not the other way around as Watts claimed.
Perhaps a correction will be forthcoming.
Perhaps a correction will be forthcoming.
I love good satire.
Pigs will fly.
hey it worked for Duke Nukem Forever !
… wait, we saw the result.
Wouldn’t it be nice.
I’d rather be outside his head looking in than inside it looking out.
Correction? From Tony?
It seems Watts, quite some time ago, became almost religious in his anti-AGW stance. The turning point seems to have been the Monckton vs Hadfield debate, when WUWT’s favorite strident British buffoon abruptly decided that the Obama birth-certificate chase took precedence, along with the Muller BEST results that weren’t to his expectations.
These are a few of my favorite things… ;)
Well, since you asked for it.
“My Favorite Things”
– Horatio Algeranon’s rendition of the Rodgers and Hammerstein song (from “The Sound of Music”)
Curry and Roses and high diving horses
Mything with Moncktons, divining with dowsers
Skeptics with foolishness hung with their strings
These are a few of my favorite things
“Blog Science” phonies and short trend balonies
Ding dongs and ding bats and blogging at Tony’s
Theories that fly with the moon on their wings
These are a few of my favorite things
Graphs from fake skeptics with BS statistics
Fact-fakes that stray like erratic ballistics
Sea-ice “recoveries” that melt into springs
These are a few of my favorite things
When the blog bites
When “tee hee” stings
When I’m feeling sad
I simply remember my favorite things
And then I don’t feel so bad
If anyone wonders about the reference to “diving horses” above, read “Morner-ing has broken.” Morner is also apparently a fan of “dowsing” (also known as “witching”), not incidentally, which may partly explain his fascination with horse-diving.
And if anyone wonders about the “Curry and Roses” part, read “The Rose”.
If anyone wonders about any of the rest, it means they have been living under a blog for the last few years.
Bravo, Horatio. Just the chuckle I needed to start my day.
Ah! You’ve Trapped it perfectly!
Uh, MorinMoss, it’s *always* been like that with Watts.
Meanwhile, the baseline ignorance is as precious and hilarious as ever.
Ha ha. I notice that Watts still ignores baseline differences. He’s been acting more of a goose lately than normal.
In fact all the deniers have shown signs lately that they are going off the rails, with Curry and her “cooling” trend, and her perennial fantasy that scientists don’t do “uncertainty”; and Watts and his insects (of the Voisin type) and bats (of the rg kind) etc.
Watts has been told about the need to correct for different base lines how many times? Well, maybe we will finally see the third(?) part of his infamous series.
“No big deal really, except that Watts seems unaware of what he’s doing.”
Nothing new there. Yes, I agree that Watts has become religious in his stance. I doubt he could ever give it up.
So, if I get it right, Mr Watts *once again* does not understand what a baseline is …
Given the numerous attempts to explain this point to him, we can now safely assume he is truly irreconcilable with simple data handling.
It’s not like he has to understand anything to meet the needs of his audience…
A comparison of the full GISS & NCDC record shows the unexceptional nature of the May 2013 figures even more starkly, as this linked graphic demonstrates. Usually two clicks ‘to download your attachment’. To assist the eye, May 2013 (weighing in at 0.1003ºC) has been marked black.
Watts knows about the different baselines. He learned it the hard and embarrassing way.
The satellite measurements of the lower troposphere seem to have more pronounced spikes and dips than the surface measurements, so one can expect to find deviations between the two types of series (apart from the different baselines).
Did he learn it, or did he forget it since he “learned” it over five years ago?
Ahhh, yes. The baseline analysis. If you look at parts 1 and 2 of that ‘brilliant’ 3-part Watts analysis, you’ll notice that there are no comments by anyone named Lee. That was me. Was.
When I publicly asked Tony a couple weeks later when we could expect part 3 of that series, the promised post where he would make it clear why baselines mattered – or didn’t matter, or something – he banned me permanently from WUWT, removed every comment I had made in those two threads, and then went back and removed every (previously moderated and approved) comment I had EVER made at WUWT.
It’s a good thing he doesn’t censor opposing viewpoints.
Or make schoolboy ‘howlers.’
Actually, if you look at part 2 all you get is:
Apologies, but the page you requested could not be found. Perhaps searching will help.
Part 2 is still available on archive.org. The article was written on 6th March 2008, first captured on 10th of March and the latest good copy is from 27th July 2008. The next trawl on 4th May 2009 returned a HTTP 302 status (“Moved Temporarily”), which points to the current 404-page.
Thank you Blog Historians.
Hmm…Repeating the same mistakes, covering up evidence of it and expecting different results…
Ignoring different baselines for the temperature readings is as dishonest as claiming 0 degrees centigrade is a lower temperature than 32 degrees Fahrenheit. Watts is NOT HONEST. He knows better. This has been pointed out to him before. The man is performing a POLITICAL job, since he is a paid officer of the Republican Party in his county. Stop expecting science from him. That’s not his job.
He didn’t take *baseline* into account? Oy!
Well,Watts really isn’t a climate scientist,so mistakes are expected…right?
I’m sure Spencer or Christy will set him straight right away.
As I said yesterday over at the rabbit warren:
“Once again, Anthony Watts has forgotten how to account for anomalies and baselines:
Sez NOAA, as quoted by Watts: “0.66°C (1.19°F) above the 20th century average”
Sez Watts: “Even NASA GISS is lower according to their May monthly combined global data which comes in at +0.56°C compared to NCDC’s claimed value of 0.66°C”
Sez the linked GISS data page: “base period: 1951-1980″
Sez a quick calculation with Excel: the difference between the GISS 1951-1980 period and the 1901-2000 period is 0.05 degrees C.
Whee! In one step, I have reconciled over half of the irreconcilable difference!!!
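That one-step reconciliation is just a difference of means over the two baseline periods. A sketch with a synthetic annual series (illustrative only; with the real GISS data the offset comes out at roughly the 0.05 deg.C quoted above):

```python
import numpy as np

# Synthetic stand-in for an annual global anomaly series, 1880-2012
# (illustration only; the real GISS series gives an offset near 0.05 C).
years = np.arange(1880, 2013)
rng = np.random.default_rng(1)
anom = 0.007 * (years - 1880) + rng.normal(0.0, 0.1, years.size)

mean_5180 = anom[(years >= 1951) & (years <= 1980)].mean()
mean_0100 = anom[(years >= 1901) & (years <= 2000)].mean()

# Moving a value from the 1951-1980 baseline to the 1901-2000 baseline
# means adding the difference of the two baseline means:
offset = mean_5180 - mean_0100
giss_may = 0.56                       # anomaly vs. 1951-1980
giss_may_rebased = giss_may + offset  # now comparable to NCDC's 0.66
print(round(offset, 3))
```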
Tamino, I think you did call Watts out legitimately on most of the wrong things he did in that post. But if you look closely at the “WeatherBell 2 meter global temperature reanalysis” diagramme in his post, it does appear to be a monthly average from 1st through 31st of May (not just the reading on 31st May as you stated)… albeit using a 1981 – 2010 baseline. But at least he got part of it right. Give credit where credit is due :-)
The proclamation made by the Lord of Wattsupia about that weatherbell map & anomaly is rather odd.
He has appended a note under the map in question to point out that it does truly represent “the entire month of May” and not “…a single day as some suggest.” To me, this appended note is highly indicative of somebody who actually does ‘care a whit’ and probably more. Yet his Lordship says otherwise.
And there’s more. His Highness also replies to the commenter who (crediting this Open Mind post for noting NCDC is often higher than GISS) pointed out that the map in question was for a single day, His Eminence’s response being as follows:-
“I care not a whit what “Tamino” says, he’s too angry to bother reading. Ryan sent me that in email, representing it as monthly, I didn’t notice that is was a single day, but I think you may be making a mistake and conflating May31st as a day and month end run on that date. Note the title gives a RANGE from 00ZMay 1 to 18ZMay 31.(My emphasis.)
So why write the emboldened bit? It is surely nonsense, unless … unless it is a ‘pre/post cover-up’ comment that got its editing screwed up.
“He also compares the NCDC figure to the “WeatherBell 2 meter global temperature reanalysis,” but this is quite odd because not only is there a different baseline which Watts doesn’t account for, the graph he shows and the figure he quotes seem not to be a monthly average for May — they look like figures for 6:00 P.M. Greenwich Mean Time on May 31st. You really can’t compare a momentary value to a monthly average if you want to give an honest portrayal.”
That graphic on their page really does seem to reflect the entire month. At least when I looked at it this morning, it showed a range of dates at the top.
[Response: Evidently I was mistaken.]
Those Ryan Maue Weatherbell graphics get thrown around the internet a lot; in fact I’ve seen them pop up several times on Dr Jeff Masters’ blog. One blogger even suggested that he had figured out an “adjustment” you could apply to the reanalysis data from Weatherbell to make a “forecast” of what the other datasets would end up with. That blogger had to be reminded that it really wasn’t a forecast at all; it was simply taking into account the global warming that occurred between the two baselines.
“Micro” Watts is a layman–and what is more a layman with only a high school education. He can be forgiven for an occasional scientific blunder. However, making the same blunder over and over and over again, no matter how often it is pointed out to him just makes me wonder whether his learning curve has a positive slope.
Aunt Judy, on the other hand, used to be a scientist. She either does or ought to know better. She apparently loves the fawning of idiots more than the respect of scientists.
I see Watts is blaming GISS for his incompetence, once again falling back on the claim that their baseline is improper (it’s not), instead of blaming himself for failing to do the analysis properly… my comment in case it gets deleted:
“As for baselining, in a perfect world, GISS would give up their antiquated baseline, and we’d have all datasets using the same baseline. Also in a perfect world, we’d be seeing absolute temperature data plotted in parallel with anomalies”
In a perfect world, the U.S. would go metric…
It naturally follows that 0 C is colder than 10 F, right?
That’s actually pretty common. After all of the errors in his 2009 paper were pointed out, Lindzen blamed the CERES/ERBE data, referring to it as “grotesque”. Someone should remind him that a craftsman never blames his tools.
Hey, I think I understand the problem Tony “Micro” Watts has with baselines–if you don’t believe the temperature is changing, then baseline shouldn’t matter!
Watts has made a career out of denial of base-line reality.
So he puts the ‘base’ in base-line?
Well, this is going to be fun with them baselines then. NSIDC will change its baseline in July: from 1979-2000 to 1981-2010. Expect huge cries of “recovery”!
The other very interesting thing of note, besides the fact that Anthony Watts is making the basic mistake of not adjusting baselines: he’s trying to compare satellite anomalies directly to land-based thermometer anomalies, completely ignoring the fact that they act differently during different ENSO states. Of course, actual inquisitive minds and scientists know this is why, over a shorter period, we have seen less warming in the satellite-derived data than in the land-based data: the short trends typically run from an El Nino to a La Nina, with both amplified (as is typical for those datasets). Meanwhile, over the longer term, the trends are basically the same.
I doubt Watts is concerned with being correct. Essentially, he is throwing red meat to his audience, playing to their prejudices, creating a target for derision. He knows that they and the vast majority of those who will hear of his attacks on NOAA/NCDC secondhand won’t bother to look things up and figure out that his attacks are unfounded. Standard Operating Procedure for him and many like him, whatever the media.
I do not mind a little bit of bashing of Anthony Watts and his denial-cult blog; after all, I have some motive for it too. However, since we are on the subject of temperature anomalies, I would like to raise a serious question. So, unlike in the GWPF thread, this time I am not being facetious.
Has anyone else noticed that the trend estimates in the data sets for the globally averaged surface temperature anomaly since 1998 have become statistically significantly different from the multi-decadal trend estimates since the mid-1970s? Not yet at the 95% level, but almost, except for the BEST land-only data. This is different from the situation at the beginning of 2012. Both the trend estimate and the 2-sigma range since 1998 have come down since then.
I use the Skeptical Science trend tool (http://www.skepticalscience.com/trend.php), which is based on the methodology of the Foster and Rahmstorf, Environ. Res. Lett. (2011) paper (http://dx.doi.org/10.1088/1748-9326/6/4/044022). For autocorrelation I use 1975 to present. (What is the correct choice of the autocorrelation period?)
These are the trends and 2-sigma ranges in K/decade for the two time periods. The statistical null hypothesis is that there has been no change in the trend since 1975.
           1975-present      1998-present      Significance level
GISTEMP:   0.169+/-0.039     0.057+/-0.14      >85%
NOAA:      0.159+/-0.036     0.033+/-0.131     >90%
HadCRUT4:  0.169+/-0.041     0.038+/-0.149     >90%
For land only data:
BEST: 0.264+/-0.063 0.144+/-0.26 90%
This may all be just a statistical artifact, due to the cherry-picked start year of 1998 and the very strong El Nino in that year, the prevalence of La Ninas in recent years, and the deeper and prolonged solar minimum. The statistical significance may go away again as data from coming years are added. Even a 95% significance level still allows for one false rejection of the null hypothesis out of 20. However, with these numbers I do not feel comfortable rejecting the possibility that some empirical, statistical evidence has emerged for a significant change toward a lower warming trend in recent years. Those numbers actually indicate a quite high probability for such a change.
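For the curious, the kind of two-trend comparison described above can be sketched as follows, on synthetic monthly data. This uses plain OLS with a crude variance inflation factor standing in for the proper ARMA autocorrelation correction that the Skeptical Science calculator applies, so the numbers are only indicative:

```python
import numpy as np

def trend_and_sigma(y, nu=2.0):
    """OLS slope per time step and its 1-sigma error; `nu` is a crude
    variance inflation factor standing in for an autocorrelation
    correction (the SkS tool uses a proper ARMA(1,1) correction)."""
    n = y.size
    t = np.arange(n, dtype=float)
    t -= t.mean()
    slope = (t @ (y - y.mean())) / (t @ t)
    resid = y - y.mean() - slope * t
    return slope, np.sqrt(resid @ resid / (n - 2) / (t @ t) * nu)

# Synthetic monthly anomalies, 1975-2013: steady trend of ~0.17 K/decade
rng = np.random.default_rng(2)
n_months = (2013 - 1975) * 12
y = 0.0014 * np.arange(n_months) + rng.normal(0.0, 0.1, n_months)

b_long, s_long = trend_and_sigma(y)                         # since 1975
b_short, s_short = trend_and_sigma(y[(1998 - 1975) * 12:])  # since 1998

# Difference of the two estimates in combined-sigma units (note the two
# estimates are not independent, since one period contains the other)
z = (b_long - b_short) / np.hypot(s_long, s_short)
print(round(b_long * 120, 3), round(b_short * 120, 3), round(z, 2))
```

The striking feature is how much wider the post-1998 uncertainty is: with only 15 years of data, the 2-sigma range swallows trend differences that would be decisive over the full record.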
It is not inconsistent with the somewhat smaller increase in the ocean heat content (OHC) in the upper 700 m over the last decade compared to the decades before. http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/
It may be worth checking whether the smaller slope of the 700 m OHC in recent years is statistically significantly different from the slope in the decades before.
Any opinions on those numbers and what conclusions can be drawn from them?
[Response: It’s easy to underestimate the potency of cherry-picking a start year for changing a trend estimate. You’d need a *much* higher significance level to declare a genuine change when you pick the start year for that purpose. For an idea of just how much, see this.]
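The simulation described in the response can be sketched along these lines (my own simplified white-noise version with a freely chosen start point, not the actual code behind the linked post):

```python
import numpy as np

def t_value(y):
    """t-statistic of the OLS slope of y against time."""
    n = y.size
    t = np.arange(n, dtype=float)
    t -= t.mean()
    slope = (t @ (y - y.mean())) / (t @ t)
    resid = y - y.mean() - slope * t
    return slope / np.sqrt(resid @ resid / (n - 2) / (t @ t))

rng = np.random.default_rng(3)
n_years, n_sims, min_len = 39, 1000, 10  # e.g. 1975-2013, keep >= 10 years
max_abs_t = np.empty(n_sims)
for i in range(n_sims):
    y = rng.normal(size=n_years)  # trendless white noise
    # try every allowed start year and keep the most "significant" trend
    max_abs_t[i] = max(abs(t_value(y[s:])) for s in range(n_years - min_len + 1))

# The threshold needed when the start year is chosen to maximize the
# effect, versus the naive single-test threshold of about 2:
print(round(float(np.quantile(max_abs_t, 0.95)), 2))
```

Because you get to pick the most extreme of many overlapping tests, the 95th percentile of the maximum |t| lands well above the naive critical value, which is exactly why a cherry-picked start year needs a much higher bar.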
Thanks, Tamino. That’s interesting.
Do I understand correctly how you derived the higher t-value? The higher threshold t-value you got, which must be exceeded to claim statistical significance when the starting point is deliberately cherry-picked, is the value that 95% of the maximum t-values from the 1000 white-noise simulations fall at or below, with only 5% larger?
So, the cherry-picking of the start year isn’t only a choice to make claims without bothering about statistical significance, it actually really distorts the results from the statistical significance test, giving you a false answer. How come? Is this due to a decrease in the degrees of freedom, when you cherry pick? There must be some mathematical formula for that.
In reality, when you deal with measured data from Nature, which you can’t just rerun a 1000 times, and where you typically deal with limited data, this really poses a problem for the fidelity of statistical significance tests, doesn’t it? I guess you really have to pay attention also to check for the robustness of the results, to be sure they are not distorted by outliers. Oh, that’s so depressing. But on a positive note, global warming is still with us. ;-)
Jan, the choice of start year should make a big difference, as does the length of the period. Compare your results with starting at 1994 (half the time since 1975), 1997, and 1999. That should give a much better sense of the effect of choosing a specific starting point.
Just to make sure: did you use the Foster & Rahmstorf -adjusted data option?
I don’t manage to reproduce your numbers. What were your settings?
BTW I think the principle Tamino refers to is the Bonferroni correction.
No, I didn’t use the adjusted data option. I used this calculator:
Start date for longer trend: 1975
Start date for shorter trend: 1998
No End date
Under advanced options:
Start for autocorrelation period: 1975, no End date
I just double checked. I get the same trend estimates and 2 sigmas from the previous post for the global surface temperature data. Something must have gone wrong with the land only data, though, perhaps when I copied from the editor. This is what I had originally:
For land only data:
BEST: 0.264+/-0.063 0.144+/-0.26 90%
Oh, I see what happened with the land only data. The mathematical symbol for “smaller” was interpreted as html-code. I forgot about that.
OK. Here again:
For land only data:
BEST: 0.264+/-0.063 0.144+/-0.26 <1 sigma
NOAA: 0.275+/-0.052 0.101+/-0.205 >90%
Ah, I see what happened. You used the most recent data; I used the data used in Foster and Rahmstorf.
I suggest you try this too, both “raw” and “adjusted”. See how going to “adjusted”, for 1998-present
1) brings down the uncertainty (you did notice how uncertain those 1998-present trends are didn’t you?), and
2) raises the trend back to (about) the longer-term average.
This should help you understand where those low recent trends come from: natural variability. Take it out (mostly; that’s what the F&R adjustment does) and it goes away.
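A toy version of that adjustment, with a fake ENSO index given a slight downward drift to mimic the recent run of La Ninas (synthetic data throughout; the real Foster & Rahmstorf analysis also uses solar and volcanic covariates with fitted lags):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 180                    # months, roughly 1998-2013
t = np.arange(n) / 120.0   # time in decades
# Fake ENSO index: oscillation plus a slight downward drift, mimicking
# the recent prevalence of La Ninas (purely illustrative numbers)
enso = np.sin(2 * np.pi * np.arange(n) / 45) - 0.3 * t + rng.normal(0.0, 0.3, n)
y = 0.17 * t + 0.1 * enso + rng.normal(0.0, 0.05, n)  # true trend 0.17 K/decade

# Raw trend: regress on time alone
X1 = np.column_stack([np.ones(n), t])
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)

# "Adjusted" trend: include the ENSO covariate as well
X2 = np.column_stack([np.ones(n), t, enso])
b2, *_ = np.linalg.lstsq(X2, y, rcond=None)

r1 = y - X1 @ b1  # raw residuals
r2 = y - X2 @ b2  # adjusted residuals: less scatter, tighter trend
print(round(b1[1], 3), round(b2[1], 3))  # raw vs. adjusted trend, K/decade
```

The raw short-term trend is dragged below the true 0.17 K/decade by the drifting ENSO term; once the covariate is included, the estimate recovers and its uncertainty shrinks, which is the behavior described above.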
…and yes, I did now replicate your numbers.
So, the long and the short of Foster & Rahmstorf is, that the recent so-called pause in global warming is actually a prolonged period of negative-dominant ENSO (aka “La Niña”). I seem to remember a paper by Mann et al. showing that something similar happened back in the Middle Ages, but I don’t find it now. I did find this.
One could speculate of course whether this switch to a persistent ENSO state has anything to do with the human-caused climate disturbance. Without further evidence, this is empty speculation as it appears not to be statistically significant (yet?)
Besides the unforced El Nino/La Nina variability, the effect of the change in solar activity from maximum to minimum of a cycle isn’t negligible either for the energy balance of the planet. The difference is about 0.25 W/m^2 between the maximum of the last cycle and the following minimum. This is roughly equivalent to the additional forcing from the CO2 increase over 10 to 15 years. So, when solar activity moved from the maximum around year 2000 to the minimum in 2007/2008, the negative forcing difference fully compensated the increased forcing from CO2 over the same period, or even more. And it looks like the maximum of solar cycle No. 24 is only about half that of the previous one.
So, even though the effect of the forcing from anthropogenic greenhouse gases is currently dominating the long-term changes of the global energy balance, over a time scale of a decade or so, other factors are of equal importance.
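A back-of-envelope check of those numbers, with illustrative values for the TSI swing, the albedo, and the CO2 growth rate (none of them from the comment itself except the 0.25 W/m^2 ballpark):

```python
import math

# Solar: TSI swings roughly 1-1.3 W/m^2 peak-to-trough over a cycle.
# Convert to a global-mean forcing: divide by 4 (sphere vs. disc) and
# multiply by (1 - albedo). Illustrative values, not measurements.
tsi_swing, albedo = 1.3, 0.3
solar_forcing = tsi_swing * (1 - albedo) / 4  # ~0.23 W/m^2

# CO2: simplified expression dF = 5.35 * ln(C/C0) W/m^2, with growth of
# about 2 ppm/yr from a mid-2000s level of ~380 ppm (ballpark figures).
c0, growth = 380.0, 2.0
for years in (10, 15):
    dF = 5.35 * math.log((c0 + growth * years) / c0)
    print(years, round(dF, 2))
print(round(solar_forcing, 2))
```

On these numbers the solar swing is comparable to roughly a decade of CO2 forcing growth, consistent with the lower end of the 10-to-15-year figure quoted above.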
I just want to caution. The other trend calculator, the one with the adjusted data option,
doesn’t include the recent years. The data end around the end of 2010.
Jan P Perlwitz.
The normal run of the solar cycle does leave its mark on the surface temperature record, but it is not very noticeable. Just at the point when the rise (or drop) in Total Solar Irradiance begins to produce a noticeable effect within the noisy temperature data, the SC reverses, cancelling out its work. Thus it requires statistical analysis to identify its presence.
A reduction in the strength of solar output would only really become significant if it is maintained over more than one cycle.
SC No 24 is short on TSI as well as low on sun spots (See graph here. Usually 2 clicks to ‘download your attachment’) but as with the cycle itself, the effect of that reduction on the temperature record remains within a single cycle and is only noticeable with analysis of the data.
This remains true even if the most significant aspect of the weak SC No 24 is accounted for – its arrival 20 months later than recent SCs (see here).
Above and beyond the normal SC cycling, the net effect of a late, weak SC No 24 so far cancels out the rise in human GHG forcings over the last 3 years. To date, such an effect will be noticeable within the climate data but the impact on temperature would still require analysis. Reduced TSI is certainly not responsible for the noticeable slowing of surface temperature rise over the last few years.
Yes, I understand that the result depends on the length of the period chosen. However, if you don’t get statistical significance for the other start years, i.e., the rejection of the Null-hypothesis fails, it only tells you that the result is not conclusive. It doesn’t falsify the alternative hypothesis. I am more puzzled about the instance with the (false) statistical significance.
I suppose, for a number of years after the multi-decadal warming trend started to emerge in the 1970s, about which we know in hindsight today that it is highly statistically significant, there was a similar situation. Trend estimates were statistically significant or not, depending on what exact start year was chosen for the statistical test. The question is then, what is the number of years required for successfully rejecting the Null-hypothesis to say the result is conclusive with high probability?
Perhaps it is like the pile of screws. If you put one screw on a table, it’s just a screw lying around. Then you add another screw. It’s two screws lying around. And so on, one screw after another. At the end you have a pile of screws. But at what exact number of screws did it stop being just a bunch of screws lying next to each other and become a pile? It’s not possible to say. Even photographing every step won’t help you find the answer.
Jan P Perlwitz
I was actually just refreshing my memory on the effects of ‘length of period’ on confidence levels of regressed trends. The bounds reduce linearly with the spread of data (ie start year to end year) while also reducing roughly with the square root of the number of data points (which of course also increases by the no of years sampled). So with the same noise level on the data, you could expect the trend confidence interval 1975-date to be about a third the size of the 1998-date confidence limit – 1/(Period^1.5).
And cherry-picking: a data point lying at the start (or end) of a period under analysis that sits out near the 95% confidence limit can (as 1998 does) shift the average regressed trend for the period halfway to what would otherwise have been the 95% limit. So when the period is short, and thus the confidence interval is big and the trend close to statistical significance (95%), cherry-picking a 1998 at the start (or end) of an analysis can easily halve (or almost double) the trend estimated from the regression.
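The scaling claimed above follows directly from the OLS slope-error formula; a quick check (white noise assumed, so autocorrelated real data will deviate somewhat):

```python
import numpy as np

# For white noise of standard deviation sigma over n evenly spaced points,
# the OLS slope standard error is sigma / sqrt(sum of squared centered
# times) = sigma * sqrt(12 / (n**3 - n)), i.e. roughly proportional
# to n**-1.5.
def slope_se(n, sigma=1.0):
    t = np.arange(n, dtype=float)
    t -= t.mean()
    return sigma / np.sqrt((t ** 2).sum())

n_long, n_short = 38, 15  # annual points: 1975-2012 vs. 1998-2012
ratio = slope_se(n_short) / slope_se(n_long)
print(round(ratio, 2), round((n_long / n_short) ** 1.5, 2))  # nearly equal
```

With the table's numbers upthread (0.039 vs. 0.14), the observed ratio is about 3.6, in the right neighborhood of this white-noise estimate.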
I wonder whether the differences between the different datasets are due to anything systematic, or mostly random?
Is this due to different coverage – so perhaps GISS is colder this month because the Arctic is a bit colder [assuming GISS has better coverage over the Arctic]?
> over a time scale of a decade or so, other factors are of equal importance.
You seem to be arguing that natural variability has as much effect as fossil fuel burning — but fossil fuel CO2 isn’t averaging out to zero; the other factors do, over time.
I argue that other factors can be equally important on a time scale of a decade, at most two. In the longer run, with the current rates of increase in anthropogenic greenhouse gases, the latter are the dominating factor for the change in the global energy balance over the multiple decades and centuries ahead, unless for some reason the aerosol input into the atmosphere strongly increases (e.g., due to a series of major volcanic eruptions in the tropics, or, even more significant, a supervolcano eruption), which could have quite a significant cooling effect, counteracting the warming at least for some time. So I don’t really see any contradiction with what you said.
A slowing in the tropospheric/surface warming trend is conceivable to me, even with a steady increase in the radiative forcing from greenhouse gases, due to a possible prolonged phase during which the transport of heat into the deeper layers of the oceans is increased, which already may have been the case in recent years.
What is the effect on the pools of ocean water that are used by the various ENSO events? Cold water up welling in the Eastern Pacific. Warm water downwelling in the Western Pacific? Most importantly, what happens during ENSO neutral? Over a few decades, it all gets warmer, right?
JCH, Yesterday’s post @ SkS is very much related to this question (perhaps it was the source of your query).
I agree with what you write with respect to ENSO. However, an increase in the efficiency of the heat advection into the deeper layers of the oceans is a different matter. It has an effect like increasing the heat capacity of the upper layers of the oceans. So, it would actually slow down the warming rates as long as the increased heat advection is present. It takes longer to reach the new equilibrium state for the given radiative perturbation.
If the heat is advected more efficiently to deeper ocean layers, wouldn’t that imply that the new equilibrium state is reached faster, at the expense of a temporarily (?) reduced transient climate response due to less warming of the upper ocean layer? Not sure, just wondering …
Arch – no, I’ve been looking through Google scholar for months to try and find out ways warming oceans might change ENSO. So I’m off to read it. Thanks.
Jan – I would have to completely disagree: increased advection into the deeper oceans speeds climate heating rates, speeds equilibrium. A cooler atmosphere means a higher radiative imbalance, more energy going into the deep oceans, and hence a shortening of climate transients, of total adjustment to imbalances. Atmospheric temperatures may be (temporarily) lower, but the heat content of the climate as a whole (93% of which is the oceans) increases faster in the presence of GHG forcings and a cooler atmosphere.
The difference between incoming and outgoing climate energy, and heat content change over time, is simply larger under circumstances of higher advection.
JCH: “…ways warming oceans might change ENSO…”
Sorry, I misunderstood the question.
The only paper I remember reading of possible relevance (it was speculative about the cause) was about “El Nino Modoki” a couple years ago (Science or Nature). I suspect you have seen it already if it is relevant to your search. If not I can find it for you.
One also could start the trend analysis from 1999. This takes the El Nino of 1998 out of the picture. 1999 and 2000 were La Nina years. So, in this case, one can’t really say the choice of the start year favours the result toward statistical significance of the difference between the trend from 1975 to present day and 1999 to present day.
One still gets some weak-to-medium statistical significance, above 70% or 80% depending on the data set, for the difference between the surface temperature trends. One also gets some weak significance for the difference relative to a zero trend in some of the sets, contrary to those who claim a “pause” and assert that the trend has become statistically indistinguishable from zero.
Trend analysis for recent warming:
The model-data differences for the 3 datasets are shown in Figure 4, using global latitudes, 90S-90N. All data have been smoothed with 5-year running-average filters. It should come as no surprise that the GISS- and NCDC-based curves are so similar—they use the same sea surface temperature dataset, NOAA’s ERSST.v3b. On the other hand, the HADCRUT4 data use the recently updated HADSST3 dataset. There’s another reason for the differences between the HADCRUT4 curve and the others: missing data is not infilled in HADCRUT4, while it is infilled in the GISS and NCDC products.