A few days ago I learned that JMA (the Japan Meteorological Agency) had released their global temperature data for October, according to which this year’s October beat the pants off any preceding:
I wasn’t sure what other organizations would report, but NASA just released their October update — and this year’s October beat the pants off any preceding:
We’re also on track — in fact it’s almost a lock — to set a new record hottest year. Here’s the year-so-far compared to preceding years:
By the way, this month wasn’t just the hottest October temperature anomaly on record in NASA data. It was the hottest month, period.
What do you think — will Congressman Lamar Smith try to issue a subpoena demanding all the emails from scientists at the Japan Meteorological Agency?
Thank you for posting this.
[Response: Your comment about it (on another thread) arrived after I had written this, but before I posted it. Which makes me glad, that the word is out there and is getting noticed.]
This is probably something trivial, but if October is the hottest month ever at about 0.81 K, then how can the year to date anomaly be over 1.0 K? Different baselines, or maybe measuring different things?
[Response: It’s the hottest monthly anomaly ever in NASA data, not at about 0.81, but at 1.04. The year-to-date for NASA isn’t over 1.0. And, NASA and JMA use different baselines.]
Greg,
You may be referring to the recent UK Met Office announcement that 2015 will be the first year >1.0C above the average ‘pre-industrial’ global surface temperature. UKMO baseline their anomaly for ‘pre-industrial’ on the period 1850-1900 using the HadCRUT4 data set (presently -0.31C): http://www.metoffice.gov.uk/hadobs/hadcrut4/
To the end of September, global surface temperatures in HadCRUT4 are 0.70C against the 1961-1990 average, which is the official WMO anomaly base as used by the UKMO for current global surface temps. Subtracting the -0.31C offset from this gives about 1.02C above 'pre-industrial'. Looks very likely that October will add further to this anomaly and that 2015 will indeed be >1.0C above the average temperature experienced in the pre-industrial world.
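To make the arithmetic explicit, here's a minimal sketch (a toy calculation using the rounded figures quoted above; presumably the unrounded monthly values account for the last digit of the 1.02C the UKMO quotes):

```python
# Re-referencing a temperature anomaly from one baseline to another.
anom_vs_1961_1990 = 0.70     # HadCRUT4 2015-to-date anomaly vs 1961-1990 (deg C)
preind_vs_1961_1990 = -0.31  # 1850-1900 "pre-industrial" mean vs 1961-1990 (deg C)

# Anomaly vs pre-industrial = anomaly minus the pre-industrial offset.
anom_vs_preind = anom_vs_1961_1990 - preind_vs_1961_1990
print(f"{anom_vs_preind:.2f} C above pre-industrial")  # 1.01 C with rounded inputs
```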
FWIW, the mean anomaly of the year to date is 0.822, according to my calculator chops.
http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
Tamino, you might want to correct your text to read "hottest monthly anomaly" instead of "hottest month", which is confusing.
A related question: Why is Roy Spencer’s satellite trend series so out of kilter with all the surface measurements?
http://www.drroyspencer.com/latest-global-temperatures/
Whenever NASA or whoever publish new record-breaking temperatures, skeptics can simply point to the satellites.
I’ve posted a link in the previous story that in turn contained a loooong link…
https://scontent-frt3-1.xx.fbcdn.net/hphotos-xfp1/v/t1.0-9/12227641_929761083764729_8414210017920648531_n.jpg?oh=757f682a14d052bb246fc4f1f46f9710&oe=56F49EF1
…to an RSS graph of water vapor in the atmosphere. Which makes me wonder whether those microwave-based RSS temperature calculations simply leave out all the energy contained in that atmospheric water vapor (…which would translate into temperature, IMHO).
Now, I’m manifestly no expert, but could it be that the RSS ‘temperature’ needed to be reconciled with and adjusted for those vapor measurements?
Oh the irony – could it be that Monckton had been wrong all along?! Could it be that there even had to be "evil" adjustments made to the RSS records? :)
What do the experts here think or know?
And then there's the question of where most of us actually live… in space, where the satellites live, or on the surface of the earth, where we, and actual thermometers, do?
If they competently point to the satellites, they will notice a warming trend a little smaller, but in the same range as the surface record.
Of course asking a denier to perform competent stats is asking a bit much.
Satellites don't measure surface temperature. They measure the mid-troposphere, and even then they aren't measuring temperature directly. They measure brightness in regions of the spectrum from which you may infer temperature, but only over a range of altitudes rather than one specific altitude. In fact, part of what made it possible for UAH to infer the absence of warming at one point was that they were using a part of the spectrum that included both the warming trend of the troposphere and the cooling trend of the stratosphere, with these trends largely cancelling each other out. Another problem was that they weren't properly taking into account the time of day at which the measurements were taken.
Then there is the issue of orbital decay: to properly interpret the readings you have to take into account how the orbit has changed over time. Then you need to consider the fact that, given the limited lifespan of the satellites, satellite-based temperature records have to be stitched together from different satellites' records, and those records won't necessarily match up; their instrumentation and orbits are likely to differ.
From what I understand, the UAH temperature record has been beset by a host of errors, and each time one of those errors has been corrected, the correction has moved the trend towards warming rather than cooling. RSS has typically been closer to the ground-based temperature record, but still, there is a great deal of modeling behind the so-called empirical, satellite-based temperature record, and even setting aside questions of accuracy, comparing it against the surface, where we live, is comparing apples to oranges.
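To see how a broad vertical weighting function dilutes a surface trend, here's a toy calculation. The trends and weights below are invented for illustration; they are not the real MSU/AMSU weighting functions or measured values:

```python
# Toy illustration: a satellite channel "sees" a weighted average of the
# temperature trends at different altitudes. Numbers are made up for
# illustration, not the actual MSU/AMSU weighting functions.
layers  = ["surface", "lower trop", "mid trop", "upper trop", "stratosphere"]
trends  = [0.16, 0.14, 0.12, 0.08, -0.30]  # assumed trends, deg C per decade
weights = [0.10, 0.30, 0.30, 0.20, 0.10]   # assumed channel weighting (sums to 1)

channel_trend = sum(t * w for t, w in zip(trends, weights))
print(f"apparent channel trend: {channel_trend:+.3f} C/decade")  # +0.080
```

With these made-up numbers the channel reports only half the surface trend, even though every tropospheric layer is warming; a mere 10% stratospheric contribution does the damage.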
I was going off memory but wanted to provide a good source. This morning I thought, "I know!"
https://www.skepticalscience.com/satellite-measurements-warming-troposphere-advanced.htm
That's a very good question, one I'd like to see some discussion of.
Also, why does the most recent revision of the UAH temperature series push up temps prior to ~2002 by about a third of a degree, and pull down temps after ~2005 by the same amount? Without that recent ‘adjustment’, it might even show 2015 as being hotter than 1998…
Note that Spencer is being a little presumptuous with his use of version 6.0 of his UAH TLT data. It is still in beta form, but he probably cannot resist using it because it yields far lower trends than his 5.6 version. The two versions, 5.6 & 6.0, as well as differing in method, represent vastly different height profiles. While these satellite data include readings from ground to stratosphere, they are weighted to measure different altitudes. v6.0 averages 4,500m altitude, so to call it TLT (Temperature Lower Troposphere) is a bit of a presumption; it really only differs from the mid-troposphere data by reducing the contribution of stratospheric data. (The RSS TLT is also high, averaging 4,200m.) Thus to compare UAH TLT with the thermometer record as though it were another surface measurement is a step too far.
Spencer is certainly having a bit of fun with his v6.0. The adjustments between beta2 and beta3 show a large divergence over the last 5 years, with beta3 gaining 0.03ºC of warming over the period, including a prominent seasonal cycle showing even greater adjustments for individual months. That's rather a lot to accept given v6.0 is being paraded as though it were a finished product. But Spencer is a known rogue who has done far worse. (For instance, he used to fit a fantasy cyclic trace to his TLT graphs, apparently "for a bit of fun.") From the size of these adjustments over a specific part of his data, I'd assume Spencer has managed to mess up his satellite orbit calculations again.
An interesting question. The wikipedia article on UAH makes comparisons with other data sets using a previous version of UAH, 5.4, and does not say anything about the latest version, 6.0 beta which has a much lower trend, especially recently, than versions 5.4 or 5.6.
Version 5.6 reputedly agrees with radiosondes, but that cannot also apply to version 6.0 beta, since the two versions diverge so much from each other.
These issues are no problem to Roy Spencer and his acolytes as version 6.0 beta tells them what they want to believe.
Spencer's October is actually pretty high already, considering the lag in the troposphere's response to El Nino. The first quarter of next year will probably show quite a different score, provided there isn't some new version 7 (or whatever) temperature series from them with strange new adjustments.
The satellites are not measuring surface temperatures – they are measuring the lower troposphere.
It’s also useful to remind the deniers that UAH is subject to a lot of statistical homogenisation and processing.
http://www.drroyspencer.com/2015/04/version-6-0-of-the-uah-temperature-dataset-released-new-lt-trend-0-11-cdecade/
I had a similar question a few posts ago. Surface temps are high this year owing to the strong el Nino event (on top of the long-term trend). El Nino seems to lag in the satellite records some months after it shows as warmer temps in the surface records. The big el Nino of 1997/98 began around May 1997, but did not begin to show in the satellite record until December 1997, while surface temps had been warm and getting warmer from the middle of that year, anomalies for both data sets peaking in 1998.
ENSO (El Nino Southern Oscillation) indices:
http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml
UAH satellite record monthly data:
http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.6.txt
NASA monthly surface data:
http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
If the lag time is consistent, we might see the satellite data spiking any time from now: from this month, perhaps, as the current el Nino started slightly earlier in the year, but likely by January. If it doesn’t show up, that might suggest something is amiss with the satellite data sets, as the last year’s high monthly temps show up in all the surface data sets – including Japan’s, which looks set to break the annual record this year, along with the others.
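For anyone who'd rather put a number on that lag than eyeball it, a cross-correlation sketch (shown here with synthetic series standing in for the surface and satellite anomalies; the file formats linked above differ, so parsing is left out):

```python
import numpy as np

def best_lag(surface, satellite, max_lag=12):
    """Return the lag (months) at which the satellite series correlates
    best with the surface series; positive lag = satellite lags."""
    best = (0, -np.inf)
    for lag in range(max_lag + 1):
        a = surface[:-lag] if lag else surface
        b = satellite[lag:]
        r = np.corrcoef(a, b)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

# Synthetic demo: "satellite" = "surface" shifted 6 months, plus noise.
rng = np.random.default_rng(42)
surf = np.sin(np.arange(240) / 10.0) + 0.1 * rng.standard_normal(240)
sat = np.roll(surf, 6) + 0.1 * rng.standard_normal(240)
print(best_lag(surf, sat))  # reports a lag of ~6 months
```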
Just point out to the “skeptics” that the satellite record is actually the output of a computer model, and they’ll lose all interest in it.
Sorry to say, but it does not work that way. For a cherry picker, whatever seems to support their view today is always right. And they do not see any inconsistency in praising the virtues of satellites for temperature measurements (by removing data points older than x years they can show a horizontal trend line, which actually has quite low statistical significance, but that is typically too complicated for them to understand) while at the same time claiming that satellite altimetry is flawed (it shows sea surface levels rising). Whatever goes seems to be the general logic. Of sorts.
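On that "quite low statistical significance": a quick synthetic demonstration of why short-window trends mean so little (white noise only; real monthly data are autocorrelated, which makes short windows even less significant than this shows):

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(360.0)                   # 30 years, monthly
# Synthetic series: 0.18 C/decade trend plus noise.
series = 0.0015 * months + 0.12 * rng.standard_normal(360)

for window in (60, 120, 240, 360):          # last 5, 10, 20, 30 years
    t, y = months[-window:], series[-window:]
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = resid.std(ddof=2) / (t.std() * np.sqrt(window))  # OLS slope std error
    print(f"{window:3d} months: {slope*120:+.3f} +/- {2*se*120:.3f} C/decade (2-sigma)")
```

With only five years of data the 2-sigma range swamps the trend; with thirty it doesn't come close. The cherry-picked "flat" windows live entirely in that first row.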
predacious plum: It’s not just UAH, but also RSS that has been lagging the surface temperatures. But that lagging is most pronounced starting around 2000, when I believe new satellites were brought into service. Tamino has a post somewhere showing the satellites started diverging also from the radiosondes (balloon borne instruments) at that time, with the radiosondes continuing to much better match the surface trend.
Concocting lower troposphere temperatures from satellite microwave readings summed over the entire column of the atmosphere is fraught. For one thing, the stratosphere cools with increasing CO2, so radiative transfer / climate models must be used to estimate how much to add in order to offset that cooling as observed in the whole-column sum. There are lots of other factors, such as satellite orbit changes, instrument aging, and so on. UAH and RSS until recently disagreed with each other's trends by 3X despite using the same satellite raw data.
Then there is the fact that the “lower” troposphere whose temperature is being estimated, is centered about 2 km up. It’s a bit uncertain whether that layer is expected to warm at the same rate as the surface. We do know that the lower troposphere temperature has higher variability than the surface temperature, for example responding to El Ninos and La Ninas more dramatically.
UAH and RSS traditionally have lagged major ENSO events by about six months. So they might spike in about March, exceeding their response to the 1997 El Nino, and establishing their parallelism with the surface temperature. Or they could fail to spike like they did in 1997/8, which would be a strong indicator that something indeed went wacky with the satellites around 2000.
Yes, satellite data has shown greater response to ENSO so, using the same methodology as previously used, I would expect them to show a temperature rise in association with the current el Nino that exceeds that of surface measurements.
Apples and oranges. Surface temps are not tropospheric temps. Dr. Roy's latest revision of his dataset substantially moved the sensitivity of his algorithm to points higher in the atmosphere, where the warming trend is lower (and even negative at the uppermost end).
Compare magenta line (old) to black line (new) and the dotted line (trend at altitude, balloon measured) to see what’s happening. With this broad altitude sensitivity, UAH *must* have a lower trend than the surface. It’s baked in.
I wonder if Rep. Lamar Smith will subpoena Dr. Roy’s emails to find out why he made that change in Revision 6.
Naaaah.
We can only hope…
Oops, forgot the link:

So it is the highest monthly anomaly, not the hottest month? I think you've had an article pointing out that the hottest month in absolute terms is usually July.
Regarding satellite data, there was a study in March that corrected the UAH data for diurnal drift and got a much greater rate of warming in the troposphere, one close to the other data sets. Although the new study agrees with some aspects of the RSS data, we're left with just that data set as being an outlier. I think that's proprietary data, though; I hope it can be properly analysed some time.
Mr Plum, here is an explanation from John Abraham: http://www.skepticalscience.com/uah-lowballing-global-warming.html
Where has NASA made their data available? The normal Global Land-Ocean Temperature Index I keep an eye on still doesn’t have an October update as I write this: http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
[Response: That’s where I got it. Something I learned a while ago is that some web browsers cache pages, and present the cached version so you may not get the update. If you hit “refresh” after loading the page, it will fetch the update if one exists.]
I cleared my cache a few days ago, and I’ve got the October anomaly showing.
@predaciousplum
Is this something to do with the way it treats El Nino, so we can expect it to increase rapidly over the next few months and eventually top 1998?
Flicking through the monthly plots at the Japan Meteorological Agency, my eyeball tells me that 2015 to-date is the hottest year in that record, too. 2 months to go.
http://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/ann_wld.html
(I know not to trust my eyeballs, but 7 months out of the last 10 have been the warmest on record – the last few months significantly so)
Looks to me like someone’s shortly going to win a bet …
“If annual average global temperature anomaly (land+ocean) from GISS exceeds 0.735 deg.C for two (not necessarily consecutive) years …”
See:
http://web.archive.org/web/20080409021707/https://tamino.wordpress.com/2008/01/31/you-bet/
PS. Tamino, feel free not to display this post. I don’t want to steal your thunder.
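For anyone wanting to check the bet's condition against the data, a sketch (the anomaly values in the dict are illustrative placeholders; the real annual means come from the GISS file linked elsewhere in this thread):

```python
# Sketch: has the GISS annual land+ocean anomaly exceeded 0.735 C
# in two (not necessarily consecutive) years?
annual = {2010: 0.72, 2014: 0.75, 2015: 0.87}  # illustrative values only

winners = sorted(yr for yr, anom in annual.items() if anom > 0.735)
if len(winners) >= 2:
    print("Bet condition met in:", winners[:2])
```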
OT, or maybe tangentially related, but I’ve just put up a new article on climate predictions and the success thereof:
http://hubpages.com/politics/Climate-Change-Predictions-How-Accurate-Are-They-Really
Suggestions, editorial observations and support in the comment thread would all be welcome–particularly since the project is a ‘challenge’ with a skeptic ‘Hubber’ and should be an ongoing conversation.
The NASA/GISS October anomaly map is certainly consistent with what I experienced here in San Diego — heat/humidity levels (in particular, abnormally high overnight lows) that we simply shouldn’t experience in October.
It wasn’t just my imagination, either. A co-worker of mine who recently moved out here from Virginia got to complaining about the sticky weather (“What’s the deal with this? I thought that San Diego weather was always pleasant!”).
1.04 C is certainly pretty eye-popping, and is rightfully the headline number; looks to be the first-ever GISTEMP anomaly above 1 C (Jan of 2007 hit 0.97, but that’s the highest I spotted in a quick scan of LOTI.) Speaking of headline numbers, any actual MSM headlines yet?
Regional variation remains, though–and 'Of course!' The US analysis from NCEI–formerly NCDC–is interesting. Fourth-warmest US October on their record, with Washington state (again!) setting state records, and 14 states much above average. Here in the Southeast, though, it's been more or less average:
I’ll be looking to see what the upper air analysis shows when the NCEI global analysis comes out–as well, of course, as their headline number!
Answering my own question, the WaPo is on it:
https://www.washingtonpost.com/news/capital-weather-gang/wp/2015/11/17/record-crushing-october-keeps-earth-on-track-for-hottest-year-in-2015/
And to my surprise, WUWT allowed as how UAH racked up their warmest October ever:
http://wattsupwiththat.com/2015/11/03/global-temperature-report-october-2015-warmest-october-in-the-satellite-temperature-record/
I keep reading pleasantly factual stuff about global warming at the Washington Post, so I looked up who owns it (I couldn't believe that a Rupert Murdoch-owned paper would publish AGW stuff without a denialist spin). It's Jeff Bezos, of Amazon fame. Good on him.
WaPo has several real meteorologists on staff–see "Capital Weather Gang".
For those of us in the DC area, Capital Weather Gang is the place to go for weather/climate news and analysis. I saw Jason Samenow, who authored the linked piece, on a panel with Katharine Hayhoe this past spring.
An interesting thing about the monthly anomalies at JMA is the smooth growth each month from May on, from about 2012. To force something like that, either we are rolling 13s or the forcing is killing natural variability. (From RR)
Just eyeballing the graph (with all the caveats that implies), it appears that variability is much less in NH spring and summer (April – August).
Wurble,
If the UK is at all representative of the NH (or perhaps the middle-high latitudes), it is the minimum temperatures that cause most of the "change in variability" (in that minimums get a lot less variable through the summer). Variability is greatest in February, and about equal for both maxes & mins. But while the minimum temperatures show a strong reduction in variability into the summer months (least variability June to August), maximum temperature variability shows a second, smaller peak in the summer (peaking in July). Thus summer maximum temperatures are about twice as variable as summer minimums, and it is the collapse in minimum-temperature variability that is responsible for the majority of the "change in variability" through the year.
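For anyone who'd rather compute that seasonal pattern than eyeball it, a sketch (synthetic data shaped like the pattern described; run the same thing separately on a max series and a min series to compare them):

```python
import numpy as np

# Per-calendar-month variability of a temperature record.
# `data` is assumed to have shape (n_years, 12): one value per month.
rng = np.random.default_rng(1)
winter_heavy = np.array([1.5, 1.6, 1.2, 0.9, 0.7, 0.6,
                         0.6, 0.7, 0.9, 1.1, 1.3, 1.5])
data = rng.standard_normal((50, 12)) * winter_heavy  # synthetic: variable winters

monthly_sd = data.std(axis=0, ddof=1)
for month, sd in zip("JFMAMJJASOND", monthly_sd):
    print(month, f"{sd:.2f}")
```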
GISS went public without data from Brazil and Greenland. Yesterday, when data from those countries arrived, Nick Stokes' TempLSmesh index (which fairly closely emulates GISS) ticked up by 0.017 C. Thus, it is not unlikely that GISS's October will be 1.06 by the next monthly update…
More Brazilian data just came in, TempLSmesh ticked up another 0.007 C.
The potential GISS-rise is now +0.024 (to be added to 1.04 C)
Tamino, it would be fun for you to dismember the claim by one Douglas Keenan that global temperature is purely a random walk: http://www.informath.org/Contest1000.htm. (But I recommend you not waste $10 entering his contest, because it’s probably rigged.)
Apparently Keenan has been making such claims for a while. ATTP had a concise retort: https://andthentheresphysics.wordpress.com/2015/08/06/personal-attacks-on-met-office-scientists/#comment-59986
James posted a comment at WTF, pointing out that Tamino already addressed the random walk claim five years ago: https://tamino.wordpress.com/2010/03/11/not-a-random-walk/
While I am no expert, I would suggest that Keenan is a bit of a cretin to think he can hide his underhand methods.
I reason it thus.
If a set of data comprises 1,000 time series that are random walks, the deviations from the start-point would yield a scatter centred on the start-point, of some statistical form, and of width dependent on what we can call the size of the steps. The size of step can, of course, vary between different series, adding a small complication. But Keenan the Cretin is using a more complex random walk, something that will give the look of a 135-year global temperature record. So effectively it has long and short steps intertwined. Longer steps are trend-like, and it is not impossible that Keenan the Cretin has managed to incorporate trends into that random walk, which would invalidate his model. But it would take a sight of the model he uses to confirm that.
His trick with the 'serious money' bet is that some of his 1,000 time series have a trend added on to the randomness, a trend that he says averages either +1ºC/century or -1ºC/century. This is stated unambiguously. As the trend is only an 'average' for each series, the trend is itself an additional source of randomness, although this may or may not imply that there is additional randomness at the end of the series.
Now Keenan the Cretin feels he is safe with his $100,000 bet. Nobody is going to be able to unpack the three sets of data well enough to correctly allocate 900 of the 1,000 series as having trends or not. He is further safe from guessing, as the chance of randomly hitting a correct allocation for 900 series is negligible with so many interwoven series.
But do I spy signs of cheating?
(1) A histogram of the end-points looks ridiculously noisy for a population of 1,000. (It's a little beyond my pay-scale to calculate the unlikelihood. The trend peaks are quite minor, so ignoring them, 0.1-wide buckets for the end-points would represent something like ~0.12 SD, yet the central eight buckets vary thus: 57, 46, 54, 47, 48, 59, 39, 32. What are the chances of that?)
(2) If a series includes a trend of either +1ºC/century or -1ºC/century as described, the four most deviant series, when such a constant trend is subtracted, do not look random at all. Least squares puts these de-trended trends as statistically significant to 36, 41, 46 & 65 SDs. By my understanding, that makes it impossible.
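Point (1) above is easy to sanity-check by simulation; a sketch with a plain Gaussian random walk (Keenan's actual generating model is more elaborate and not public, so this is only a baseline for how noisy such a histogram "should" be):

```python
import numpy as np

# Endpoint spread of 1,000 plain Gaussian random walks of 135 steps,
# to gauge how much bucket-to-bucket noise an endpoint histogram shows.
rng = np.random.default_rng(7)
n_series, n_steps, step_sd = 1000, 135, 0.1

endpoints = step_sd * rng.standard_normal((n_series, n_steps)).sum(axis=1)
# Endpoints ~ N(0, step_sd * sqrt(n_steps)), i.e. an SD of about 1.16 here.
counts, _ = np.histogram(endpoints, bins=np.arange(-0.4, 0.5, 0.1))
print(counts)  # central buckets; rerun with other seeds to gauge the spread
```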
Regarding that trend-spotting contest I pointed to: Now I see that the contest itself has nothing to do with global temperature, because all 1,000 time series that Keenan wants you to analyze were artificially generated by him, some purportedly with trend and some without. So the contest really is rigged, because he can make those series' trends arbitrarily difficult to detect, and none of them is actual climate data, nor probably shares any characteristics of climate data. He might as well challenge everyone to do more pushups than him. I bet he uses one of these new exercise devices: https://youtu.be/jby0I-zLj9c
Still it would be fun for you to comment on the textbooks he cites as demonstrating lack of trend.
Of course if someone impartial were to prepare the data sets, then it would be an interesting exercise. Actually no, it would be a fairly trivial exercise. If the data sets are actually produced by a denier, they probably just generate them 10,000 times, and choose the ones that best support their argument. Maybe 100,000 times…
OMG, I just read Keenan’s “paper” (linked from his site) claiming that there is no “purely” statistical basis for concluding there is a trend. His argument is Monckton quality, so there’s no point in addressing it. But the two textbooks he cited in his post might actually be interesting to debunk, though I have not read them so they might be just as Monckton-like as Keenan’s argument.
I read his paper a while back and haven't read it again, but IIRC he used an example of a fair die. By itself it's quite an interesting example. However, what he seems to completely ignore is that determining if a die is fair requires having a model of how a fair die should behave. That the model is random doesn't change this; it's still a model. So, if anything, his example illustrates how you actually need a model to determine whether the evolution of a system is in some sense significant. He then completely ignores this when he applies the example to the surface temperature dataset and seems to think that we can determine the significance of the temperature record using statistics alone.
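The die example is worth spelling out, because even the "purely statistical" test smuggles in a model: the model that every face has probability 1/6. A sketch of the standard chi-squared check:

```python
import random
from collections import Counter

# Testing whether a die is fair REQUIRES a model: here, uniform
# probabilities of 1/6 per face. Chi-squared measures departure from
# that model's expected counts.
rolls = [random.randint(1, 6) for _ in range(600)]
counts = Counter(rolls)
expected = len(rolls) / 6  # 100 per face under the fair-die model

chi2 = sum((counts[f] - expected) ** 2 / expected for f in range(1, 7))
print(f"chi-squared = {chi2:.2f}; 5% critical value for 5 d.o.f. is 11.07")
```

Without the fair-die model there is no "expected" to compare against, which is exactly the point: significance is always relative to some model, whether for dice or for temperature series.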
Lots of comment on the net about the anomaly, Tamino, but what was the actual global temperature change from Sept 2015? rgds
ATTP tweeted that Richard Telford posted a series on Keenan’s claim of lack of “evidence” for trend in temperature data, back in 2013: https://quantpalaeo.wordpress.com/?s=doug+keenan. Here’s a tidbit from Richard: “It would seem a mistake to reject a linear trend model that nobody thinks is perfect in favour of an ARIMA(3,1,0) process that violates the first law of thermodynamics.”
Doug Keenan has form as a serial attacker of climate scientists:
https://andthentheresphysics.wordpress.com/2015/08/06/personal-attacks-on-met-office-scientists/
In addition to that search that Tom Dayton mentioned above:
https://quantpalaeo.wordpress.com/?s=doug+keenan
(and which is also mentioned in the ATTP article), I remember first hearing of Doug Keenan as being responsible for that debacle concerning Phil Jones, Wei-Chyung Wang, and the lost Chinese station records that arose out of the Climategate e-mails.
He seems a rather nasty piece of work altogether, if you ask me.
Another view: http://gergs.net/2015/11/hottest-month/