Quite a bit of attention was paid to the fact that in the global temperature data from NASA and from NOAA, 2014 turned out to be the hottest year on record. Even more attention ensued because so far this year has also been rather toasty world-wide, so much so that by February we had set new record highs for average temperature over a 12-month period (sometimes called 12-month running means). Simply put, March 2014 through February 2015 was the hottest 12-month period on record in both NASA and NOAA data. Then, when the numbers arrived for March of this year, it turned out we topped even that record: April 2014 through March 2015 became the new hottest 12-month period on record in both NASA and NOAA data. Here’s the NASA version (click the graph for a larger, clearer view):
Let’s put a red dot with an “x” through it on each month we set a new record for hottest 12-month period yet (click the graph for a larger, clearer view):
Many of the dots are crowded together; here’s a close-up of data since 1970 (click the graph for a larger, clearer view):
This made me curious to find, not just the times when we first reached a new high temperature world-wide, but the times when we last saw a low temperature world-wide. In case you too are curious, here’s the graph:
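For anyone who wants to reproduce the dot-marking, here is a minimal Python sketch of the bookkeeping — using a made-up random-walk series as a stand-in, not the actual NASA or NOAA data. It flags both kinds of points: months whose 12-month running mean exceeds everything before it, and months whose mean is lower than everything that comes after it.

```python
import numpy as np

# Hypothetical monthly anomaly series, standing in for the NASA/NOAA data
rng = np.random.default_rng(0)
anoms = np.cumsum(rng.normal(0.002, 0.1, 240))   # 20 years of monthly values

# 12-month running means: average of each consecutive 12-month window
run12 = np.convolve(anoms, np.ones(12) / 12, mode="valid")

# Record high: a 12-month mean exceeding every earlier one
prev_max = np.concatenate(([-np.inf], np.maximum.accumulate(run12)[:-1]))
record_high = run12 > prev_max

# "Last seen" low: a 12-month mean lower than everything that comes after
suffix_min = np.minimum.accumulate(run12[::-1])[::-1]   # min of run12[i:]
record_low = run12 < np.concatenate((suffix_min[1:], [np.inf]))
```

Note the asymmetry: the record highs are computed against the past, the lows against the future — which is why, as discussed in the comments below, the lower bound can change shape when new data arrive.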
I do feel compelled to mention that all this talk of hottest this and coldest that means little. What really counts is the trend. Here are two estimates of the actual trend, one by a lowess smooth, another using a piecewise-linear fit:
Clearly the two trend estimates are in very good agreement. As for which is closer to the truth … that’s an open question which will remain open. But I will say that these smooths represent just about all we can really say we know about the trend. It might have shown more complex changes, but then again it might not. All we can say with confidence (in the statistical sense) is what is shown on that graph.
For the curiosity of the curiouser, we can put the two together:
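For readers who want to experiment with the two trend estimates themselves, here is a rough numpy-only sketch on synthetic data. The `loess` function is a bare-bones local linear smoother in the spirit of lowess (the real lowess adds robustness iterations), the breakpoint year 1970 in the piecewise fit is supplied by hand rather than estimated, and all function names here are my own invention, not the methods actually used for the graphs above.

```python
import numpy as np

def loess(t, y, frac=0.3):
    """Bare-bones local linear regression with tricube weights
    (in the spirit of lowess, minus the robustness iterations)."""
    n, k = len(t), max(2, int(frac * len(t)))
    out = np.empty(n)
    for i in range(n):
        d = np.abs(t - t[i])
        idx = np.argsort(d)[:k]              # the k nearest neighbours
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3
        sw = np.sqrt(w)
        X = np.column_stack([np.ones(k), t[idx] - t[i]])
        coef = np.linalg.lstsq(X * sw[:, None], y[idx] * sw, rcond=None)[0]
        out[i] = coef[0]                     # fitted value at t[i]
    return out

def piecewise_linear(t, y, b):
    """Continuous two-segment linear least-squares fit; the hinge
    term max(t - b, 0) lets the slope change at breakpoint b."""
    X = np.column_stack([np.ones_like(t), t, np.maximum(t - b, 0.0)])
    coef = np.linalg.lstsq(X, y, rcond=None)[0]
    return X @ coef

# Synthetic stand-in for the anomaly series: flat to 1970, warming after
rng = np.random.default_rng(1)
t = np.arange(1900, 2015, dtype=float)
y = np.where(t < 1970, 0.0, 0.017 * (t - 1970)) + rng.normal(0, 0.1, t.size)

smooth = loess(t, y, frac=0.3)   # each local fit uses ~0.3 * 115 ≈ 34 points
pw = piecewise_linear(t, y, 1970.0)
```

The `frac` argument also answers the natural question of how many points each local fit uses: roughly `frac` times the series length, so the choice of `frac` controls how wiggly the smooth is.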
One question: how many data points are approximately used for each lowess segment?
“This made me curious to find, not just the times when we first reached a new high temperature world-wide, but the times when we last saw a low temperature world-wide.”
Do you mean “last saw a ‘new high’ low temperature”?
Tamino’s reversing time for the lows. In other words a new high is higher than anything that came before, but in this chart a “new low” is lower than anything that comes after. There might be a better term that could be used. Compared to some of Tamino’s other great visualizations, I don’t think that lower bound is a good one because its past shape can change in the future (say we have a really cold month – that could erase several points from the lower bound line).
That’s a clean way to conceptualize it. When the trend is a warming one, ‘cool records’ will inherently appear time-reversed. (Not that I thought of it that way, till Greg pointed it out.)
btw: basically, the mean temperature data points themselves are the only truth at hand, however many measurement and averaging errors they may contain. No smoothing whatsoever is “true” unless we first define a criterion of truth; and even then truth is not binary but relative: some smoothings are truer than others.
When I think about it, it seems to me that predictive value, or skill, is the best criterion of “truth” for a smoothing algorithm. It should be possible to test algorithms against that criterion using known data.
Thinking further along that line, the algorithm with the best predictive value would be a simplified climate model with a small set of parameters to make the fit, which should produce a sufficiently smooth prediction. I wonder whether this is possible or makes sense.
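One crude way to operationalize that predictive-skill criterion (my own toy construction, not anything from the post): hold out the tail of a series, carry each smoother's last fitted value forward as a forecast, and score the squared error on the held-out points.

```python
import numpy as np

def holdout_skill(y, smooth, n_test=24):
    """Score a smoother by a deliberately crude forecast: carry its
    last fitted value on the training part forward over the test part."""
    train, test = y[:-n_test], y[-n_test:]
    forecast = smooth(train)[-1]
    return np.mean((test - forecast) ** 2)   # mean squared error

def moving_average(w):
    """A 'smoothing algorithm' parameterized by window width w."""
    return lambda y: np.convolve(y, np.ones(w) / w, mode="valid")

rng = np.random.default_rng(2)
y = 0.01 * np.arange(300) + rng.normal(0, 0.5, 300)   # trend + noise

score_short = holdout_skill(y, moving_average(5))
score_long = holdout_skill(y, moving_average(60))
# On trending data the long window lags behind, so its forecast
# tends to score worse; which smoother "wins" is data-dependent.
```

A more serious version would forecast by extrapolating the local slope rather than by persistence, and would average over many holdout windows.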
> a “new low” is lower than anything that comes after.
Hm. Not “unprecedented” — the opposite of that: “low not seen since” …
I nearly blogged on this last week, then thought I was too late. I often have my nose so close to the ‘tree’ that is Arctic sea ice that I fail to see the ‘wood’.
Seems I was wrong, it was worth blogging on. Thanks for the post Tamino.
It is worrisome that the red and blue dots converged. (Together they mark the limits within which the climate system has oscillated.)
Convergence of upper and lower temperature limits set the stage for a more rapid change in mean temperature.
This system has gone out of control —> warm. On the other hand, we can stop worrying about when the system will go out of control because here we have an excellent graphical solution showing the tipping happened circa 1970.
That is, by ~1990, we no longer have those extra cold years to make extra ice for the extra warm years. And, after ~1995, we have a lot of extra warm years.
This is a simple model, where any possible data rounding and smoothing errors apply to both control limits and thus are not likely to affect the difference between them in any given period.
What leapt out at me was that the “new records” tend to come in groups – a consequence of “spikes” of warming. How much further has the current spike got to run? And if the convergence of limits noted above is real, where will the new low fall?
[Response: The convergence of limits is because the lows converge on the present value — so it’s really just an artifact. But it’s an interesting plot.]
Happiness is a new Tamino post!
This one’s a beauty. Thanks, T.
The gap looks to be a bit thinner towards the end. Wonder if that is expected to continue.
Well, my Mk1 eyeball suggests that the peak to trough amplitude of the series has reduced in recent years (with 98 an exception). Might this not be expected as the ever-increasing GHG forcing begins to dominate over natural variation?
Vintage Tamino! Agreed, the final graph is fascinating — and clearly ‘The Pause’ ended in 1970, that’s 45 years ago folks! So the next time someone in the denialosphere says ‘What about the pause?’, we can say “This isn’t a pause… That’s a pause!” with a link to that graph.
(Apologies to Crocodile Dundee https://www.youtube.com/watch?v=POJtaO2xB_o)
I know it’s not your “thing”, but could you speak to us sometime about Bayesian smoothing, for example as used by this Australian poll aggregator? http://marktheballot.blogspot.com.au/2015/04/aggregated-polling-update.html
For that noisy and much less autocorrelated dataset, he claims Bayesian smoothing has advantages over lowess — mainly to do with the “wiggly end” problem. It also produces a nice dynamic uncertainty band. (But then he goes and applies a crude long numerical smooth to the Bayesian result…)
I would like to use the piecewise linear fitting in my work. I know you posted on this a few years ago. Can you point me to that earlier more complete explanation?
Yet another great post.
[Response: Here: https://tamino.wordpress.com/2014/12/09/is-earths-temperature-about-to-soar/ ]