A new paper by Wilson et al. combines data from paleoclimate reconstructions over the last 1000+ years, by a number of different researchers, to create a sort of “consensus” reconstruction. Hence the name: Northern hemisphere TREe-Ring Network Development, or N-TREND. As pointed out by And Then There’s Physics, it’s not groundbreaking or earth-shattering; it pretty much tells the story we already knew.
What’s different about this effort is that it’s limited to tree-ring data. I agree there’s merit in focusing on that specific proxy, but it highlights some weaknesses of the approach. If you want a global or even hemispheric temperature history, and if you want it to cover each full year, tree rings are far from ideal. They tend to reflect seasonal temperature most strongly, and they’re most useful for a limited latitude range. That’s why the paper makes it clear that the new combined reconstruction only covers latitudes from 40N to 75N (about 16% of the globe’s surface), and only reflects temperature during the four months from May through August.
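In case anyone wants to check the 16% figure, it’s just the fraction of the sphere’s surface that lies between those two latitudes:

```python
from math import sin, radians

# fraction of the globe's surface between latitudes 40N and 75N
frac = (sin(radians(75)) - sin(radians(40))) / 2
print(f"{frac:.1%}")   # about 16.2%
```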
One advantage of tree rings is that they do give annual resolution, which makes them good for identifying changes on a time scale of a few years, such as the impact of volcanic eruptions. Nonetheless, it seems to me (and to many in the research community) that the advantages of multi-proxy reconstructions (greater geographic and year-round coverage, to name a few) outweigh the disadvantages.
Their reconstruction gives this:
They smooth it using a 20-year spline, which gives results nearly identical to other methods on the same time scale, for instance a lowess smooth:
Even with this relatively brief smoothing time scale, it emphasizes the “hockey stick” nature of climate change.
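For anyone who wants to play along at home, a lowess smooth at a chosen time scale is easy to get from statsmodels. Here’s a minimal sketch, assuming the annual reconstruction is already loaded into numpy arrays year and temp (the names are mine; the data aren’t reproduced here):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def lowess_smooth(year, temp, timescale=20):
    """Lowess smooth whose local window spans roughly `timescale` years."""
    span = year.max() - year.min()
    frac = min(1.0, timescale / span)   # window width as a fraction of the record
    return lowess(temp, year, frac=frac, return_sorted=False)
```

With timescale=20 this should land very close to the 20-year spline; timescale=100 gives the centennial-scale version discussed below.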
I prefer a 30-year time scale, more in line with the traditional time to define climate. It doesn’t look much different; here are 30-year moving averages together with a 30-year time scale lowess smooth:
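The 30-year moving average itself is a one-liner if the series lives in pandas (again assuming a hypothetical annual Series temp indexed by year):

```python
import pandas as pd

# `temp` is a hypothetical annual pandas Series indexed by year
def running_mean(temp, window=30):
    """Centered 30-year moving average; the endpoints come out as NaN."""
    return temp.rolling(window=window, center=True).mean()
```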
What I find most interesting is to smooth on a more centennial time scale, or even longer. This can have the drawback of failing to capture some of the more recent, faster changes. But I’ve been playing around with various smoothing ideas, and one I’m toying with is what I’m calling an “optimal spline” smooth (someone has probably already invented it, but I’m having fun with it). Here’s a centennial-scale lowess smooth, compared to 30-year running means:
Here’s a new approach:
That’s a hockey stick for you.
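I won’t spell out the “optimal spline” here. Purely as a generic stand-in for the idea of letting the data choose the smoothness, here’s a cubic smoothing spline whose smoothing factor is picked by a simple cross-validation grid search; it’s a sketch only (function and parameter names are mine), not the method behind the figure above:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def cv_spline(year, temp, s_grid, n_folds=5):
    """Cubic smoothing spline with smoothing factor `s` chosen by
    K-fold cross-validation over `s_grid` (larger s means smoother)."""
    year = np.asarray(year, dtype=float)
    temp = np.asarray(temp, dtype=float)
    folds = np.arange(len(year)) % n_folds
    best_s, best_err = None, np.inf
    for s in s_grid:
        err = 0.0
        for k in range(n_folds):
            train = folds != k
            spl = UnivariateSpline(year[train], temp[train], s=s)
            err += np.mean((spl(year[~train]) - temp[~train]) ** 2)
        if err < best_err:
            best_s, best_err = s, err
    return UnivariateSpline(year, temp, s=best_s)
```

Something like smooth = cv_spline(year, temp, np.logspace(0, 3, 20))(year) then gives the fitted curve; the choice of grid is up to you.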
As far as the global warming discussion is concerned, the salient point is this: that all the foregoing make it abundantly clear just how utterly stupid is the too-often repeated idiocy (a favorite of Ted Cruz and other deniers) that “climate is always changing.” Of course it is. But not like it is now. The medieval warm period is there (no surprise, especially since this reconstruction only covers northern-hemisphere mid-latitudes). The little ice age is there. So too is modern, man-made global warming, starting around 1850 — and it rather dwarfs those other events.
I suppose I could point out that of course politicians have always been idiots. But not like they are now.
If you like what you see, feel free to donate at Peaseblossom’s Closet.
The first thing I noticed was how close to the present the graphs run. I wondered if instrumental data were spliced in at the end to make up for “the divergence problem”. Thanks to the link to the paper, I was able to quickly search for “divergence” and found this:
“the N-TREND2015 reconstruction shows almost no late 20th century divergence (D’Arrigo et al., 2008 – see also discussion in Wilson et al., 2007) from the instrumental data (Fig. 2D+E) and is statistically more robust for a longer period of time. In this respect, N-TREND2015 therefore represents a substantial update of these earlier studies”
Very interesting!
“The little ice age is there. So too is modern, man-made global warming, starting around 1850 — and it rather dwarfs those other events.” I partly agree. One could argue that the record shows a warming of about 0.5 K from 750 to 900 (due to what forcing?), and that this lies within the “green” of natural variability. So the increase in temperature up to 1950 would also be within this “green”. The strongly GHG-forced trend would then start after 1970 or so; see your first figure.
Frank,
I would be careful about taking trends too near the beginning of the dataset too seriously. Error bars are high there, and we don’t know what temperature was doing just before it started to climb.
Almost the same result with another method: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0146776#pone-0146776-g004
For some time now I have been using a smooth that attempts to show a best estimate of the rate of change as the tangent to the smooth. For this data the method almost exactly reproduces your lowess smooth shown in your 4th figure alongside the 30-year moving averages.
To construct it I take a linear slope over typically 60 years, although for this data I used 100 years; I do not like using the method with less than a 60-year interval. The motivation is that over spans of up to 100 years the data are roughly quadratic, because climate changes so slowly, and for a quadratic the slope at the midpoint of the range is the same as the linear least-squares (LMS) slope.
Construction is very simple for annual data, especially on a spreadsheet, which I often use:
a) calculate the slope at each year;
b) the temperature at each year is then the cumulative sum of the previous slopes;
c) finally, add an offset so the average of the smooth and the original data are the same;
d) there is of course an end-point problem, since the method requires taking a slope at the midpoint of a range and at the ends I only have half a range. I just assume the best slope at the end is the slope of the previous years. This keeps the method very simple and still reasonably accurate, as the smoothing morphs from assuming the data is quadratic to assuming it is linear as the time frame shortens.
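If I have followed the recipe correctly, a rough sketch of it in Python (with a hypothetical annual array temp and a 100-year slope window) would look something like this:

```python
import numpy as np

def slope_integration_smooth(temp, window=100):
    """Sketch of the slope-integration smooth described above:
    a) least-squares slope over a centered window at each year,
    b) cumulative sum of those slopes,
    c) offset so the smooth and the data share the same mean,
    d) near the ends, the nearest full-window slope is simply reused."""
    temp = np.asarray(temp, dtype=float)
    n = len(temp)
    half = window // 2
    slope = np.full(n, np.nan)

    # a) linear least-squares slope over the centered window
    for t in range(half, n - half):
        seg = temp[t - half:t + half + 1]
        slope[t] = np.polyfit(np.arange(len(seg)), seg, 1)[0]

    # d) crude end treatment: carry the nearest full-window slope outward
    slope[:half] = slope[half]
    slope[n - half:] = slope[n - half - 1]

    # b) integrate the slopes, then c) match the overall mean
    smooth = np.cumsum(slope)
    smooth += temp.mean() - smooth.mean()
    return smooth
```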
Digging into it I found the method is equivalent to using a weighted mean with a parabolic weighting function except near the ends of the time series. The exact form of the weight ensures the condition that the tangent to the smooth is the linear slope at that time. I do not think it surprising that the outcome is similar to the Lowess. As is obvious from my method of construction I do not bother to calculate a weight.
I have been hoping you might produce an article where a comment explaining this procedure would not seem too out of place, and to ask whether you think the method is reasonable.
I have always been curious about the justification for the lowess method (I found it rather artificial and have not succeeded in finding an article explaining it). Noting the similarity between this method and the lowess is the best I could do myself, and I was hoping you might comment on that as well.
I bet a Kalman smoother would produce a result similar to the optimal spline, but it will depend in part on what’s chosen as the ratio of observation variance to state innovation variance. For a local-level model (about as non-committal as one can get), 10:1 is a good start.
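In case anyone wants to try it, here is a minimal hand-rolled sketch of exactly that: a local-level Kalman filter followed by a Rauch-Tung-Striebel smoothing pass. The overall variance scale and the initialization are crude choices of mine; only the 10:1 ratio comes from the suggestion above.

```python
import numpy as np

def local_level_smooth(y, ratio=10.0):
    """Kalman filter + RTS smoother for a local-level model.

    State:       mu[t] = mu[t-1] + eta[t],  eta ~ N(0, q)
    Observation: y[t]  = mu[t]   + eps[t],  eps ~ N(0, r)
    `ratio` is r/q, the observation-to-innovation variance ratio.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    r = np.var(y)            # crude scale for the observation noise
    q = r / ratio            # state innovation variance

    # forward Kalman filter
    m = np.zeros(n)          # filtered state means
    p = np.zeros(n)          # filtered state variances
    m_pred = np.zeros(n)     # one-step-ahead predicted means
    p_pred = np.zeros(n)     # one-step-ahead predicted variances
    m_prev, p_prev = y[0], r # rough initialization
    for t in range(n):
        m_pred[t], p_pred[t] = m_prev, p_prev + q
        k = p_pred[t] / (p_pred[t] + r)          # Kalman gain
        m[t] = m_pred[t] + k * (y[t] - m_pred[t])
        p[t] = (1.0 - k) * p_pred[t]
        m_prev, p_prev = m[t], p[t]

    # backward Rauch-Tung-Striebel smoother
    ms = m.copy()
    for t in range(n - 2, -1, -1):
        g = p[t] / p_pred[t + 1]
        ms[t] = m[t] + g * (ms[t + 1] - m_pred[t + 1])
    return ms
```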
“I suppose I could point out that of course politicians have always been idiots. But not like they are now.”
So, that sounds like another hockey stick graph in the making….