In the last post we looked at the recent temperature reconstruction for the Holocene, in particular the last 11,300 years, from Marcott et al. We noted that the changes over most of that very long time span were no bigger than, but a lot slower than, the changes over the last century or so. That means trouble.
We also mentioned that the “uptick” at the end of their “main” (the “Standard 5×5”) reconstruction was much larger than in their RegEM reconstruction, and that they had expressed doubt about its robustness. The uptick at the end (in 1940) is also larger than the instrumental data indicate, which is another reason to doubt its reality. Let me offer my opinion on why this difference exists. I could be mistaken, but this is what I think.
As I hinted earlier, it has to do with proxy drop-out over time. All 73 of their proxies cover the time span 5500 to 4500 BP (calendar years -3550 to -2550), but as time marches forward the number of proxies which still have data dwindles, especially toward the end of the reconstruction (1940), by which time only 18 proxies remain.
Suppose we were doing something similar for the years 1000 to 2000 AD, with only 3 proxy data sets, aligned during their period of common coverage, 1000 to 1800 AD. One proxy ends in 1800, another in 1900, and only one goes all the way to 2000. Let’s then compute their simple average (one of the methods used by Marcott et al.). We might get something like this (proxies as black lines, their average as red dots):
Note that the proxy which drops out in 1800 is the coolest of the three at that time. Hence when it stops, the average of what remains is artificially high after that time. This causes an “uptick” which is just an artifact of the averaging process.
The next dropout, in 1900, is again the cooler (at that time) of the two remaining proxies, so it leads to another uptick which, again, is just an artifact of the averaging process.
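The toy scenario above is easy to reproduce. Here’s a minimal sketch in Python, with made-up numbers (not the Marcott data): three synthetic proxies share a slow common trend but sit at different levels, and the coolest one stops reporting in 1800.

```python
import numpy as np

# Toy illustration of the dropout artifact (hypothetical data, not the
# Marcott proxies): three proxies on a common 1000-2000 AD grid, aligned
# over 1000-1800 AD, with the coolest proxy ending first.
years = np.arange(1000, 2001, 20)
trend = 1e-4 * (years - 1000)        # slow warming shared by all proxies

rng = np.random.default_rng(0)
offsets = [-0.5, 0.0, 0.5]           # proxy levels; the -0.5 one is coolest
ends = [1800, 1900, 2000]            # last year with data for each proxy
proxies = np.full((3, years.size), np.nan)
for i, (off, end) in enumerate(zip(offsets, ends)):
    keep = years <= end
    proxies[i, keep] = off + trend[keep] + 0.05 * rng.standard_normal(keep.sum())

# Simple average over whichever proxies still have data at each step.
avg = np.nanmean(proxies, axis=0)

# When the coolest proxy ends in 1800, the average of the two remaining
# (warmer) proxies jumps upward -- an artifact, not a real warming.
i_1800 = np.where(years == 1800)[0][0]
print("average at 1800:", round(float(avg[i_1800]), 3))
print("average at 1820:", round(float(avg[i_1800 + 1]), 3))
```

The jump at 1820 is roughly the 0.5°-ish offset of the departed proxy divided by three, even though no individual proxy warmed by anything like that amount.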
Now let’s look at the individual proxies from Marcott et al., aligned over the common period (5500 to 4500 BP), which go at least as far as the year 1800 (click the graph for a larger, clearer view):
In this graph, the black dots connected with lines are the individual proxies interpolated onto a 20-year time grid (as per the Marcott procedure). Any proxy which ends prior to 1940 has its final point circled: red if the proxy is “warm” when it ends, blue if it’s “cool” when it ends.
Below the individual proxies is their average, magnified and offset so that it doesn’t overlap the proxies themselves, but does show how the average changes over time.
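For concreteness, the gridding step for a single proxy looks something like this. The ages and values below are purely hypothetical, and linear interpolation stands in for whatever interpolation scheme Marcott et al. actually used.

```python
import numpy as np

# Putting one irregularly dated proxy onto a 20-year grid by linear
# interpolation. The ages and values here are made up for illustration.
ages = np.array([1712.0, 1745.0, 1790.0, 1838.0, 1905.0])  # proxy dates
vals = np.array([0.10, 0.05, -0.02, 0.08, 0.20])           # proxy temps

grid = np.arange(1720, 1901, 20)       # 20-year grid inside the proxy span
on_grid = np.interp(grid, ages, vals)  # one value per grid point

for y, v in zip(grid, on_grid):
    print(int(y), round(float(v), 3))
```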
Note that whenever “warm” proxies drop out, the average tends to decrease immediately afterward, and whenever “cool” proxies drop out, the average increases immediately afterward. For instance, three proxy series end in the year 1900, and two of them are on the “warm” side at that time. Therefore what remains is not so warm, so the following average (for 1920) is a bit lower than its predecessor. Five proxy series end in the year 1920, and four of those are on the “cool” side at that time. Therefore the following average (for 1940) is notably higher than its predecessor. In fact, the sign of every change in the average after 1800 matches what would be expected from whether more “cool” or more “warm” proxies drop out just before the change.
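That bookkeeping is simple enough to check mechanically. The sketch below uses made-up numbers (not the actual proxy values): at each step where proxies end, it compares the sign of the change in the average against the sign predicted from whether the ending proxies were, on net, cool or warm at that time.

```python
import numpy as np

# Hypothetical check of the sign rule described above: proxies that end
# while "cool" should push the next average up, "warm" ones down.
years = np.arange(1800, 1941, 20)
proxies = np.array([            # rows = proxies, NaN = proxy has ended
    [-0.3, -0.3, -0.2, np.nan, np.nan, np.nan, np.nan, np.nan],  # cool, ends 1840
    [ 0.4,  0.4,  0.5,  0.5, np.nan, np.nan, np.nan, np.nan],    # warm, ends 1860
    [ 0.0,  0.1,  0.0,  0.1,  0.1,  0.2,  0.2,  0.3],
    [ 0.1,  0.0,  0.1,  0.0,  0.1,  0.1,  0.2,  0.2],
])
avg = np.nanmean(proxies, axis=0)

for t in range(len(years) - 1):
    alive_now = ~np.isnan(proxies[:, t])
    alive_next = ~np.isnan(proxies[:, t + 1])
    dropped = alive_now & ~alive_next
    if dropped.any():
        # anomaly of the ending proxies relative to the current average
        net = proxies[dropped, t].mean() - avg[t]
        predicted = -np.sign(net)            # cool dropout -> average rises
        actual = np.sign(avg[t + 1] - avg[t])
        print(int(years[t + 1]), "predicted:", predicted, "actual:", actual)
```

In this toy case the prediction matches at both dropout steps; with real proxies the genuine trend in the survivors also contributes, which is why the match over 1800–1940 is worth remarking on.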
That, I believe, is the reason for such a large “uptick” at the end of the Marcott et al. standard reconstruction. The dropout of “cooler” proxies introduces an artificial warming into the result. The RegEM reconstructions infill the missing data, so they don’t suffer the same fate and don’t show the large uptick. They do show an uptick — but not nearly so large.
From what I’ve seen, this problem is only important at the end of the reconstruction. The outstanding agreement between the “standard” and “RegEM” reconstructions before then rules out any profound effect of proxy drop-out on the reconstruction prior to about the 20th century.
Incidentally, there’s another way to ameliorate the problem. Nothing can solve it perfectly, because proxies do drop out. But one partial solution is to transform the proxy data sets to their differences. Then at each time step we compute the average difference across the proxies which have data. Finally, we cumulatively sum those average differences to recover the average temperature (as an anomaly). I did this using the Marcott proxies, and compared it to the result using a straight arithmetic average:
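A minimal sketch of that differencing approach, assuming NumPy and hypothetical data (this is my own shorthand, not the code actually used):

```python
import numpy as np

# Differencing method: reduce each proxy to its step-to-step differences,
# average the differences across whichever proxies have data at each step,
# then sum the averages back up to get a temperature anomaly.
def difference_reconstruction(proxies):
    """proxies: 2-D array, rows = proxies, cols = time steps, NaN = no data."""
    diffs = np.diff(proxies, axis=1)          # per-proxy first differences
    mean_diff = np.nanmean(diffs, axis=0)     # average difference each step
    recon = np.concatenate([[0.0], np.nancumsum(mean_diff)])
    return recon - np.mean(recon)             # center for plotting

# A proxy ending early contributes NaN differences after its last point,
# so its disappearance no longer shifts the level of the average.
proxies = np.array([
    [-0.5, -0.4, -0.4, np.nan, np.nan],   # "cool" proxy that drops out
    [ 0.0,  0.1,  0.1,  0.2,  0.2],
    [ 0.5,  0.5,  0.6,  0.6,  0.7],
])
print(difference_reconstruction(proxies))
```

The key property: a proxy that ends early simply stops contributing differences. Its disappearance changes which proxies are sampled at each step, but it never shifts the level of the running sum the way dropping a cool proxy shifts a plain average.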
The “big picture” is unchanged — temperature changes over the last 10,000 years have been no larger, and much slower, than what we witnessed in the 20th century (as reported by thermometers). But the reconstruction since 1740 is quite different:
Most of the changes in the standard average are due to proxy drop-out. The reconstruction by the differencing method still shows an unmistakable uptick, but it’s no longer of unbelievable size. If we compare the reconstruction by the differencing method to that using RegEM, we see that they’re similar:
In particular, their end-of-reconstruction upticks are rather similar:
The fact is that the uptick in temperature at the end of the reconstruction period is a real feature of the proxy data used in this study. While the size of that uptick is inflated in the “Standard 5×5” reconstruction, it’s still there and it’s still real. The only real mystery is why the hell anyone should be surprised or disturbed about this. After all, we already know what happened in the 20th century.
Incidentally, a great deal of bullshit has been spread around about the re-calibration of proxy ages by Marcott et al. The truth is, it makes very little difference to the essential result. I computed the reconstruction by the differencing method using the “published” (i.e., as reported in the original sources) proxy ages to compare that to the result using the recalibrated ages. Here’s the result:
The main difference is in the distant past, when the recalibrated ages indicate an earlier end to deglaciation than the originally published ages. That far in the past, I suspect that using the latest radiocarbon dating calibration is a distinct advantage. For the last few centuries, the recalibrated ages give only a slightly larger uptick than the originally published ages:
As for the very large uptick in Marcott et al.’s “Standard 5×5” reconstruction, I quite agree it’s not correct, but even Marcott et al. expressed doubt about it. More to the point, it is not the point of the Marcott reconstruction. The point is to define the extent and rapidity of changes throughout the Holocene, in full knowledge that the most recent part is the least accurate because it has the fewest remaining proxies. For that purpose, all the reconstructions (including by the differencing method) agree.
As for the entirety of the Marcott et al. reconstruction, two points cannot be overemphasized. First: the point is to reconstruct temperature change over the entire Holocene, especially the distant past. This is hardly the final word on that subject, but it’s a good first step and a very strong indication that past changes didn’t happen as fast as what’s happening now. The exaggerated uptick in the “Standard 5×5” reconstruction is its least interesting feature, but it’s the most annoying to those who have an ideological reason to deny man-made global warming.
Second: we already know what happened in the 20th century.
“Grant, I find it just plain bizarre that you wrote all this and never even mentioned Steve McIntyre, who first figured out what Marcott had done wrong, and whose excellent work is the whole reason you wrote this.”
For your information, Davy boy, McIntyre’s contribution to this was limited to his every effort to discredit the entire reconstruction, to discredit Marcott and his collaborators, and of course his usual knee-jerk spasms at the sight of anything remotely resembling a hockey stick, liberally sprinkled with thinly veiled sneering.
Also for your information, the original version of this post mentioned McIntyre (and linked to his posts) extensively. But prior to posting I decided to remove that, since McIntyre had already fully explored the “low road.”