One of the favorite criticisms harped on by deniers is that global temperature isn’t rising as fast as computer models have predicted. So far, comparisons have shown that observed temperature is on the low end, even skirting the significantly low end, of model results. They generally use this to imply, or say outright, that not only are models “wrong wrong wrong” but the whole of climate science is “wrong wrong wrong.”
Of course it might be a valid criticism of the models, but not of global warming theory, which most decidedly does not depend on complex computer models. The models are just our best way of forecasting what the future will bring; they aren’t necessary to understand the physics behind man-made climate change, or to confirm it by many observations (not just temperature data).
But it’s still important to understand why models are diverging from observations, because models really are our best tool for knowing what to expect. A new paper by Cowtan et al. has identified one of the crucial reasons: much of the divergence is an artifact of how the comparison is done, and models aren’t actually diverging from observations by nearly as much as has been believed so far.
How could that be? It’s simple, really, when you realize that so far people have been comparing apples to oranges. Global temperature from model runs is the global average of surface air temperature (SAT), but global temperature from observational data is a blended average of surface air temperature over land and ice regions with sea surface temperature (SST) over open ocean. One thing we know, and have known for a long time, is that not only are SAT and SST different, they’re exhibiting different trends.
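To make that concrete, here’s a minimal sketch, in Python with invented toy numbers (nothing here comes from the paper or any real data set), of how a model-style SAT-only global mean differs from an observation-style blend of SAT over land and ice with SST over open ocean:

```python
import numpy as np

# Toy grid of four cells; every number below is invented for illustration.
ocean_frac = np.array([0.0, 0.2, 0.8, 1.0])    # open-ocean fraction per cell
area_wt    = np.array([0.3, 0.2, 0.3, 0.2])    # area weights (sum to 1)

sat = np.array([1.2, 1.0, 0.9, 0.8])           # surface air temp anomaly (K)
sst = np.array([np.nan, 0.7, 0.6, 0.5])        # sea surface temp anomaly (K)

# Model-style global mean: SAT everywhere.
sat_only = np.sum(area_wt * sat)

# Observation-style global mean: SAT over land/ice, SST over open ocean.
blended_cells = ocean_frac * np.nan_to_num(sst) + (1 - ocean_frac) * sat
blended = np.sum(area_wt * blended_cells)

print(f"SAT-only mean: {sat_only:.3f} K")      # 0.990
print(f"Blended mean:  {blended:.3f} K")       # 0.846, lower since SST warms more slowly
```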
When I heard about this, my first thought was “Of course. Why didn’t I think of that?” Because it really is one of those things that’s obviously true — obvious, that is, once someone thinks of it.
A good summary of the situation was given by one of the co-authors, Mike Mann:
A number of us had independently noticed that at least some of the apparent discrepancy in past comparisons of observed and modeled warming appeared to be an artifact of an apples-and-oranges comparison: Observational global average temperatures employ sea surface temperature (SST) over the oceans, while model-estimated temperatures have typically used surface air temperature (SAT) over the oceans. Since SSTs are warming more slowly than SATs (for physically-understood reasons) that leads to an apparent divergence between the two quantities.
As we learned of each other’s parallel efforts and joined forces, led by Kevin, this turned into a far more exhaustive and authoritative analysis by a team of leading experts. What we found is that it is highly non-trivial to do the comparison right. One key complication that arises is that the observations typically extrapolate land temperatures over sea-ice covered regions since the SST is not accessible in that case. But the distribution of sea ice changes seasonally, and there is a long-term trend toward decreasing sea ice in many regions. So the observations actually represent a moving target. To do this right requires treating the model temperature field in precisely the same way as the observations, which means using a time-dependent land/sea mask!
So suffice it to say that past comparisons of observed and model-predicted warming (including e.g. those shown in the most recent IPCC report) haven’t quite been correct. The apparent divergence between modeled and observed warming appears to be in substantial part an artifact of the apples-and-oranges comparison. Doing the comparison properly, we reconcile a large chunk (38%) of the discrepancy. The rest can easily be explained by other factors that have been examined in recent work, e.g. errors in the radiative forcing used in the model simulations and the fact that the models and observations have experienced different realizations of internal decadal variability.
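To picture the “moving target” Mann describes, consider a single polar ocean cell whose ice cover shrinks over time: its weighting shifts from air temperature toward sea surface temperature, so the blending mask has to change at every time step. A hedged sketch with made-up numbers:

```python
import numpy as np

# Invented monthly values for one polar ocean cell as its sea ice declines.
ice_frac = np.array([0.9, 0.7, 0.4, 0.1])   # fraction of the cell covered by ice
sat      = np.array([1.5, 1.6, 1.7, 1.8])   # air temperature anomaly (K), warming
sst      = np.array([0.1, 0.1, 0.2, 0.2])   # water anomaly (K), pinned near freezing

# Treat the model exactly as the observations are treated: SAT where there is ice,
# SST where there is open water, with weights that move along with the ice edge.
blended = ice_frac * sat + (1 - ice_frac) * sst
print(blended)   # [1.36 1.15 0.8  0.36]
```

Note that the blended series in this toy case actually trends downward even though the air above is warming, simply because the near-freezing water claims an ever-growing share of the weight. That’s the “moving target” in miniature.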
Indeed, while the central idea (compare like to like) is simple, getting it right isn’t. One of the difficulties is the switch from SAT over ice-covered ocean to SST over open ocean when the ice melts, a change which is certainly time-dependent and is itself trending. Another is that most global temperature estimates from observations use anomalies rather than absolute temperature. This too interacts with sea ice, because anomalies are computed relative to long-term average conditions, and the temperature of seawater underneath ice hardly changes at all, being constrained by the freezing point of water. As sea ice has declined, this has introduced a cool bias wherever the ice melts.
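A toy arithmetic example of that cool bias (all numbers invented for clarity):

```python
# Invented numbers for one polar grid cell, before and after the ice edge retreats.
baseline_sat, current_sat = -15.0, -12.0   # deg C: the air has warmed by 3 K
baseline_sst, current_sst = -1.8, -1.7     # deg C: seawater pinned near freezing

print(f"SAT anomaly while ice-covered: {current_sat - baseline_sat:+.1f} K")
print(f"SST anomaly once ice melts:    {current_sst - baseline_sst:+.1f} K")
# Switching the cell from SAT to SST wipes ~2.9 K off its anomaly, a spurious cool bias,
# even though nothing actually cooled.
```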
The situation is further complicated by the fact that for a proper comparison, one should process model data in the same way as the observational data set to which it’s compared. Some data sets use interpolation to infill sparsely observed or unobserved regions; others simply mask out those regions. Then there’s the issue of sea ice: should one use monthly averages as though sea ice were constant throughout the month, or allow for monthly variability in sea ice cover? The authors tested several choices, paying particular attention to emulating as closely as possible the procedure used for the HadCRUT4 data.
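As an illustration of the “mask out” option, here’s a sketch with a hypothetical coverage pattern; the real calculation would weight by grid-cell area and follow the HadCRUT4 procedure in detail:

```python
import numpy as np

# Invented model anomaly field (K) and an invented observational coverage mask.
model_field = np.array([[1.2, 0.9, 1.1],
                        [0.8, 1.0, 0.7]])
observed    = np.array([[True,  True, False],    # False = no observations,
                        [True, False,  True]])   # e.g. polar or remote regions

# Mask-out approach: average the model only where observations exist.
masked_mean = model_field[observed].mean()   # 0.90
global_mean = model_field.mean()             # 0.95

# The two differ whenever the unobserved regions (often the fast-warming Arctic)
# behave differently from the observed ones.
print(masked_mean, global_mean)
```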
Their figure 3 illustrates the magnitude of the effect:
Figure 3: Difference between global mean blended temperature and air temperature, for different variants of the blending calculation, averaged over 84 historical + RCP8.5 simulations. Blended temperatures show less warming than air temperatures; hence the sign of the difference is negative for recent decades. Results are shown for the four permutations of masked versus global and absolute temperatures versus anomalies (with variable sea ice in each case). Two additional series for the absolute and anomaly methods with fixed ice show that fixing the sea ice boundary eliminates the effect of using anomalies. The final series shows the HadCRUT4 method, which shows similar behaviour to the other anomaly methods.
All by itself, doing an apples-to-apples comparison reduces the discrepancy between models and observations. When, in addition, one re-computes model results using updated estimates of climate forcing, which have improved since the runs used for the IPCC reports, the discrepancy shrinks considerably.
Of course this is bad news for the deniers, who will find one of their favorite criticisms undermined. I expect a hissy-fit to follow. But it’s good news for the rest of us, because it means we can have more confidence in our best (albeit certainly imperfect) forecasts of what to expect in the future, and what will be the consequences of the actions we choose to address the growing problem. If only we can get the U.S. government to address the problem at all.