For topics not related to other posts.
I have a general question or point for discussion here related to the sensitivity issue and the equilibrium temperature that the planet might reach per a doubling of CO2 from 280 to 560 ppm. This stems from a back and forth discussion that I had with Monckton on another blog site.
Monckton of course claims to be an expert on the sensitivity issue, and asserts with much authority that by his calculations, even after all feedbacks are included, the Earth won’t warm much beyond 1.1 to 1.2C at 560 ppm of CO2, and that we’ve already seen most of this. He bases this partially on the fact that at our current 392 ppm we’ve only seen about 1C of warming since 1750 or so, and claims that any additional CO2 will not matter all that much because of the logarithmic nature of CO2’s effect and because the net of all feedbacks will keep the temperature from rising much beyond where we are now.
I thought his entire basis was wrong (for many reasons), but the one I wanted to focus on for this post was the actual long-term or Earth-system feedbacks to increasing CO2. My contention is that, even if we stopped pouring more CO2 into the atmosphere right now and could somehow hold CO2 at 392 ppm, we’ve not yet seen all feedbacks (such as the cryosphere and biosphere) reach their equilibrium point, and they won’t for many decades at least. We know that Greenland and Antarctica are losing net mass, and that changes are underway in the permafrost and tundra of arctic areas which will take many decades or even much longer to reach any sort of equilibrium. Thus we do indeed still have more warming in the pipeline just from the CO2 we’ve already emitted into the atmosphere. What this means for Monckton’s claims about sensitivity is that we don’t even know the full extent of the warming we’ve already committed ourselves to at 392 ppm (we could have another 1C in the pipeline just from our current concentration level), so we can’t possibly know what 560 ppm would mean. In essence, Monckton’s “expertise” on the sensitivity issue must be seen as nonsense, and anyone who pals around with him and his nonsensical statements (i.e. Lindzen) will, at the very least, be tainted by association with it.
Numbers are always good with subjects like sensitivity. From memory, the 2010 anthropogenic positive forcing (Skeie et al 2011) stood at 3.25 w/sq m, equal to 85% of 2xCO2. To this shortfall from the 100% (so loved by Monckton & Lindzen) must be added two other factors.
Firstly, the anthropogenic negative forcings, which give a net anthropogenic forcing in the range ~2.75 – 0.5 w/sq m (equal to ~72% – 13% of 2xCO2). This issue involves great uncertainty. Note also that they include cloud effects, just like Lindzen’s beloved feedbacks do, but here resulting from aerosol emissions. An increase in cloud shininess usually appears as the biggest of these negative forcings, although the biggest uncertainty is in cloud longevity.
Secondly, there is the ‘pipeline’ warming. You talk of feedbacks that have yet to act, but the big factor here is a simple time lag – it takes time for the climate to reach equilibrium, mainly because of the thermal inertia of the oceans. (Note however that the slowest process to reach equilibrium is the melting of the ice caps and the resulting sea level rise.)
The sensitivity Monckton (does he actually describe himself as an expert?) talks of is actually the transient value, Transient Climate Response being the usually defined quantity. (See the IPCC AR4 http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-5.html and the page following, with estimates of the two ‘sensitivities’ plotted in Figure 10.25a.)
Accounting for this second factor by converting equilibrium to TCR (a factor of 0.6 shouldn’t be too controversial) drops Monckton’s 100% (already reduced to 72% – 13%) down to 43% – 8%.
Alternatively, Monckton’s beloved equilibrium sensitivity can be approximated directly (rather than expressing his foolishness in terms of the forcing ratio Present:2xCO2 that he says is 100%). This converts Monckton’s upper limit for sensitivity of 1.2 into the range 2.8 – 14.5. The central ‘best guess’ value (for a negative anthropogenic forcing of 1.75) would be 4.4, which is pretty much what the IPCC says it is.
Link to Skeie et al – http://www.atmos-chem-phys-discuss.net/11/22545/2011/acpd-11-22545-2011.pdf
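The back-of-envelope arithmetic above can be sketched as code. This is my own reconstruction, not the commenter’s worksheet: the 2xCO2 forcing, the transient-to-equilibrium ratio and the 1.2C warming figure are all assumptions read off the comment, and the results land close to, but not exactly on, the quoted figures.

```python
# My own reconstruction of the back-of-envelope arithmetic above --
# every constant here is an assumption read off the comment, not a
# quotation from it.

F_2XCO2 = 3.8        # assumed forcing for doubled CO2, w/sq m
F_POS = 3.25         # 2010 anthropogenic positive forcing (Skeie et al 2011)
TCR_OVER_ECS = 0.6   # assumed transient-to-equilibrium ratio
DT_OBS = 1.2         # Monckton's claimed warming limit, deg C

def equilibrium_sensitivity(f_negative):
    """ECS implied by observed warming, given a negative forcing (w/sq m):
    ECS ~= DT_OBS / (TCR_OVER_ECS * F_net / F_2XCO2)."""
    f_net = F_POS - f_negative
    return DT_OBS / (TCR_OVER_ECS * f_net / F_2XCO2)

# Negative forcings of 0.5, 1.75 and 2.75 w/sq m bracket the comment's range.
for f_neg in (0.5, 1.75, 2.75):
    print(f"negative forcing {f_neg}: ECS ~ {equilibrium_sensitivity(f_neg):.1f}")
```

The low and high ends come out near 2.8 and 15, broadly matching the 2.8 – 14.5 range quoted above; the exact figures depend on which 2xCO2 forcing and observed-warming values are assumed.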
I have a dataset of biological growth data that I am attempting to fit a curve to.
This dataset has a varying growth rate with size.
Previous works (Bradwell and Armstrong, 2007) have fit polynomials to the data to estimate growth rate as a function of size. These fits have not included error bars.
These are some examples:
I have made quite a few attempts at fitting the data I have, but the polynomial fits do not achieve statistical significance and are difficult to estimate error bars from. The balance I need to find is between fitting the data and matching the physical characteristics of the growth patterns observed elsewhere. I have used the biological package “Grofit” in R, which did fit with statistical significance but still does not explain much of the data (R² = 0.175).
Anyone have any ideas which preserve the two (statistical rigour/physical relationships)?
[Response: I’ve now looked at this data. Before I offer my opinion, I’m very curious to hear what readers have to say.]
You’ve got it about right. Monckton is ignoring that we have not even reached transient equilibrium for the current levels of CO2 much less the equilibrium or Charney sensitivity. Then there are the considerations of earth system sensitivity which takes into account the longer term feedbacks from melting ice sheets, methane releases, etc. etc.
RW, I looked at this, and frankly, I couldn’t see any sort of relationship at all. I would be fairly happy if my shotgun had such a pattern. The only way I could make sense of this with the types of polynomials you are suggesting would be if there were two separate populations here. However, I don’t see any sort of clean break.
I looked briefly at the data, and I think it would be difficult to justify a polynomial fit – indeed it’s hard to see much of any relation here, unless you have multiple populations. Even then, that would be a tough case to make. Frankly, it looks like a scattershot pattern.
I don’t see a relationship when I plot the 2nd column as a function of the first.
If these are lichen growth data as in your example chart, have you considered plotting the annual area growth as a function of the diameter? That is, add the diametral growth rate to the diameter, square that sum, and then subtract the diameter value squared. Plot the result as a function of the diameter.
It looks like a stronger relationship, but I don’t know how to deal with the increasing variance in the dependent variable as the diameter increases.
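For what it’s worth, the transform described above is simple enough to write down as a helper. The function name is mine, and the π/4 conversion only applies if the thalli are truly circular:

```python
def area_growth_index(diameter, diam_growth):
    """Annual area growth implied by a diametral growth increment:
    (d + dd)^2 - d^2. For a truly circular thallus, multiplying by
    pi/4 converts the index into square mm."""
    return (diameter + diam_growth) ** 2 - diameter ** 2

# e.g. a 10 mm lichen adding 1 mm of diameter in a year:
growth = area_growth_index(10.0, 1.0)  # 11^2 - 10^2 = 21.0
```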
Whoops, that’s supposed to be a reply to rw.
Monckton thinks there are no feedbacks.
RW: Like the others, I find no relationship between the “size” and “rate” data you’ve posted. Polynomial fits don’t give statistical significance, and the reason may well be that the relationship doesn’t exist. Regarding your statement that “package “Grofit” in R … did fit with statistical significance,” I’m skeptical of that statistical significance — because however I try, I find no evidence to support it.
Perhaps there is a relationship and perhaps you can demonstrate it with arguments from known biology and references to the literature. But you might have to accept the fact that you can’t prove it from these data.
I’m glad to have received the responses that I have. I would not conclusively claim that the best result is the one I have attempted to achieve. I recognize that this dataset may in fact not be reconcilable with our current knowledge of lichen growth.
Bradwell and Armstrong (2007) (Jrnl of Quat. Science.)
“Growth rates of Rhizocarpon geographicum lichens: a review with new data from Iceland”
are among the first to develop this supposed change in growth rate with size that is seen in the images I posted above. They produce a growth curve based on a polynomial fit to the datasets. Previous works have attempted other methods of lichenometry, and many have used linear growth rates through time (Rogerson et al, 1986) (McCoy, 1983) (Evans et al, 1999), which have been used to effectively date surfaces. That being said, Bradwell and Armstrong provide strong evidence that there is a change in rate through time. The data assembled and presented to you is from a very small number of lichens in a remote field area. However, there are intricacies which need to be further examined. I cannot seem to find a relationship which balances the physical relationships observed elsewhere (B&A 2007) with the dataset itself. I am no expert on statistics (as is clear), but I have attempted numerous methods to fit the data, to no avail.
Snarkrates brought up an interesting point and one that might have validity in the dataset. I’ve been told that the data is collected from two sites which are in relatively close proximity to each other but that may have a significant impact on the distribution itself. Particularly because lichen growth is sensitive to local climatic conditions. I will have to track down the exact differences and which rates correspond to which area.
All of this being said, it was nice to have discussion and some insight into the data.
Tamino: I didn’t articulate it correctly, nor did I analyze it in a way that is statistically accurate. I applied regression between the fitted rate values and the actual rate values using grofit and polynomials, and thought I could gauge statistical significance that way.
Clearly the progression for this dataset should be
(A) re-investigate the origins and split into the apparent two sites
(B) re-analyse individual sites
Do you have any advice on the best method to proceed to fit this data after I’ve split it?
The data was also fit using a quadratic fit with a log-normal distribution, and a Monte Carlo simulation was done for error bounds, but quadratic and cubic fits really didn’t seem to match the physical reality either…
[Response: I tried several methods (some of them independent of the distribution and model) and couldn’t establish any relationship at all between size and growth rate. You should be aware of two things. First, the noise level is large. Second, the number of data points is small. This means it will be very hard to detect any relationship.
Also be aware that any given set of data contains a limited, finite amount of information. No amount or type of statistical analysis can create information that isn’t there.
Splitting the data into two subsections means each will have even *fewer* data points. Unless doing so produces a dramatic (almost miraculous) separation, it will make the situation statistically worse, not better.
My opinion at this moment is that there just isn’t enough information in the data you have to establish any relationship. It’s not that you need some fancier or more sensitive analysis to find it — it’s just not there.]
Particularly regarding error bounds (advice, that is).
U.S. records warmest March; more than 15,000 warm temperature records broken
rw — I’d be very cautious about combining data from different sites.
Are there any other values you can associate with each data point? If these are growth of lichens or lichen colonies, do you have any data on sunlight, soil nutrients, ambient temperature?
Tamino, you may be interested in this recent paper about a possible 50-80 year oscillation in the global temperature record (especially since they try to explain the “near-constant global mean temperatures in recent years”). The paper is behind a paywall, but the authors may be willing to send a copy (firstname.lastname@example.org):
“We examine an oscillation of global mean temperature with a period of about two thirds of a century. We find evidence for the oscillation both in the instrumental temperature record and in an Earth System Model millennium simulation without external forcing. There is also evidence for the oscillation in the Central England Temperature record, the longest instrumental record available. Our method is based on a discrete Fourier transform with varying starting point and length of time window. This method allows us to make a quantitative estimate of the contribution of an oscillation to global mean temperature, to track the phase evolution of the oscillation and to compare measurement and model results. The multidecadal oscillation could provide part of the explanation both for near-constant global mean temperatures in recent years despite warming by rising concentrations of greenhouse gases and for declining global mean temperature in the 1950s and 1960s alongside with the explanation of aerosol cooling. Quantitative estimates of the contribution of the oscillation to global mean temperature vary between ±(0.03–0.17) K. For the instrumental temperature record, our results indicate an amplitude of 0.03 K presently if the IPCC model average represents the effect of external forcings well, and (0.08–0.17) K when using simple linear and quadratic fits for detrending. For the millennium simulation, the amplitude of the oscillation is (0.05–0.06) K, but could be underestimated as compared to reality if external forcing acts to globally synchronize multidecadal variability. The role of the Atlantic Multidecadal Oscillation (AMO) in the model is discussed. The AMO has a spatial temperature distribution similar to earlier literature results and is more correlated with the global oscillation when external forcing is included.”
S. V. Henriksson, P. Räisänen, J. Silén and A. Laaksonen. Quasiperiodic climate variability with a period of 50–80 years: Fourier analysis of measurements and Earth System Model simulations. Climate Dynamics, DOI: 10.1007/s00382-012-1341-0.
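For the curious, the core of the method the abstract describes (a Fourier projection at one target period over windows with varying start point) can be mocked up on synthetic data. Everything below — the 65-year period, the 0.1 K amplitude, the noise level — is invented for illustration; this is not the paper’s code or data.

```python
import numpy as np

# Toy version of the windowed-Fourier idea in the abstract: project a
# temperature series onto sine/cosine at one target period over windows
# of varying start point, and read off an amplitude from each window.
rng = np.random.default_rng(3)
years = np.arange(1850, 2012)
period = 65.0  # invented multidecadal period, years
temp = 0.1 * np.sin(2 * np.pi * years / period) + rng.normal(0, 0.1, years.size)

def window_amplitude(t, y, t0, length, period):
    """Amplitude of the sinusoid at `period` within the window [t0, t0+length)."""
    m = (t >= t0) & (t < t0 + length)
    yw = y[m] - y[m].mean()
    c = (yw * np.cos(2 * np.pi * t[m] / period)).mean()
    s = (yw * np.sin(2 * np.pi * t[m] / period)).mean()
    return 2 * np.hypot(c, s)  # single-frequency DFT amplitude

# Amplitude estimates from three overlapping 130-year (two-period) windows:
amps = [window_amplitude(years, temp, t0, 130, period) for t0 in (1850, 1862, 1880)]
```

With windows spanning a whole number of periods the projection is clean, and the recovered amplitudes scatter around the injected 0.1 K.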
Shorter: We applied a Fourier transform and saw a bump at around 50-80 years…sort of. And it has a magnitude of 0.03 K or maybe 0.05 K or maybe 0.17 K…we don’t really know. And we have no freakin’ idea what drives it or why it has that period or if it’s global or local. But we really need a publication, so this is it.
General announcement: Probably a few of you are familiar with the StackExchange family of question-and-answer sites, and especially the new and excellent http://stats.stackexchange.com. Well, there’s a proposal for a new Climate Change stackexchange site at http://area51.stackexchange.com/proposals/31977/climate-change. This proposal has huge potential to be a really useful resource for answering common (and not-so-common) questions on climate science and related fields. I urge you all to go there and sign up and submit and vote on example questions (that’s basically how the scope of the future site is defined), so that we get the site up and running really soon. Once there are enough users committed, and high-quality questions (based on user votes) submitted, the site will go into a beta run, and we can start using it.
It’s possible that this has already been mentioned here. If so, sorry for spamming, but I think the site is definitely worth plugging.
[Response: …Splitting the data into two subsections means each will have even *fewer* data points. Unless doing so produces a dramatic (almost miraculous) separation, it will make the situation statistically worse, not better…]
I spent a fair amount of time re-examining the raw data and investigating its origins. The two sites combined were in fact quite different in altitude, with one site being in a much harsher climate than the other. These factors are very important for growth data such as this, so I decided to split the two datasets on that basis. One had significantly more data than the other. I also had the opportunity to investigate the process for raw data acquisition, and noticed a few discrepancies that needed to be fixed. These issues actually made more of a difference than I was expecting. The big change is that the data from the harsh-climate site showed consistently less growth than at the site where most of the data was from. With the two combined, the relationship expected from previous work was not visible in the way I was expecting, which is why I tried so many different methods to extract information from the data.
The new data:
When you look at the data it is easy to see that removing the harsh-climate site data makes a world of difference. This result is very much in line with the physical reality I was expecting. I am thinking of using a polynomial fit (2nd order), but I was wondering if you have any advice on the best way to characterize the dataset now that the issues have been dealt with?
Whoops, the format of the data came out wrong. Here it is:
A quadratic (2nd-order polynomial) fit is now statistically significant. Cubic (3rd-order polynomial) is not as good, so I would stick with 2nd-order.
It looks as though the growth rate goes to zero as size goes to zero. In fact when I fit a 2nd-order polynomial and constrain it to have this property, the fit is better (as indicated by AIC). So, unless there is reason to believe that this model is incorrect, I’d do that instead of a straight 2nd-order polynomial fit.
Be very cautious about changing the data because they don’t fit your expected behavior. This is one of the most common ways researchers go wrong. When I see “noticed a few discrepancies that needed to be fixed” my statistical alarms start flashing red. Be absolutely sure you can objectively justify the data changes.
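If it helps, here is roughly what the model comparison described above looks like in code. The thread’s analysis is in R; this is an equivalent numpy sketch on made-up data, with AIC computed from the residual sum of squares under a Gaussian-error assumption.

```python
import numpy as np

# Compare an ordinary quadratic fit with one constrained through the
# origin (growth -> 0 as size -> 0), judged by AIC. Data are invented.
rng = np.random.default_rng(0)
size = rng.uniform(2, 60, 40)
rate = 0.02 * size - 0.0002 * size**2 + rng.normal(0, 0.1, size.size)

def fit_aic(X, y):
    """Least-squares fit; return (coefficients, AIC)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(((y - X @ beta) ** 2).sum())
    n, k = X.shape
    return beta, n * np.log(rss / n) + 2 * (k + 1)  # +1 for the noise variance

# Full quadratic: rate = a + b*size + c*size^2
X_full = np.column_stack([np.ones_like(size), size, size**2])
# Constrained through the origin: rate = b*size + c*size^2 (no intercept)
X_orig = np.column_stack([size, size**2])

_, aic_full = fit_aic(X_full, rate)
_, aic_orig = fit_aic(X_orig, rate)
# The model with the lower AIC is preferred.
```

In R, the same comparison is `AIC(lm(rate ~ size + I(size^2)))` versus `AIC(lm(rate ~ 0 + size + I(size^2)))`.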
What Tamino said. I think you are certainly justified in splitting the data based on site–in fact, I would think this is essential. Any changes beyond that have to have good justification. It sounds as if you made the changes before reanalyzing the data–and that is good. Congrats on your data sleuthing.
This shows the two separate distributions. Site Mc was at a significantly lower altitude (400 m) than Site Min (900 m). The significantly harsher climate at site Min makes it necessary to split into the two sites; the data should never have been put together in the first place, in my view. These two fits
Regarding the changes made: changes should never be made without the utmost justification. 3 points were added because they were not given to me (thought to be too small (less than 2 mm)). Changes were made to two measurements that were meant to be the long-axis diameter but were actually not made across the longest axis like the rest (i.e. where they measured in the first year was not the same as in the second year – apples to oranges). Luckily it was all done from photographs, so there’s a perfectly transparent workflow, and I have contacted the primary data collectors to inform them and make sure it was done correctly.
What do you suggest I do for error bars on the 2nd-order polynomial fit? Zero growth at zero size I will add in. Is testing with AIC what you suggest to determine significance?
[Response: It sounds like the data changes you made were justified.
AIC doesn’t test for statistical significance of a model, it compares the quality of different models. For significance, use whatever standard test statistic comes out of your computer program. This may be a “t”-statistic for the quadratic coefficient, or it may be an “F”-statistic for the model as a whole. If you use R, it will give you both.
R will also give you uncertainty levels (standard errors) for the coefficients of the model (as will most statistical software). I’m guessing that’s what you’re looking for, but it’s not entirely clear.]
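For concreteness, here is the algebra behind those standard errors and t-statistics spelled out in numpy on invented data (a sketch, not the thread’s actual dataset):

```python
import numpy as np

# Coefficient standard errors and t-statistics for a quadratic fit --
# the same quantities R's summary(lm(...)) prints. Data are invented.
rng = np.random.default_rng(1)
x = rng.uniform(2, 60, 30)
y = 0.02 * x - 0.0002 * x**2 + rng.normal(0, 0.05, x.size)

X = np.column_stack([np.ones_like(x), x, x**2])  # design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
n, k = X.shape
sigma2 = resid @ resid / (n - k)            # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)       # coefficient covariance matrix
se = np.sqrt(np.diag(cov))                  # standard errors
t_stats = beta / se                         # t-statistics (df = n - k)
```

In R, `summary(lm(y ~ x + I(x^2)))` reports the same standard errors and t values, plus the whole-model F-statistic.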
Is it biologically reasonable that the growth in area of the lichen is related to its current size as measured by area? I did the transformation of your latest data as I described earlier to create an “Area Growth Index”.
Area Growth Index = (Diameter+DiameterGrowth)^2 – Diameter^2
If your lichen are truly round, then using radius and pi would give results in square mm.
The index was plotted as a function of the diameter squared:
It’s a 2nd order polynomial through the origin. r^2 = 0.87
Further question: Are these lichen growing on rocky or gravelly surfaces with different aspects? A few years ago I incorporated slope and aspect with a model of incoming solar radiation to create an index of solar radiation striking various surfaces. The idea was to see if it could be used as an independent variable in a seed germination and early seedling survival model. That might be a useful variable to consider on harsh growing sites.
The issue with lichens such as these, or lichens in general, is that they tend not to be perfectly round. Rather, they tend to be more oval-shaped, though often they are a bit irregular. We measured long-axis diameters. Therefore the area growth index is probably not the best way (mathematically) to approximate these lichens.
The lichens are growing on boulders on moraine surfaces and are therefore very dependent on local environmental conditions with respect to growth. Incorporating the local site conditions would be pretty difficult considering we do not know much about their winter conditions etc…
Okay, it looks like your recognition that the data came from two very different areas went a long way toward clearing up some of the noise.
Thank you Tamino for your help with this.
I probably didn’t articulate myself the best in the statistical discussions. Nevertheless, I am using R, so the t-statistic and the F-statistic will be readily available to me. The issue I wasn’t sure about was that I thought the p-value presented in R when I run the polynomial fit would indicate statistical significance, but I am incorrect in this case. With respect to the SE, I just wasn’t sure if it was appropriate for polynomial fits as the measure to use for upper/lower bounds. Thank you for clearing this up.
Ahh yes, I forgot about this issue. Using this relationship we are estimating the age of lichens measured on surfaces. If I use the SE for the upper and lower bounds then growth is not constrained to be positive, so when I try to estimate age it allows negative growth rates, which is physically untenable. Perhaps Monte Carlo simulation is a better method of estimating the upper and lower bounds. I am trying to incorporate uncertainty in the ages of lichens dated with this method.
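One way to realize that Monte Carlo idea, sketched under my own assumptions (a through-origin quadratic on invented data): sample coefficient vectors from the fitted covariance, evaluate a growth curve for each draw, clip at zero, then take percentiles. The clipping step is what keeps the bounds physically sensible (no negative growth).

```python
import numpy as np

# Monte Carlo error bounds on a fitted growth curve, constrained to
# non-negative growth. The model and data here are illustrative only.
rng = np.random.default_rng(2)
x = rng.uniform(2, 60, 30)
y = 0.02 * x - 0.0002 * x**2 + rng.normal(0, 0.05, x.size)

X = np.column_stack([x, x**2])              # growth = b*size + c*size^2
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (len(x) - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)       # coefficient covariance

draws = rng.multivariate_normal(beta, cov, size=5000)  # plausible coefficients
grid = np.linspace(0.1, 60, 100)
curves = draws @ np.vstack([grid, grid**2])            # one curve per draw
curves = np.clip(curves, 0.0, None)                    # enforce non-negative growth
lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)    # 95% bounds on the curve
```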
Not as bad as last year, but bad enough:
Tamino, John N-G has some interesting graphs on his latest blog entry. You’ll like it, I’m sure!
It’s going to annoy the Wattsians to no end, though…
Hey, this is cool–and seems right up Tamino’s alley:
(Statistically linking SLR and specific (melting) ice masses.)
We are not sure if we are breaching any policy by posting this. If so, we apologize!
We were hoping that you and the commenters reading this blog have something to contribute to a new web page we have just launched: warmingcheck.org
The purpose behind WarmingCheck.org is to collect good arguments regarding global warming, and let the public be able to compare arguments against each other.
By using that approach we believe people will start to see the science behind each side, and base their opinion/belief on the science and not what the media tells them.
We don’t want to influence the public with our own arguments, since it can be regarded as influencing one view.
We therefore hope you would have the time to contribute an argument (or more) to one of the questions.
If you don’t have the time or energy for this, we thank you for reading this and hope you might be able to forward it to someone who you think might have something to contribute.
In science we trust,