A reader recently inquired about using the Theil-Sen slope to estimate trends in temperature data, rather than the more usual least-squares regression. The Theil-Sen estimator is a non-parametric (perhaps more properly, a “distribution-free”) method to estimate a slope which is robust, i.e., resistant to *outliers* (wildly deviant data values), which can wreak havoc with least-squares regression. It also doesn’t rely on the noise following the normal distribution; it’s a truly *distribution-free* method. Even when the data are normally distributed and outliers are absent, it’s still competitive with least-squares regression.
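To make that concrete, here is a minimal sketch (in Python; the function name is my own) of the estimator: the Theil-Sen slope is simply the median of the slopes of all pairs of data points.

```python
import numpy as np

def theil_sen_slope(x, y):
    """Theil-Sen estimate: the median slope over all pairs of data points."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x))
              for j in range(i + 1, len(x))
              if x[j] != x[i]]
    return np.median(slopes)

# A single gross outlier barely moves the estimate:
x = np.arange(20)
y = 2.0 * x + 1.0     # exact line with slope 2
y[10] = 500.0         # one wildly deviant value
print(theil_sen_slope(x, y))   # prints 2.0
```

For comparison, ordinary least squares on the same data is pulled far away from 2 by that single point.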

So — why not use Theil-Sen?

There is one catch. The noise in temperature data tends to exhibit *autocorrelation*, meaning that values nearby in time are correlated with each other. We know how to compensate for this with least-squares regression, but I don’t know how it affects the Theil-Sen estimator. Probably somebody does (I’d be surprised if nobody has studied this question) but I haven’t seen their work.

Even so, my intuition is that autocorrelation will have about the same effect on the Theil-Sen estimator as on the least-squares estimator, and could be compensated for in the same way. So, I decided to run some tests on artificial data in order to investigate the issue.

First, let’s show just how good the Theil-Sen estimator is when there’s no autocorrelation. I generated 500 artificial time series of random noise following the normal distribution (*without* autocorrelation), then estimated the slope and the uncertainty in the slope by both least-squares (LS) and Theil-Sen (TS). Of course the “real” slope is zero because there’s no signal, just noise. Here’s the result:

Both methods gave almost the same result in every case. The standard deviation of the LS estimates was 0.0034, that of the TS estimates was 0.0035. The average estimated standard error for LS was 0.0034, for TS was 0.0035. Clearly, both methods work excellently and give nearly identical results.
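A miniature version of that experiment can be sketched as follows (Python with scipy; the series length is my assumption since the post doesn’t state the one used, so the numbers won’t match exactly):

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(0)
n, trials = 120, 500      # series length is a guess; the post doesn't say
t = np.arange(n)

ls, ts = [], []
for _ in range(trials):
    y = rng.normal(size=n)             # pure white noise: true slope is zero
    ls.append(np.polyfit(t, y, 1)[0])  # least-squares slope
    ts.append(theilslopes(y, t)[0])    # Theil-Sen slope

# the two sets of estimates should have nearly the same spread
print(np.std(ls), np.std(ts))
```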

We can even look at the confidence limits for both methods. For TS, the confidence limits aren’t computed using the standard error, but using the distribution of the estimated slopes of each pair of data points. Here are the lower 95% confidence limits by both methods:

Here are the upper 95% confidence limits by both methods:

Again, both methods do an excellent job and they give nearly identical results. One would be hard pressed to give a compelling reason not to use Theil-Sen.
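For what it’s worth, scipy’s `theilslopes` returns exactly those pairwise-slope confidence limits; a small sketch with a made-up trend and noise level:

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(2)
t = np.arange(100)
y = 0.02 * t + rng.normal(size=100)   # true slope 0.02 (illustrative value)

# the confidence limits come from order statistics of the pairwise slopes,
# not from a standard error
slope, intercept, lo, hi = theilslopes(y, t, alpha=0.95)
print(slope, lo, hi)
```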

But what about autocorrelated noise? I created another 500 artificial time series following an ARMA(1,1) process, with its parameters set to mimic the noise in monthly global temperature data. Then I estimated the slope and its standard error by both methods, but did *not* compensate the LS estimate for autocorrelation. Here’s the result:

The standard deviation of the LS estimates was 0.0088, but the average standard error was only 0.0040. This is because the standard error estimates are wrong — they’re not compensated for autocorrelation.

The standard deviation of the TS estimates was 0.0088, but the average standard error was only 0.0041. Again, the standard error estimates are wrong because they’re not compensated for autocorrelation. The interesting point is that the autocorrelation seems to have had the same effect on the TS slopes that it had on the LS slopes.
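A sketch of that test (the ARMA(1,1) parameters below are illustrative stand-ins, not the values actually fitted to monthly temperature data):

```python
import numpy as np
from scipy.stats import theilslopes

def arma11(n, phi, theta, rng):
    """Simulate an ARMA(1,1) process; phi and theta here are illustrative."""
    e = rng.normal(size=n + 1)
    y = np.zeros(n)
    y[0] = e[1] + theta * e[0]
    for i in range(1, n):
        y[i] = phi * y[i - 1] + e[i + 1] + theta * e[i]
    return y

rng = np.random.default_rng(1)
n, trials = 240, 500
t = np.arange(n)
S_tt = np.sum((t - t.mean()) ** 2)

ls, ts, naive_se = [], [], []
for _ in range(trials):
    y = arma11(n, 0.85, -0.4, rng)
    b, a = np.polyfit(t, y, 1)
    resid = y - (a + b * t)
    naive_se.append(np.sqrt(np.sum(resid ** 2) / (n - 2) / S_tt))
    ls.append(b)
    ts.append(theilslopes(y, t)[0])

# the true scatter of the slopes is far larger than the naive standard
# error admits -- and it hits LS and TS alike
print(np.std(ls) / np.mean(naive_se), np.std(ts) / np.mean(naive_se))
```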

We can also look at the 95% lower and upper confidence limits, without any autocorrelation correction:

Again, the results are nearly identical. More to the point, *if we apply the same correction to the TS slope as to the LS slope, we’ll get correct results*. I conclude that there’s no good reason not to use TS for temperature data, but we must still correct for autocorrelation, and we do so in the same way as for LS.
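For the LS case, one common correction is the Quenouille-style AR(1) inflation of the standard error. Here’s a sketch (a simplified recipe, not necessarily the exact correction used here); the point above is that the same inflation factor could just as well be applied to the width of the TS confidence interval:

```python
import numpy as np

def slope_with_ar1_correction(t, y):
    """OLS slope with its standard error inflated by the AR(1)
    (Quenouille-style) factor nu = (1 + r1) / (1 - r1).
    A standard recipe, not necessarily the exact one used in the post."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    n = len(y)
    b, a = np.polyfit(t, y, 1)
    resid = y - (a + b * t)
    s2 = np.sum(resid ** 2) / (n - 2)
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    nu = (1 + r1) / (1 - r1)                        # variance inflation
    return b, se * np.sqrt(max(nu, 1.0))
```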

Just for fun, I applied both LS and TS to annual average global temperature data since 1975 from NASA GISS. I used annual averages because they show much less autocorrelation than monthly averages (much less, but still there!), and I did *not* apply an autocorrelation correction. Here’s the result:

The estimates, and the estimated uncertainties, are indistinguishable. I also computed, by both methods, the slope for data from various start years through 2012:

Again the results are pretty much indistinguishable. It should be noted that the error bars are too small, because there is still some residual autocorrelation which is not compensated — but with annual averages, the uncompensated estimates at least get us in the ballpark.

Computing the slope from 1975 to various end years also gives similar results:

Finally, I computed the slopes for all 15-year time spans from 1975 to the present:

Once again the two methods give the same answers.

All this reinforces the conclusion that there’s no reason not to use Theil-Sen for trend estimation of temperature data. There’s also no reason not to use least squares. Both methods require correction for autocorrelation, and both methods give reliable results.

Speaking of trends, another unfortunate data point with Australia’s summer being declared the hottest ever… in an El-Nino neutral year. Sigh. http://www.abc.net.au/news/2013-03-01/australia-experiences-hottest-summer-on-record/4547746

Small strategy nit: Pointing out an area with an abnormally hot temp leaves you wide open to someone else pointing out the other areas of the world which must necessarily be colder in order for that to be true. Over any arbitrarily short time period where AGW can be neglected, excessive heat in one area must be balanced by corresponding cooling somewhere (or somewheres) else…if we are to believe the 2nd law, that is.

I see this most every time a polar high blasts down the Midwest Plains and all the deniers point out the “unseasonable cold and associated snow in the south end of the system means global warming is over”…Weeelll, as that is happening on the plains and areas South, it typically gets unseasonably warm where I live. Complex but basic physics: The subtropical warm air masses, and lows :-( , get displaced up along the sides of those cold highs on the jet stream and get channeled right up to me here in Newfoundland. Temps go up, usually a lot of rain and fog too. Snow goes away.

Anyway, the better strategy is to stick with the world, IMO, noting regional effects like an excessively hot summer as _side_ effects of warming, not a main effect. Am sorry for all the floods and fires you’re having though.

Oh should have mentioned…it’s happening right now. A bit warmer here than north TX and much warmer than OK.

> “Pointing out an area with an abnormally hot [...]”

Hence the words “another unfortunate data point”.

Not trying to be too much of a troublemaker…the point is there physically MUST be a corresponding datapoint or points in the other direction over any short period of time. Essentially it’s weather not climate. It belongs in the now-we’re-seeing-more-extreme-weather-both-ways category, not the warming climate category to my mind.

I’ll put away my nit comb now.

John, the problem with your nit is that it is poorly worded, or you are over-generalising from your own experience. We know that 90% of the warming is going into the oceans – so it is possible in theory for land surface temperatures to set records everywhere, without a corresponding cooling, if the source of that heat is the warming ocean.

Australia’s hot summer is highly significant as the following article by two Australian climatologists explains.

http://theconversation.edu.au/hot-summer-yes-the-hottest-12505

They also note that “December 2012 was the hottest December on record for Southern Hemisphere land areas, and January 2013 was the hottest January. Australia was a large contributor to this, but so were southern South America and southern Africa.”

(Apologies for being off-topic)

MikeH — Thank you for the informative link.

It is not poorly worded to say that _over any arbitrarily short period of time_, warming can be neglected as it is occurring in a far distant decimal place. In such cases a particular hot place is weather which is balanced by weather elsewhere. It is not climate. Over a short period, heat is only shifted around. It’s the law! (2nd)

We all see this mistake when the deniers make the argument that when it snows somewhere southerly (in the NH) global warming is not occurring. It works both ways…That’s what makes it wrong.

That’s all.

Yes, I second DBB’s thanks.

El Niño year? BOM says neutral. http://www.bom.gov.au/climate/enso/

Conditions may have barely slipped into El Niño briefly. I think Australian temps have been influenced by specific nearby seat surface temperatures. I don’t know why these SSTs have been and are (see link) on the warm side.

> “El Niño year? BOM says neutral”

True – I had said that: “in an El-Nino neutral year.” Not really clear enough on my part – there would have been better ways to express it.

“Seat surface” temps? It’s the pants of all those AGW deniers catching fire.

Tamino, any comment on this post at The Blackboard.

Bit off topic (apologies) but it is related to trend estimation, if a bit over-excited.

http://rankexploits.com/musings/2013/skss-online-tool-to-disappear-global-warming/

[Response: Thanks to Lucia -- mistress of the idiot's approach to data analysis.]

Yeah, that is more or less what I thought. She did herself no favours.

You might consider a future post about the methodology she was “criticising”.

You want to know something funny?

The reason I included the option to change the autocorrelation analysis period was a post by Lucia a few years back in which she suggested that a period with few volcanoes would be a better basis. There is some substance to her claim; however, she would also need to establish that other factors that have changed – latitude and land/ocean sampling, the scale and variation of human influences, the mean climate – have not done more to influence the level of autocorrelation. There’s an interesting but minor paper in there.

Lucia makes fun of the trend calculator as ineffective advocacy. She fails to understand that it’s not advocacy, it’s just a tool – an enabling technology. Of course I suspect that is her real beef with it – by enabling people to test things for themselves it challenges her power as a priestess of contrarian statistics.

What’s the difference between Theil-Sen and the Mann-Kendall test with a Sen slope estimate? The MKS method is very popular in hydrology, but there has been a bit of a controversy in the field about how to apply it when there is autocorrelation present.

Some argue that correcting for autocorrelation artificially reduces the true slope and that therefore the correction should not be applied. I personally feel that this is a misunderstanding, where arguments about appropriate correction techniques are being interpreted as a rejection of the entire concept.

Yue et al 2002 (http://onlinelibrary.wiley.com/doi/10.1002/hyp.1095/full) is usually credited with starting this whole debate.

What package are you using to implement Theil-Sen? R “zyp”? mblm?

[Response: I wrote my own R script.]

For anyone interested there is an excellent quick intro to Theil-Sen regression in Sections 27.7 thru 27.10 of a book I recently bought, titled:

Understanding Statistics: Basic Theory and Practice by Grant Foster.

There looks to be enough there, John Garland, to cook up a version of it. I will give it a try in MATLAB when I have the opportunity.

OT

Tamino, would this make the trend value even higher than F & R or in line with your conclusions ?

http://www.sciencedaily.com/releases/2013/03/130301123048.htm


Interesting post, thanks. I wasn’t aware of Theil-Sen. Your points are well taken, and if anything it appears that LS still has a bit of an advantage at short series lengths, showing noticeably tighter CI’s.

I just spent some time trying to see if I could come up with a better estimator than LS under various levels of autocorrelation (i.e., AR(1) coeffs. from 0 to 1), mainly as a self-education exercise without having to wade through the lit. By “better” I mean one with a lower variance. I also looked at a few data diagnostics that might indicate under just what conditions LS gives a poor estimate under high autocorrelation, in the hope that it might be possible to correct LS-based estimates individually (rather than just correcting the estimate of the standard error using Lund or Quenouille or similar methods).

I did find a couple of estimators that were better, but only under high ac levels (~ AR(1) coeff > 0.8), which isn’t very helpful for most data situations. At lower AR(1) values, the LS estimator was always better. I also found a pair of diagnostics whose joint probability had about equal (but not better) probabilities of occurrence as the LS slope estimate, at lower AR(1) values (~ >= 0.4).

Conclusion: whoever (Gauss?) developed the LS estimator did a pretty darn good job for most applications.

[Response: Gauss was a pretty smart guy!

Also, LS is the maximum-likelihood estimator when the noise is Gaussian (which it usually is, at least to a good approximation). That makes LS a consistent and efficient estimator of the slope. For Gaussian white noise it's actually BLUE ("Best Linear Unbiased Estimator"). The fact is, LS is damn good, can't be beat with Gaussian white noise, and is hard to beat in most circumstances.

For strong autocorrelation the "best" method would be generalized least squares (GLS). But it's harder to implement, and in most cases I'd say hardly worth the trouble. Note that for the sample data in this post, the LS and TS results are almost indistinguishable.]

> Gauss was a pretty smart guy!

Yep. And not an economist ;-)

BTW great novel: Measuring the World

“not an economist”

Except when it came to “squares”, with which Gauss was most economical.

Must have been a shortage back in his day.

“Economist of Squares”

– by Horatio Algeranon

Gauss was not an economist

Though economic with “squares”

With which he was a minimalist

Though no one really cares

“Dr Sass […] maintained that in paradise, until the time of the fall, the whole world was flat, the back-curtain of the Lord, and that it was the devil who invented a third dimension. Thus are the words ‘straight’, ‘square’, and ‘flat’ the words of noblemen, but the apple was an orb, and the sin of our first parents, the attempt at getting around God. I myself much prefer the art of painting to sculpture”

― Karen Blixen, “Seven Gothic Tales”

BTW Gauss was also extremely economical in publishing his findings: his motto was *pauca sed matura*, few but ripe. Perhaps that’s why a full description of the method was first given in the literature by Adrien-Marie Legendre…

Slightly off topic, does anyone here have some comment about the work of Shaun Lovejoy of McGill?

Thank you Tamino. Nothing like a second opinion to make you look at your work more closely. I found an error (apologies) in my calc of the previous comment as a result of your reply. I now get LMS and TS giving very similar slopes, as do you. I had a suspicion autocorrelation would cause a problem. Thanks for an interesting article.

Is Theil-Sen implemented in a smoothing filter?

Presumably, it’s comparable to LOWESS, but better for smoothing data that contains impulse noise.
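I’m not aware of a standard Theil-Sen smoothing filter, but a moving-window version is easy to improvise; this sketch (the function and window scheme are my own invention, not an established method) fits a Theil-Sen line in a local window and evaluates it at each point:

```python
import numpy as np
from scipy.stats import theilslopes

def theil_sen_smooth(x, y, halfwidth):
    """Moving-window Theil-Sen smoother: at each point, fit a Theil-Sen
    line to the neighbours within `halfwidth` in x and evaluate it there.
    An improvised sketch, not an established filter."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    out = np.empty_like(y)
    for k in range(len(x)):
        w = np.abs(x - x[k]) <= halfwidth
        slope, intercept, _, _ = theilslopes(y[w], x[w])
        out[k] = intercept + slope * x[k]
    return out
```

Because each local fit is a median over pairwise slopes, an isolated spike shifts the smoothed curve far less than a LOWESS-style local fit would.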

Perhaps this is wrong, but if one knows something about the duration of the noise, it seems one could actually do a version of Theil-Sen with less than the full set of “all pairs of data points” and not really lose anything in the way of precision.

For example, if one knows that noise lasts only a few years, it seems that one could eliminate the pairs of points from the Theil-Sen analysis that are separated by less than 3 years (since these slopes would be dominated by the noise). Perhaps this might have the added advantage of reducing the impact of auto-correlation?
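That restricted-pairs idea is easy to write down; here’s a sketch (my improvisation of the suggestion above, with no claim about its actual statistical properties):

```python
import numpy as np

def theil_sen_min_sep(x, y, min_sep):
    """Theil-Sen slope using only pairs whose x-separation is at least
    `min_sep` (e.g. 3 years). Assumes x is sorted increasing.
    A sketch of the idea above, not a vetted method."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x))
              for j in range(i + 1, len(x))
              if x[j] - x[i] >= min_sep]
    return np.median(slopes)
```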

Also, for the case of “trend + random noise” when the number of points is pretty large, it seems that the slope between each pair of points might be “decomposed” into a part due to the trend and a part due to the noise, and that for the slopes taken collectively, the part due to the noise would vary randomly about the part due to the trend (meaning that the middle value would be close to the least-squares trend, i.e. the two methods would give essentially the same result).

Then again, perhaps that is all wrong (wouldn’t be the first time and probably won’t be the last)

There is an R package that uses this method. I’ve played around with it. Worth a look; the package is called “zyp”.