Not long ago Willis Eschenbach did a post at WUWT about USCRN, the U.S. Climate Reference Network. It’s the best collection of quality-controlled, properly sited, state-of-the-art instrumentation for weather data in the U.S. Its only drawback is that it hasn’t been in operation very long. If we want to form a nationwide temperature average (for the “lower 48” states), we can only do so using USCRN since 2005. With just a little more than 12 years’ data, that’s not nearly long enough to get a useful estimate of what the trend is.

Here’s that USCRN data, from January 2005 through September of 2017, and I’ve added (in red) a trend line estimated by linear regression:

The estimated trend is upward, but its uncertainty is large because the time covered is so short. I estimate the warming rate at 10 ± 14 °F/century (95% confidence limits). Because the uncertainty is so large, we say that the claim the trend is upward fails to reach “statistical significance.”

If this were the only data we had, we would know that the trend was highly uncertain; it could be as low as *cooling* at 4 °F/century, but it could be warming at a whopping 24 °F/century.
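For readers who want to reproduce this kind of calculation, here is a minimal sketch in Python. It uses synthetic monthly data with an assumed 10 °F/century trend and white noise (σ = 2 °F is an arbitrary choice), not the actual USCRN series; real monthly anomalies are autocorrelated, which widens the uncertainty beyond what this simple version shows.

```python
import numpy as np

def trend_with_ci(t, y):
    """OLS slope and its approximate 95% half-width (white-noise errors assumed)."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(t)
    tc = t - t.mean()
    sxx = (tc ** 2).sum()
    slope = (tc * (y - y.mean())).sum() / sxx
    resid = y - (y.mean() + slope * tc)      # centered-form fitted values
    s2 = (resid ** 2).sum() / (n - 2)        # residual variance
    return slope, 1.96 * np.sqrt(s2 / sxx)   # slope, ~95% half-width

rng = np.random.default_rng(0)
t12 = np.arange(12 * 12) / 12.0              # ~12 years of monthly time steps
t42 = np.arange(42 * 12) / 12.0              # ~42 years, as with NCDC since 1975
true = 0.10                                  # 10 °F/century expressed in °F/yr
s12, h12 = trend_with_ci(t12, true * t12 + rng.normal(0, 2.0, t12.size))
s42, h42 = trend_with_ci(t42, true * t42 + rng.normal(0, 2.0, t42.size))
print(h12, h42)  # the 12-year half-width is several times the 42-year one
```

Multiplying the per-year numbers by 100 gives °F/century; the point is simply that the 12-year half-width dwarfs the 42-year one, and correcting for autocorrelation widens both further.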

But we do have more data for the “lower 48” states of the U.S., from the National Climatic Data Center, all the way from 1895 to the present. Here it is:

I’ve added an estimated trend which is not a straight line, because the trend over this entire time span is not linear.

It is at least approximately linear since 1975, which enables us to estimate the recent trend rate. Here’s the data since 1975 with a trend line according to linear regression:

The estimated trend is upward, and this time it *is* statistically significant at 5.5 ± 2.2 °F/century. We can plot the estimated rates together with their uncertainties:

This makes it abundantly clear that ignoring the trend from NCDC data, mentioning only the USCRN data, helps our understanding *not at all*. It only serves one purpose: to muddy the waters.

The upshot is that, as good as the USCRN data are, they don’t cover a long enough time span to give us any useful information about the trend. Let’s face facts: “between -4 and +24 °F/century” tells us just about nothing we didn’t already know. If you were to tout the trend estimate from USCRN data alone, to draw any conclusion or even to *imply* any conclusion, you’d be wrong.

Yet Eschenbach’s post declares (in its title no less) “*no significant warming in the USA in 12 years*.” It’s a follow-up to an earlier post by Anthony Watts which also declared “no significant warming,” based at that time on less than 10 years’ data. Watts even declares that USCRN data shows “the pause.” Based on less than 10 years’ data. Not long after, he did essentially the same thing again.

If someone I knew nothing about posted that on his blog, I might regard it as “good-faith dissent.” Horribly misguided, amazingly ignorant, but I like to give the benefit of the doubt so I’d assume good faith. After all, most people aren’t statisticians, don’t know how to compute probable error ranges for trend estimates (let alone trend estimates themselves), and aren’t fully aware of just how completely, totally, utterly, astoundingly **meaningless** and **misleading** are conclusions based on trend estimates using such a way-too-short time span. You can’t blame people for simple ignorance, especially on scientific topics (like statistics) that take years to get good at.

But when it comes to Anthony Watts and Willis Eschenbach, the “just ignorance” defense doesn’t hold water. Watts has been blogging about climate change for over 10 years (longer than the USCRN data he used to declare the “pause”!). He has often discussed, and has hosted many posts about, temperature trends. Some of them have explored some of the subtleties involved.

When it comes to *knowing* that trends based on such short time spans are useless and misleading, Anthony Watts has been told. Many times. Many, many times. Many, many, many, …, many times. By me, by his critics, even by his own readers and supporters. **There is no excuse** for him not to know this. Willis Eschenbach too.

There’s a legal concept called “culpable negligence.” We’re all susceptible to accidents, you might even shoot and kill someone by accident. Ordinarily the law doesn’t punish people for accidents, and this is as it should be. But sometimes, the accident happens because someone was so negligent, so oblivious to what he *should have* known, that it goes beyond mere accident. It’s *culpable negligence*.

By ignoring what he has been told so often, what he has himself acknowledged, and *continuing* to both write and host posts exploiting readers’ ignorance by touting useless, meaningless “trend” estimates, he has stepped far, far beyond simple ignorance. As far as I can see, there are only two possibilities. 1) He is deliberately misleading his readers. 2) He is *culpably* and *willfully* ignorant.

I don’t blame people for ignorance. I do blame them for repeatedly ignoring the truth after it has been pointed out so many times.

My opinion: Anthony Watts has gone beyond “skepticism” to “denial.” That’s why I call him a climate denier. Eschenbach too; he actually uses the phrase, “*Many people, including scientists who should know better …*” Willis, *you* should know better.

This blog is made possible by readers like you; join others by donating at My Wee Dragon.

I agree. However, many of your points could also apply to Mr Watts’s readers, who should also know better but lap this up because it conforms to their wish that humans are having minimal, or no, effect on the environment which sustains us all. I seriously doubt that Watts or Eschenbach are convincing anyone of anything. The readers at that blog go there because they feel comfortable with others who refuse to accept reality.

Up to 17 years ago, that blog would have been a regular haunt of mine (had it existed), because I was also a fervent climate change denier. It was only when a friend convinced me to check the science that I realised how stupid I’d been. I’m not sure all deniers would be able to do that, and I think it will be hard to get them to do what I did (partly because they would rather believe that the pseudoscience peddled at blogs like Watts’s is valid).

The other way of doing this is calculating Bayesian posteriors on the slopes: use the posterior for the NCDC data as a prior for the USCRN estimate, and produce a posterior for it. And it’s possible to cheat, in the sense that if the NCDC data were used to produce a slope estimate with variance via maximum likelihood or something, you can take that as a prior for the USCRN run with a Bayesian calculation. The only thing is, if NCDC is used with MLE, I’d do something about inflating the resulting variance in forming a prior, since MLE tends to bias variance estimates low, as is well known in frequentist circles these days (“shrinkage estimators”).

The resulting counter to Watts & co. is that even if the NCDC isn’t as good as the USCRN, it’s not completely worthless. Pretending that one needs to start over from scratch is ignorant. There’s information there. Probably information derived from a problematic network, but it isn’t bupkis.

If we assume that the two data sets actually measure the same rate, they could be combined using their inverse variances as weights. You don’t even need to do the numbers to see that the USCRN data won’t add anything worth bothering with to the mean rate of the NCDC data. No need for any approach more complex.
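The inverse-variance suggestion above is easy to check numerically. A sketch using the rates quoted in the post (5.5 ± 2.2 and 10 ± 14 °F/century, 95% limits), and treating the two estimates as independent, which, as noted elsewhere in this thread, is itself an approximation:

```python
def combine(est1, half1, est2, half2):
    """Inverse-variance weighted mean of two independent estimates.

    est: central value; half: 95% half-width (converted internally to 1-sigma).
    """
    w1 = (1.96 / half1) ** 2          # precision of estimate 1
    w2 = (1.96 / half2) ** 2          # precision of estimate 2
    mean = (w1 * est1 + w2 * est2) / (w1 + w2)
    half = 1.96 / (w1 + w2) ** 0.5
    return mean, half

# Rates in °F/century from the post: NCDC 5.5 ± 2.2, USCRN 10 ± 14
mean, half = combine(5.5, 2.2, 10.0, 14.0)
print(mean, half)  # barely moves from 5.5 ± 2.2
```

The combined rate comes out near 5.6 ± 2.2, which bears out the point: the short USCRN record adds almost nothing to what the NCDC record already tells us.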

[Response: I regard the USCRN as a valuable resource, especially because it has such good quality control. Just FYI, during their period of overlap it indicates a slightly faster U.S. warming rate than the NCDC data. What I object to is implying ridiculous conclusions based on too little data, especially when one does, or should, know better.]

Tamino,

Yes, the USCRN trend may not yet be significant, but the divergence vs ClimDiv seems to be highly significant (autocorrelation ignored), based on the 12 complete years we have so far:

A diverging trend of 0.12 C/decade is quite a lot, and I suppose it wouldn’t go on for ever. Do we have pristine rural heat islands or urban cool islands in the USCRN era? Which phenomenon is at work here? We can at least not blame the Pairwise Homogenization Algorithm (PHA) for creating spurious warming; on the contrary.

It is frequently said that approximately 17 years are needed for Global temperature trends to rise significantly over the noise. Since the USA is smaller, I presume that a longer time is usually needed for the trend to rise above the noise.

If you look at the NCDC data, what is the typical average time needed for the trend to rise above the noise?

[Response: Interesting question; I’ll look into that.]

I have code which can easily be adapted to generate a power curve at various series lengths, but when I go to the climate link you list for the NCDC data I get a 404 Not Found error. Nor do I yet see the long-term series data on the more general site. Anyone know where it exists now? If it still does exist.

“It is frequently said that approximately 17 years are needed for Global temperature trends to rise significantly over the noise”.

No, only some people say that; many more people dispute that even 20 years is enough. 30 or 60 years would be better.

“Since the USA is smaller, I presume that a longer time is usually needed for the trend to rise above the noise.”

Smaller sample size but the same interest and time frame. Surely the trend should still be there, with a similar noise problem? Does the noise diminish with a larger sample or stay the same?

See Dikranmarsupial’s paper on the statistics linked in his comment below.

“This is why climatologists tend to use a long period of about 30 years to assess trends”

https://skepticalscience.com/statisticalsignificance.html

USCRN is quite useful as a check on the accuracy of the broader network (and the efficacy of homogenization on problematic weather station data), since it should be free of any potential biases. We published a paper in GRL looking at exactly this last year: http://onlinelibrary.wiley.com/doi/10.1002/2015GL067640/abstract

I would like to read the paper Zeke, but not for $38.

@Tom Passin,

Yeah, except that, in doing so, you are effectively assuming the two series to be independent, and they are not. In particular, if the USCRN had a certain negative correlation with the NCDC series, it could *reduce* the combined variance, in particular if the magnitude of the covariance was bigger than its own variance. This is purely hypothetical. I have no idea what its correlation with the NCDC is. It could, as you suggest, inflate it, too.

It’s a bit late on a Sunday evening to look it up, but a very recent excellent post at WUWT showed conclusively that the temperature record is indistinguishable from a random walk. And knowing how few weather stations there are in the world, and the fact that the average of two temperature readings per day is used, to zero decimal places(!), the global average temperature is absolutely useless, and completely unsuitable as a basis for political spending programs in the 100s of billions.

[Response: No, it’s not a random walk. No, the limited precision of individual measurements doesn’t invalidate the greater precision of the average. Those are old memes that should be dead; the refusal of deniers to admit it is another thing that makes them deniers. As for “political spending,” open your eyes to the Koch brothers.]

Categorizing a temperature series as a “random walk” is a deliberate ploy to mislead the public. It cannot be anything else. A step-change Kalman filter may model a series as a random walk, because it wants to make the *a priori* commitment to any structure as weak as possible. But after it’s calibrated, that is, after the coefficients at each step are found, or, in other words, the coefficients are calculated conditioned on the data, it is no longer a random walk, and calling it such is a misrepresentation (or a misunderstanding, per Heinlein’s Razor).

If it were a “random walk” there would be no trend. And there is a trend. And the bare-bones Kalman (or other: I haven’t looked at what WUWT are doing, and I won’t) model cannot provide interpretation. It’s free of semantics. That has to come from Physics.

There are many different flavors of random walk. If the WUWT crowd are referring to a pure Brownian-motion random walk, the ultimate excursion is unbounded. What that means is that over a long enough measured interval, the excursion from the mean can be just about any value. This is well known as the “gambler’s ruin” problem.

But much of real physics is bounded, and that’s why you find instead the Ornstein-Uhlenbeck random walk, which will always revert to the mean. Predictably, the ordinary WUWT fan would ignore this version and prefer the unbounded random walk to better match their preconceived notions.

They would also avoid understanding that much of the natural variations observed are not random or chaotic at all and that a variation such as ENSO is actually a bounded oscillation forced by the tidal signal, which obviously is bounded in gravitational strength.

The only aspect that is not bounded is the growth of CO2 in the atmosphere, which Watts and Willis are somehow driven to deny.

I am aware that there are various flavors, including Levy walks, for instance. I bet WUWT did not distinguish. The point isn’t that there are flavors, the point is they intend to mislead.
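The bounded/unbounded distinction drawn above can be made concrete with a short simulation. This is a sketch with arbitrary parameters, not a model of temperature: the same shocks drive a pure random walk and a mean-reverting (Ornstein-Uhlenbeck-style) process.

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_steps = 500, 2000
theta, sigma = 0.05, 1.0                  # mean-reversion rate, shock size (arbitrary)

shocks = rng.normal(0.0, sigma, size=(n_paths, n_steps))
rw = shocks.cumsum(axis=1)                # pure random walk: spread grows like sqrt(t)

ou = np.zeros((n_paths, n_steps))
for t in range(1, n_steps):
    # Ornstein-Uhlenbeck-style step: pull back toward the mean, then add the shock
    ou[:, t] = (1.0 - theta) * ou[:, t - 1] + shocks[:, t]

rw_spread = rw[:, -1].std()
ou_spread = ou[:, -1].std()
print(rw_spread, ou_spread)  # the walk wanders far; the OU process stays bounded
```

The OU spread saturates near sigma/sqrt(1 - (1 - theta)**2) (about 3.2 here), while the pure walk’s spread keeps growing without bound; only the mean-reverting version behaves like a physically bounded quantity.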

” a very recent excellent post at WUWT showed conclusively that the temperature record is indistinguishable from a random walk”

A random walk means that the temperature could climb without end, or drop without end, even below 0K. Does that sound realistic to you? Or does it sound like it contradicts the laws of physics?

It may be the case that, over a short period, we cannot mathematically distinguish between a random walk and the actual temperature. But if we know that the random walk is non-physical, it’s still safe to discard it as a hypothesis. Heck, it’s required.

The folks who claim temperature is a random walk are certainly capable of doing sophisticated calculations–but they seem utterly incapable of understanding the meaning of their calculations. It’s a perfect example of what I like to call, “stupidity sent to college”.

[Response: … As for “political spending,” open your eyes to the Koch brothers.] Open your eyes to the spending by the world’s governments. And there is a big difference between spending by a private organization and the spending of public money extorted from tax payers.

As for the Koch Brothers, we have yet to see a list of who gets their money and what for.

Read Jane Mayers’s “Dark Money”, which documents large expenditures over decades. You’re right, of course, that we haven’t seen a complete list; the Kochs have done everything in their power to obfuscate their spending, including changing the reporting rules via their political influence.

I’m rather bemused by the claim that public money is “extorted”. Speak for yourself, brother. I pay my taxes with full recognition of the benefits I derive from them, though, admittedly, that list of benefits seems to be shrinking daily under the gentle ministrations of a Koch-owned US federal government.

Thank you, Ben, for your absolutely ludicrous opinion. Say “Hi” to the lizard people for me.

Ben Palmer:

Good idea. Give us your best estimate of how much the world’s governments spend on climate science research. Be sure to cite your sources.

Maybe *you* haven’t, but *we* have. Robert Brulle published a peer-reviewed analysis of the influence of private money on public attitudes regarding AGW, “Institutionalizing delay: foundation funding and the creation of US climate change counter-movement organizations,” in 2014.

Of course, there’s no documentary evidence I’m aware of that the “Koch Club”, namely the individuals, families and corporations with the most to lose by decarbonization, have violated any *laws* against public deception. Why would they, when they’ve successfully shaped ‘free speech’ and privacy laws to suit themselves? Regardless, one assumes they employ professionals to ensure no such evidence is found.

OTOH, although they needn’t fear legal sanction, plutocrats who’ve invested in flooding the public sphere with political and scientific disinformation for profit are, unsurprisingly, wary of negative publicity. Recent favorable SCOTUS interpretations of ‘free speech’ and privacy laws have allowed sponsors of AGW-denial propaganda to keep still more of their activities out of the public record. Brulle reports, for example, that in 2008 the Koch Affiliated Foundations began making their ‘stink-tank’ donations through the Donors Trust/Donors Capital Fund, making it impossible to trace their recipients.

IOW, the details one suspects you’re asking for are almost certainly unavailable. Do you think absence of evidence is evidence of absence in this case? Should you assume the Koch Club abandoned its AGW-denial long game after 2010?

John Francis:

‘Conclusively’, huh? Your comment conclusively demonstrates that 1) you overestimate your own competence to evaluate climate science rigorously, and 2) you fail to distinguish genuine from fake expertise. That may indicate the Dunning-Kruger effect, though perhaps not conclusively.

John Francis mentions the hundreds of billions. Let’s assume for the sake of discussion that the number is close to reality. The recent events in Houston and Puerto Rico caused damage of around $190 billion. That’s only 2 events. The financial crisis of 2008, which had no other roots than greed and criminal behavior, cost around $15 trillion worldwide, net loss. I see a grotesque double standard on what is wise investment.

Climate revisionism is worse than ‘culpable negligence’.

“1) He is deliberately misleading his readers.”

Indeed, but worse: he is actively, consciously pushing a fossil fuel agenda (or rather: the Ideology of Plunder).

“2) He is culpably and willfully ignorant.” – he is in fact not ‘ignorant’ at all and herein lies the true crime.

Watts et al. was never ‘skepticism’. You, among so many, fell for that trap, and I can see you are still falling for it.

Call for prosecution.

[Response: No, I’m not interested in prosecuting anybody. Do you fight to punish, or do you fight to change things? I want to change their minds, not punish them for weaknesses we all possess.]

I often go back to an old post by Bob Grumbine:

http://moregrumbinescience.blogspot.ca/2009/01/results-on-deciding-trends.html

The length of time it takes to see a significant trend is a property of the *data*, not of the method.

Except that Grumbine’s rule for calculating the variance is bogus in the case of dependent data:

There are a number of ways of adjusting for serial correlation, and these need to be applied to the *standard error of the mean* as well as to variance estimates. In these days of cheap computational tools, and if I want to pretend to be a Frequentist, I’ll often use the Politis and Romano *stationary bootstrap*. But Ramsey & Schafer (2002, *The Statistical Sleuth*) give, under the assumption of an AR(1) model, the *standard error of the mean* of a series with serial correlation as (s/√n)·√((1 + r₁)/(1 − r₁)), where s is the sample standard deviation and r₁ is the lag-1 autocorrelation. Other formulas pop up under different assumptions.

“It is frequently said that approximately 17 years are needed for Global temperature trends to rise significantly over the noise”

Skeptics often cite Ben Santer’s paper on this. This is from the abstract.

“Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.”

You’ll also find in the conclusion of that paper:

“The clear message from our signal-to-noise analysis is that multi-decadal records are required for identifying human effects on tropospheric temperature. Minimal warming over a single decade does not disprove the existence of a slowly-evolving anthropogenic warming signal.”

http://onlinelibrary.wiley.com/doi/10.1029/2011JD016263/full

There’s more to the paper than the abstract.

At WUWT, under the title, “RSS Reaches Santer’s 17 Years,” the abstract sentence is quoted, but not the conclusions.

A link is provided there to the abstract of the paper, despite it being open access; the full version appears at Wiley by clicking on ‘continue reading.’ How many WATTians would have followed through?

WUWT isn’t interested in the science, just the sound bites.

Different data streams require different times to see real changes. The question is what the signal-to-noise ratio is. More noise requires a longer time to determine the signal. It does not take 17 years to see that CO2 is rising in the Keeling curve, because the noise is much smaller than the signal.

Since RSS measures tropospheric temperatures from satellites, not the surface temperatures that Santer analyzed, a different time period is required for the signal to rise over the noise. Since the RSS noise is so large, a longer time period is needed.

Smaller observed areas (USA) generally have more noise than larger ones (global). The signal is also different globally than in the USA alone.

As you said “WUWT isn’t interested in the science,”

I am sorry, Santer analyzes RSS data.

They argue that Santer does in fact set an absolute bar at 17 years, and that all models have been falsified. It’s in an e-book; unsullied by evil peer review.

“Our results show that temperature records of *at least* 17 years in length are required for identifying human effects on global-mean tropospheric temperature.” [emphasis mine]

The *“at least”* is important; it is the minimum at which you might start to expect to be able to identify human effects on GMSTs, not the point at which the models are falsified if you don’t see one.

Obviously, but not to ristvan.

While the Heartland Institute continues to seek “independent funding” for Watts sidekicks, WUWT’s sustained invocation of thermodynamics in denying the basic physics of radiative equilibrium has been worn thin by a decade of repetition, forcing Eschenbach to concede in his most recent post that there may be something to this Second Law business after all.

Sadly a lot of people use statistical hypothesis testing without understanding the framework within which it operates. The obvious question for Watts and Willis is “what is the statistical power of your test?”. If the statistical power is low, then it is unsurprising that there is a non-significant outcome. A lack of statistically significant warming does not imply statistical evidence for a lack of warming.

Unfortunately the statistical test is easy, but the estimation of statistical power much less so, which is why it is so rarely considered. This is most important if you want to argue in favour of the null hypothesis, so if you don’t understand statistical power, at least set up the null hypothesis as the hypothesis you are arguing against (in this case if you are arguing there has been a hiatus, then your null should be that there hasn’t, i.e. warming has continued at a constant rate).

Here is my attempt at explaining the problem:

https://skepticalscience.com/statisticalsignificance.html

Of course this has been explained to the “skeptics” repeatedly.

Another, very specific way of looking at power is to take the observed distribution and model the percentage of times a randomly generated distribution WHICH CONTAINS THE TREND will be identified through regression as actually exhibiting a statistically significant trend.

Using annualized 1975-2016 data from ClimDiv here

https://www.ncdc.noaa.gov/temp-and-precip/national-e re-index/time-series?datasets%5B%5D=climdiv&parameter=anom-tavg&time_scale=12mo&begyear=1975&endyear=2017&month=12

which I think is the same data Tamino used at the monthly level, one finds an annual trend of .053F with a residual error of .82 s.d.’s. That is, the residual error is over 15x the magnitude of the trend. (Annualized rather than monthly data is used to reduce autocorrelation effects.)

Drawing from random distributions with these parameters, one finds a significant trend about 50% of the time if there are 20/21 years of data. By 30 years the trend is seen as significant over 90% of the time. On the other end of the distribution, at 10 years the likelihood of seeing the trend as significant–which remember IS DEFINED INTO the data–is less than 15%.

I don’t know how to insert an image here, but a completed graph showing the percent of trends identified as significant in series of lengths from 5 years to 35 is available at http://www.nfgarland.ca/ncdc.pdf . The graph is based upon generating and running a regression on 10^5 distributions at each series length.
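JGNFLD’s Monte Carlo can be replicated in outline with a short sketch (2,000 replicates rather than 10^5, annual data, the quoted trend of .053F/yr and residual s.d. of .82; the critical values come from a standard t table, and the exact percentages will differ somewhat from his graph):

```python
import numpy as np

# Two-sided 5% critical t values from a standard table, keyed by residual df
T_CRIT = {8: 2.306, 18: 2.101, 28: 2.048}

def power_at_length(n_years, trend=0.053, resid_sd=0.82, reps=2000, seed=0):
    """Fraction of simulated series whose built-in trend tests significant (5%, two-sided)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_years, dtype=float)
    tc = t - t.mean()
    sxx = (tc ** 2).sum()
    # reps series, each with the trend DEFINED INTO the data plus white noise
    y = trend * t + rng.normal(0.0, resid_sd, size=(reps, n_years))
    slopes = (y - y.mean(axis=1, keepdims=True)) @ tc / sxx
    resid = y - (y.mean(axis=1, keepdims=True) + slopes[:, None] * tc)
    s2 = (resid ** 2).sum(axis=1) / (n_years - 2)
    tstat = slopes / np.sqrt(s2 / sxx)
    return float((np.abs(tstat) > T_CRIT[n_years - 2]).mean())

p10, p20, p30 = (power_at_length(n) for n in (10, 20, 30))
print(p10, p20, p30)  # power climbs steeply as the series lengthens
```

Even this rough version shows the qualitative result: with roughly a decade of data the test almost never detects the trend that is built into every series, while by three decades it detects it most of the time.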

@JGNFLD,

That’s nice work.

I think you can put a link to the figure just on a bare line and it knows what to do. I just tested that on my WordPress blog, and it works.

I carried out a Bayesian analysis (JAGS) of the NCDC data for 1975-2017 and then used the slope RV from this as the prior for the 2003-2017 new data to see how it would update the slope estimate.

The slope of the NCDC data had a mean of 5.54F with 95%CI (+/- 1.42F). It was normally distributed per the Shapiro–Wilk test.

Using this as the prior for the slope, it was then updated with the USCRN data. The slope of the temperature series was then a mean of 5.68F with a 95%CI (+/- 1.41F). It was also normally distributed per the Shapiro–Wilk test.

In conclusion, the iterative Bayesian analysis did increase the mean by about 2.5% but had very little impact on the variance. Overall, my conclusion would be that it doesn’t add a lot of information to what was known already.

Joe
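Joe’s JAGS numbers can be sanity-checked with the closed-form conjugate normal-normal update, treating each quoted 95% half-width as 1.96 standard deviations (a simplification that ignores the full regression structure):

```python
def normal_update(prior_mean, prior_half, data_mean, data_half):
    """Conjugate normal-normal posterior; the half arguments are 95% half-widths."""
    w_prior = (1.96 / prior_half) ** 2      # precision of the prior
    w_data = (1.96 / data_half) ** 2        # precision of the new estimate
    post_mean = (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
    post_half = 1.96 / (w_prior + w_data) ** 0.5
    return post_mean, post_half

# NCDC prior 5.54 ± 1.42; non-informative USCRN estimate 10.22 ± 8.57 (°F/century)
post_mean, post_half = normal_update(5.54, 1.42, 10.22, 8.57)
print(post_mean, post_half)  # close to the JAGS result of 5.68 ± 1.41
```

The closed form lands very near the MCMC answer, which supports the conclusion: the wide USCRN estimate barely shifts the mean and leaves the uncertainty essentially unchanged.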

Btw, I meant to mention that I did this analysis based on Hypergeometric’s suggestion above.

Terrific.

To set a little context for this Bayesian updating, I decided to redo the updated estimate with a fictional USCRN series 2005-2017. The fictional series had a slope of 15F/century and a much lower variance with a SD of 0.4F rather than the nearly 2F in the de-trended actual series.

The new updated estimate of the slope since 1975, using this fictional series of nearly 12 years duration, was a mean of 8.43F and a 95%CI of +/- 1.27F.

So, a 50% increase in the trend rate in the new series had a significant impact on the mean, but a reduction of the SD by 80% produced only a modest reduction in the slope-estimate variance.

By the way, the non-informative Bayesian slope estimate of the USCRN real data series (i.e. not conditioned by prior information from NCDC) was a mean of 10.22F and a 95%CI of +/-8.57F, which would be statistically significant. In fact, the probability of the slope being less than or equal to zero was estimated at 0.01. I ran it a few times and the values of the mean or SD didn’t vary by very much (the prob <=0 hardly at all). Interesting that the uncertainty in the Bayesian estimate is much lower at 8.57F compared to the estimate of 14F by frequentist methods. I’m not sure why this is.

[Response: Did you account for autocorrelation in your Bayesian estimate? It’s not necessarily easy.]

Tamino, no, the model shown there had no autocorrelation element in it. It was a straightforward model of the y = a + b*x + err type that was used in change-point analysis of the temperature record by Cahill et al. a few years back. I note that the autocorrelation in the USCRN data is relatively low at 0.236 at lag 1 (slightly higher at lags 2 & 3: 0.258, 0.317).

The only way I know to account for autocorrelation is through an AR model, so I redid it with a model of the form y(t) = phi*y(t-1) + alpha + beta*x(t) + err. The rest of the model was the same, with very uninformative priors, and the phi prior was N(0,1). The phi estimate was 0.21 with SD 0.08, which is close to the value in the data. The slope estimate reduced from 10.22F to 8.55F with 95%CI of +/-8.67, so not statistically significant taking this lag-1 autocorrelation into account. I redid it again with up to lag-3 autocorrelation and the slope reduced to 5.6F with 95%CI of +/-8.6F. The phi1–3 estimates were 0.12, 0.16 & 0.23 with SDs around 0.08. In your estimate did you only allow for lag-1 autocorrelation, or was it the more elaborate method you outlined here before which takes longer-lag correlations into account?
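To see in miniature why adding the lagged term shrinks the apparent slope, here is a sketch with synthetic data (a trend plus AR(1) noise with an assumed phi = 0.3; both models are fit by ordinary least squares rather than JAGS):

```python
import numpy as np

rng = np.random.default_rng(7)
n, phi, beta = 400, 0.3, 0.01
eps = rng.normal(0.0, 1.0, n)
noise = np.empty(n)
noise[0] = eps[0]
for t in range(1, n):
    noise[t] = phi * noise[t - 1] + eps[t]   # AR(1) noise
x = np.arange(n, dtype=float)
y = beta * x + noise                         # trend plus correlated noise

def ols(X, z):
    """Ordinary least-squares coefficients for design matrix X."""
    return np.linalg.lstsq(X, z, rcond=None)[0]

# Naive model: y = a + b*x
b_naive = ols(np.column_stack([np.ones(n), x]), y)[1]
# AR-augmented model: y(t) = phi*y(t-1) + a + b*x(t)
phi_hat, _, b_ar = ols(np.column_stack([y[:-1], np.ones(n - 1), x[1:]]), y[1:])
print(b_naive, b_ar, b_ar / (1.0 - phi_hat))  # raw AR slope is smaller
```

In the AR-augmented fit the coefficient on x is roughly (1 - phi) times the long-run trend, so the raw slope estimate drops, as in Joe’s runs; dividing by (1 - phi_hat) recovers a long-run rate comparable to the naive one.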

@Joe H,

I just happened across this discussion. Now that REML is common enough in R, per the *gls* package used there, there’s really little reason to use maximum likelihood blindly or to open oneself up to issues pertaining to correlation. Including an AR1 correlation structure, as Luis Apiolaza does there, is simple enough.

Isn’t it almost completely autocorrelated? Most of the temperature variability is in ENSO and other climate dipoles, which are essentially sinusoidal cycles (which are by definition autocorrelated). There really is no true randomness, apart from possible volcanic disturbances.

[Response: No, it’s not completely autocorrelated. Also, ENSO and other climate phenomena do not follow sinusoidal, or even nearly sinusoidal, cycles. That belief is probably an unfortunate consequence of the fact that the word “cycle” can be used in more than one way. The sense which applies to ENSO (and others) is not that they are truly periodic (or even nearly so). They don’t recur at regular intervals, but neither do they fail to recur.]

I show that ENSO is the result of a straightforward lunar tidal forcing at next month’s AGU meeting, making the behavior as deterministic and auto-correlated as ocean tides. And I will also demonstrate this via the stratospheric QBO using the same lunar tidal forcing, which is even more obvious:

https://agu.confex.com/agu/fm17/meetingapp.cgi/Paper/221914

This analysis is related to research on the *topological origin of equatorial waves* as described in this Science paper from last week:

http://science.sciencemag.org/content/358/6366/1075

In no way does this contradict the current climate models, it really just identifies a simplification of the geophysics and forcing that can be applied to the next generation of models.

[Response: ENSO is not “auto-correlated as ocean tides.” That’s just a fact. I’ve seen the data. If your theory requires that it is, I suggest you re-think your theory.]

I am not at the level of stating anything as fact. All I do is apply the known lunar forcing to the accepted models of ENSO dynamics (Laplace’s tidal equations, the Cane & Zebiak model, and the general delayed differential model) and find that a cross-validated match is possible. Like any orbital or tidal calculation, if any deviation is applied to the known lunar cycles, the quality of the fit rapidly deteriorates. These aren’t perfectly linear models, so the cyclic results are not perfectly sinusoidal and instead are filled with rich harmonics. That is particularly apparent with the QBO model, which shows almost squared-off cycles.

What has to happen for me to rethink the model is that someone else will need to invalidate the results by showing that the specific lunar forcing *does not* recreate the ENSO periodicity observed. That’s also the state of ocean tidal analysis: no one has been able to invalidate the basic model based on the known lunar and solar periods. There is no controlled experiment one can do, and so conventional forced tidal analysis is accepted because the basic model continues to match the empirically observed results.

Anthony has kept a close watch on his audience for years. A few are allowed to criticize, but never too many. If you get too close to their comfort zone you get banned.

This is clearly a strategy. Therefore he knows what he is doing. He’s a fraud.

I think it’s appropriate to emphasize, once again, Eli’s post about Tom’s trick and experimental design. This is arguably better than any Bayesian Black Magic I or anyone else can come up with. In fact, Tom’s trick would be awesome if the funding and will were there to collocate more USCRN and USHCN stations and do longitudinal repeated-measures comparisons. And, while there are Frequentist packages, there is also the Bayesian MCMCglmm package for R, which I use heavily, even (especially?) for time series. (And, as a learning vehicle, might I recommend Fitzmaurice, Laird, Ware, Applied Longitudinal Analysis, 2nd edition, Wiley, 2011.)

[Response: On a totally unrelated topic: My move toward Bayesianism has raised an issue: frequentist results are generally more familiar to others — lots of readers need to hear a “p-value” or they’re not sure what stuff means. Yes, I could give a Bayes factor and/or posterior probability, but I wonder whether that will be more confusing than enlightening to some, especially the ones I feel the greatest need to reach. I know, it’s not an issue statistically, but in terms of communication.

I also know it’s one of the things that makes this blog interesting — to a very limited audience, so I have the further quandary that what I’d rather talk about isn’t what I feel I need to talk about. My personal preference (pure mathematics) doesn’t match my social need (to persuade people to act)… but now I’m just letting off some steam.]

One definition of Bayesian statistics I saw somewhere online went along the lines of: “Bayesian statistics is everything you thought traditional frequentist statistics was.”

I’m not a strong believer in one versus the other, but I do like that Bayesian stats is much more information-rich, provides evidence for or against the alternate hypothesis, and seems a lot more intuitive. Null Hypothesis Significance Testing (NHST) has some weird a$$ negative logic to it, and probably one of the most common errors in stats is people complementing the NHST result to provide a ‘probability’ that the alternate hypothesis is true. P-values also have problems around stability: a simple presentation I have seen showed that, in repeated random sampling from the same population, the p-value varied all the way from 0.001 up to >0.8 across just 20 samples, which was a major eye-opener. I note that some journals are starting to go negative about p-values and insist they are not quoted, or only quoted in the context of effect size etc.
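That sampling-to-sampling instability is easy to reproduce for yourself. Here is a minimal sketch with made-up numbers (a true mean of 0.3 and a known-variance z-test; this is an illustration, not the presentation the commenter saw): twenty samples drawn from the very same population give wildly different p-values.

```python
import math
import random

def z_test_p(sample, sigma=1.0):
    """Two-sided p-value for H0: mean = 0, with known sigma (z-test)."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n) / sigma
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)
# Repeatedly sample from the SAME population (true mean 0.3, sd 1)
# and watch the p-value bounce around from sample to sample.
pvals = [z_test_p([random.gauss(0.3, 1.0) for _ in range(30)])
         for _ in range(20)]

print(f"min p = {min(pvals):.4f}, max p = {max(pvals):.4f}")
```

With a true effect this size, some samples land comfortably below 0.05 and others far above it, even though nothing about the population changed between draws.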

Kruschke, in his Doing Bayesian Data Analysis, has some good material on a practical Bayesian alternative to NHST. In the meantime, maybe a combination of the two is in order until people become more familiar and comfortable with Bayesian stats. Frequentist methods will always have a role, I believe, as major computer number-crunching is not always an option, particularly in classrooms.

@Joe H,

On Bayesian computation, there are tricks to improve speed. Kruschke talks about some. While it might be eschewed by Bayesian purists, the other way to improve speed of what normally would be long Markov Chain Monte Carlo (MCMC) runs is to solve the model using a frequentist approach, and then initialize the MCMC at the point solution found by the frequentist code.
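A minimal sketch of that warm-start trick, under toy assumptions (a Gaussian mean with a flat prior, and a hand-rolled Metropolis sampler rather than MCMCglmm): compute the frequentist point estimate first, then start the chain there, so it begins in the high-posterior region instead of spending long burn-in wandering from an arbitrary starting point.

```python
import math
import random

def log_post(mu, data, sigma=1.0):
    # Log-posterior with a flat prior: just the Gaussian log-likelihood.
    return -sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2)

def metropolis(data, start, steps=2000, scale=0.2):
    """Random-walk Metropolis sampler for the mean mu."""
    mu, chain = start, []
    lp = log_post(mu, data)
    for _ in range(steps):
        prop = mu + random.gauss(0, scale)
        lp_prop = log_post(prop, data)
        if math.log(random.random()) < lp_prop - lp:
            mu, lp = prop, lp_prop  # accept the proposal
        chain.append(mu)
    return chain

random.seed(0)
data = [random.gauss(2.0, 1.0) for _ in range(50)]
mle = sum(data) / len(data)          # frequentist point estimate
chain = metropolis(data, start=mle)  # start the chain at the MLE
post_mean = sum(chain) / len(chain)
print(f"MLE = {mle:.3f}, posterior mean = {post_mean:.3f}")
```

With a flat prior the posterior mean and the MLE should nearly coincide; the payoff of the warm start shows up in harder models, where burn-in from a bad starting point can dominate the run time.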

Not interested in getting into the empirical Bayesian versus pure Bayesian thing. I’m clearly of the former kind of thinking. The purer Bayesians would object because I’m then looking at or using the same data twice. I’ve never been quite sure what precisely looking at means in that context. That’s a Deborah Mayo/Judea Pearl kind of question, quite too deep for me.

Richard Lindzen, professor of Meteorology at MIT, calls himself a denier in this BBC interview.

Lindzen says he doesn’t like being called a “skeptic”: “to be skeptical assumes there is a strong presumptive case, but you have your doubts. I think we’re dealing with a situation [climate science] where there’s not a strong presumptive case”. When pressed by the BBC interviewer as to what he would like to be called, he said: “I actually like denier”.

I prefer “denier” as well as a convenient label to classify these clowns.

Wow! That may be the most honest thing Richard has said in decades!

Tangentially related: I think we are all aware that one of the ‘Administration tricks’ going on is to damage Federal institutional capability to monitor and protect the environment, either by appointing dysfunctional administrators or by denying funding. Or, indeed, both.

However, the current tax proposals before both House and Senate involve attacking higher education itself in multiple ways. The most mean-spirited (to me, at least) is the proposed reclassification of grad-assistant tuition waivers as taxable income, which would typically increase GAs’ tax bills by factors of 2-3x without any increase in real income. That would imperil the educational plans of many current GAs and mandate a drastic restructuring of research and instruction at many US institutions.

But it doesn’t stop there. Here’s a link to summaries of the situation:

http://www.nacubo.org/Initiatives/Tax_Reform.html

I personally think that it merits the title “war on college.” YMMV, of course.

I see that Willis Eschenbach has been busy posting on the planet Wattsupia that, contrary to what is claimed here, he isn’t actually a denier. Rather, he is a seeker after truth, and this OP here is “just scummy” because he is unable to reply here. So perhaps the argument he sets out can be presented here.

Willis says he wants to check the temperature trend 2005-2017 for the US. Luckily for Willis this is exactly the period of data provided by USCRN data. (Of course, being a half-wit, Willis doesn’t manage to explain what is special about 2005-17 except that 2005 is when the USCRN data begins.)

Still, that said, Willis sets out his argument against using data other than in the period 2005-17 saying:-

Of course, call me Mr Pedant, but in my understanding the year 2004 came directly before the year 2005, with the year 2003 before that again, and so on. And knowing that does tell us a great deal about the period 2005-17 and the existence of any change in trend at 2005. Plotting the NOAA trend 2005-16 (using calendar years) yields the exact same line as plotting it 1993-2016. And because there are more years, there is more data, and voilà, it passes Willis’s statistical-significance test.
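The point about span and significance is easy to illustrate numerically. A sketch with synthetic data (a hypothetical trend of 0.05 per year plus noise; not the actual NOAA series): the estimated slope’s standard error shrinks rapidly as the fitting window lengthens, which is why a longer window can reach significance where a 12-year one cannot.

```python
import math
import random

def ols_slope_se(y):
    """OLS slope and its standard error for y observed at times 0..n-1."""
    n = len(y)
    t = list(range(n))
    tbar, ybar = sum(t) / n, sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sxx
    intercept = ybar - slope * tbar
    resid = [yi - (intercept + slope * ti) for ti, yi in zip(t, y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope, se

random.seed(3)
true_rate = 0.05  # hypothetical warming per year
y = [true_rate * i + random.gauss(0, 0.8) for i in range(40)]

# Fit the most recent 12, 24, and all 40 "years" of the same series.
for span in (12, 24, 40):
    slope, se = ols_slope_se(y[-span:])
    print(f"{span} yr window: trend = {slope:+.3f} ± {2 * se:.3f} per yr")
```

Because the spread of the time points (sxx) grows roughly as the cube of the window length, the ±2-standard-error band collapses fast as years are added, exactly the effect of extending from 2005-16 back to 1993.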

So it is probably best that Willis keeps away from here as otherwise the world will see his grand theorising being demonstrated as being deluded nonsense rather too frequently. (Of course, being a denier, Willis simply wouldn’t notice.)

[Response: The relevant fact is that his own numbers (as well as mine) show that the trend you estimate from such a short time span as is covered by USCRN data is *so* uncertain, it adds nothing to our knowledge. I find it hard to believe that he doesn’t know this, which makes me wonder: why did he post about it? If his *point* was that a trend estimate from such a short time span adds nothing to our knowledge, why not title the post “A Trend Estimate from Such a Short Time Span Adds Nothing to our Knowledge”? Will Willis Eschenbach answer that question? Perhaps someone should ask.]

Fortunately we can use the validated stations from the Surface Stations project run by WUWT to find a “better” result. That has been done, and the result is the same as the regular NOAA trend.

Steady, now!!

Was Watts et al (2015) ever published? I do recall vacuous blog-mom Judy Curry describing it as “robust”, but she may have been wrong (again); it may have been a relative term, relative perhaps to Watts (2009), which for all its nonsensical blather at least managed to get published by the Heartland Institute. So whither Watts et al (2015)?

Michael is probably thinking of this 2011 paper:

Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends
http://onlinelibrary.wiley.com/doi/10.1029/2010JD015146/full

From the abstract:

“The opposite-signed differences of maximum and minimum temperature trends are similar in magnitude, so that the overall mean temperature trends are nearly identical across site classifications.”

Al Rodger,

I have a strong recollection that when Watts posted a list of about 90% of their “validated” sites, NOAA used them to make a comparison to the existing (all sites) data. The correlation between the two graphs was unbelievable. Watts then said they needed to wait until all the sites were evaluated. I don’t know if the comparison was ever published.

With a quick Google I could not find the graph.

Skeptical Science has an article on the comparison of all NOAA data to just using Watts selected sites at https://www.skepticalscience.com/On-the-reliability-of-the-US-Surface-Temperature-Record.html which shows a comparison graph. The graphs are virtually identical.

The paper is Menne 2010 https://www1.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/menne-etal2010.pdf

I noticed that this paper was rarely cited. Apparently scientists were not worried about the surface-stations project. I think the paper Bob Loblaw linked is a response to Menne 2010.

In the context of Menne et al (2010), the 2011 paper linked & cited up-thread by Bob Loblaw (Fall et al 2011, with Willard as a co-author) is a curious piece of writing.

Menne et al conclude:

“In summary, we find no evidence that the CONUS average temperature trends are inflated due to poor station siting”

which sounds ‘conclusive’ enough. Fall et al, while setting out nuanced objections to the findings of Menne et al, then somehow manage to come up with the following conclusions:-

This reads more like a conjuring trick than anything else.

As for the relevance of either paper to research, both do feature in the references of other works, though not for their primary findings but rather in discussions of the existence of urban heat islands, the availability of metadata (or lack of it), etc. The primary finding of Menne et al and the conjuring trick of Fall et al are no longer a consideration.

Well, it’s not my paper, but I know what you mean. The thing that struck me about the Fall et al paper is that Watts’s name is on it (along with several more of “the usual suspects”).

Tangentially related: over on RC, this Nature paper on model/observation intercomparisons was linked. Sez there the best-performing models also yield the ‘warmest’ projections to end of century.

https://tinyurl.com/Brown-Caldeira17ModObsStudy

Sadly, one implication is that the warming associated with RCP 4.5 could actually be more similar to what we’ve been expecting from RCP 6.0.