Monthly Archives: April 2012

Let’s do the math!

Roger Pielke’s post, which we criticized, now has seven updates. Seven! He has protested, I would even say whined, that my readers and I have treated him unfairly. He has accused me of a lack of “professional courtesy” for such horrible deeds as blogging under a pseudonym. He even went to the trouble to dig up my real name and post my hometown location on his blog. How professionally courteous of you, Roger. That certainly advances our understanding of sea ice trends.

What he still hasn’t done is: the math.


Do the Math

Roger Pielke Sr. claims that northern hemisphere sea ice has not declined as fast as computer models predicted. Yet we’ve often heard the opposite, that northern hemisphere sea ice is declining faster than predicted by computer models. How does Pielke arrive at the opposite conclusion?


Defense Against the Dark Arts

Last summer we examined some first-class misinformation about global warming from the “Heartland Institute,” namely their NIPCC report. We showed that their fake take on Arctic sea ice revealed that they’re not real skeptics; they’re fake skeptics.


More Methane

It wasn’t that long ago that The Independent reported the detection of unprecedented methane (CH4) emissions from the sea bed of the East Siberian Arctic Shelf by Russian scientists. This has spawned considerable speculation, and concern, about a sudden increase in methane concentration in the Arctic due to extreme Arctic warming, which could potentially cause a nasty global-warming feedback, since methane is a potent greenhouse gas. That some are very concerned is no surprise.


Nothin’ but Noise

Pat Michaels claims (also here) that the journal Nature has lost its credibility. That’s an extraordinary claim, considering that Nature is one of the most prestigious peer-reviewed science journals in the world. There are those who believe Pat Michaels is the one lacking any credibility.


Roy Spencer, man of mystery

Apparently, Roy Spencer believes that the warming indicated by data from the USHCN (U.S. Historical Climate Network) is almost entirely false. He likewise distrusts the U.S. trend estimated from the CRUTem3 data. More to the point, he seems to think that warming over the U.S. for the last several decades has been negligible. All but a pittance (a mere 0.013 deg.C/decade), he says, isn’t real; it’s just due to “adjustments.”


Jane

For decades, one of my most revered heroes (heroines if you prefer) has been Jane Goodall.


March Madness

The National Climatic Data Center has updated their temperature data for individual states and for USA48 (a.k.a. the conterminous United States, a.k.a. the “lower 48”). The headlines are that this March was the hottest March on record nationally, and this year’s 1st quarter (January through March) was likewise the hottest on record. Much of the central U.S. was as much as 15 deg.F hotter than average this March:


L is for “linear”

A previous post addressed some issues with linear regression, “linear” meaning we’re fitting a straight line to some data. Let’s devote another post to scrutinizing the issue. This post is all about the math; readers who aren’t that interested can rest assured we’ll get back to climate science soon.

It was mentioned in a comment that least-squares regression is BLUE. In this acronym, “B” is for “best,” meaning “minimum variance”: for practical purposes it means (among other things) that if a linear trend is present, we have a better chance of detecting it with fewer data points using least squares than with any other linear unbiased estimator. “U” is for “unbiased,” meaning that the line we expect to get is the true trend line. Both of these are highly desirable qualities.

Finally, “L” is for “linear,” which in this context has nothing to do with the fact that our model trend is a straight line. It means that the best-fit line we get is a linear function of the input data. Therefore if we’re fitting data x as a linear function of time t, and it happens that the data x are the sum of two other data sets a and b, then the best-fit line to x is the sum of the best-fit line to a and the best-fit line to b. In some (perhaps even many) contexts that is a remarkably useful property.
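The linearity property is easy to verify numerically. Here's a minimal sketch (using `numpy.polyfit` and made-up illustrative data, not any series from these posts) showing that the least-squares fit to a sum of two series equals the sum of the fits to each series:

```python
import numpy as np

# Illustrative data: two noisy series with known trends, and their sum.
rng = np.random.default_rng(42)
t = np.arange(20, dtype=float)            # "time"
a = 0.5 * t + rng.normal(0, 1, t.size)    # trend +0.5 per unit time
b = -0.2 * t + rng.normal(0, 1, t.size)   # trend -0.2 per unit time
x = a + b                                 # the sum of the two series

# np.polyfit(t, y, 1) returns the (slope, intercept) of the
# least-squares straight-line fit.
fit_a = np.polyfit(t, a, 1)
fit_b = np.polyfit(t, b, 1)
fit_x = np.polyfit(t, x, 1)

# "L" in BLUE: the fit is a linear function of the data, so the fit
# to x = a + b equals the sum of the fits (to machine precision).
print(np.allclose(fit_x, fit_a + fit_b))  # True
```

The equality holds exactly (up to floating point), not just approximately, because the least-squares coefficients are a fixed linear transformation of the data values.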


Gutenberg-Richter

In a comment on the last post, it was mentioned that the frequency of earthquakes of any given magnitude or greater is given by the Gutenberg-Richter law. It states that the expected number of earthquakes of a given magnitude or greater, in a given region over a given span of time, will be

N = 10^{a-bM},

where M is the quake magnitude, a and b are constants, and N is the expected number. For active regions, the constant b usually has a value near 1.
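As a quick numerical sketch (with an arbitrary illustrative value a = 4, and b = 1 as is typical for active regions), the law says each unit increase in magnitude cuts the expected count tenfold:

```python
def expected_quakes(M, a=4.0, b=1.0):
    """Gutenberg-Richter law: expected number of earthquakes of
    magnitude M or greater.  The constants a and b are region-specific;
    a = 4.0 is an arbitrary illustrative choice, and b = 1.0 is a
    typical value for active regions."""
    return 10 ** (a - b * M)

print(expected_quakes(5.0))                           # 0.1, i.e. 10^(4-5)
print(expected_quakes(4.0) / expected_quakes(5.0))    # 10.0
```

With b = 1, magnitude-5 quakes are expected ten times as often as magnitude-6 quakes, a hundred times as often as magnitude-7, and so on.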
