Northwest Heat Wave


When extreme heat gripped the Pacific Northwest recently, people noticed. They noticed in Seattle, Washington, where they set an all-time record high of 104°F on June 27th, only to break it the next day at 108°F. They noticed in Portland, Oregon, where they set the all-time record high of 112°F on June 27th, only to break it the next day at 116°F. They noticed in Lytton, Canada, where they set the all-time record high for all of Canada at 121°F, only to burn to the ground the following day.


Naturally this has led to speculation about the relationship of this particular heat wave to man-made climate change (global warming). One of the reasons we expect global warming to increase extreme heat is illustrated in this graph:

It compares two probability distributions, one (in yellow) with a lower average, another (in red) with a higher average. As the average (i.e. the mean) temperature increases, the amount of extreme heat increases dramatically — provided, of course, that the shape of the distribution remains the same.
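
To put rough numbers on that picture, here is a toy calculation (the values are purely illustrative, not fitted to anything): hold the spread fixed, shift the mean up a few degrees, and the probability of topping a fixed extreme-heat threshold grows by roughly a factor of ten.

```python
# Toy example: same spread, mean shifted up a few degrees.
# All numbers (85, 89, sigma = 5, threshold = 100) are illustrative only.
from scipy.stats import norm

sigma = 5.0          # assumed spread of daily highs, in degrees F
threshold = 100.0    # an arbitrary "extreme heat" cutoff

p_cool = norm.sf(threshold, loc=85.0, scale=sigma)   # cooler-climate tail probability
p_warm = norm.sf(threshold, loc=89.0, scale=sigma)   # same spread, mean 4 degrees higher

print(p_cool, p_warm, p_warm / p_cool)   # with these numbers the ratio is about 10
```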



Does the shape of the distribution remain the same? Of course not — at least, not perfectly so. But it turns out that for summer in the Pacific Northwest, the change in shape is quite small, in fact so small it’s hard to confirm it exists at all, while change in the average sticks out like a sore thumb.


Let’s use data for daily high temperature averaged over the “Pacific Northwest” as defined by the study region of a recent attribution study connecting this event to climate change, shown by the box in this map:


It covers the region from latitude 45°N to 52°N, longitude 119°W to 123°W. The data are from the ERA5 re-analysis data set, which was also used in that attribution study. It’s provided in kelvins, but I converted that to degrees Fahrenheit for a more familiar temperature scale (at least, for my American readers). And here it is, daily temperature for the study region from 1950 through the end of June 2021:

Here’s data for just temperatures over 85°F, to show where the extreme heat has happened:


Note that before this year, the average temperature throughout the region never exceeded 95°F, but this year it easily broke 100°F.
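
For anyone who wants to build a similar regional series, here is a minimal sketch of the kind of preprocessing involved; the file name, variable name, and coordinate conventions are my assumptions, not the actual ERA5 download:

```python
# Minimal sketch: area-average ERA5 daily-max temperature over the study box
# and convert kelvins to degrees Fahrenheit. "era5_tmax_daily.nc" and "t2m"
# are hypothetical names; real ERA5 files may differ.
import numpy as np
import xarray as xr

ds = xr.open_dataset("era5_tmax_daily.nc")
box = ds["t2m"].sel(latitude=slice(52, 45),       # ERA5 latitudes usually run north to south
                    longitude=slice(-123, -119))  # assumes longitudes in -180..180

weights = np.cos(np.deg2rad(box.latitude))        # account for grid-cell area
tmax_k = box.weighted(weights).mean(dim=("latitude", "longitude"))

tmax_f = (tmax_k - 273.15) * 9.0 / 5.0 + 32.0     # kelvins to degrees Fahrenheit
```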

I’ll define summer as the months June, July, and August. Here’s the data for just the summer months:


It’s straightforward to average the temperature during each year, except we should leave out the year 2021 because it’s not over yet. Averages for each year (except 2021) look like this:


The red line is the linear trend (from linear regression), is strongly statistically significant (p-value < .0001), and indicates that the average summertime temperature increased 3.9°F from 1950 through 2020.


There’s no doubt that the average has changed over time — nobody in his right mind disputes that. But what about the distribution itself? Has its shape changed, and in particular, what has happened to the “extreme heat” end of the distribution?


Let’s look at the hottest temperature for each year, but just for fun let’s omit this year’s record-breaking value:


The trend line (in red) is estimated by linear regression, is strongly statistically significant (p-value 0.001), and suggests an increase in yearly maximum temperature of 4°F from 1950 through 2020. So, for summer at least, it’s not just the average that has gotten hotter; so too has the yearly maximum value.
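
A minimal sketch of both trend fits, continuing the preprocessing snippet above (so it assumes the tmax_f series defined there):

```python
from scipy.stats import linregress

s = tmax_f.to_series()                             # daily highs as a pandas Series
summer = s[s.index.month.isin([6, 7, 8])]          # June, July, August only

yearly_mean = summer.groupby(summer.index.year).mean().drop(2021, errors="ignore")
yearly_max = summer.groupby(summer.index.year).max().drop(2021, errors="ignore")

for label, series in [("summer mean", yearly_mean), ("summer max", yearly_max)]:
    fit = linregress(series.index, series.values)
    # total change over 1950-2020 implied by the fitted slope, and its p-value
    print(label, fit.slope * (2020 - 1950), fit.pvalue)
```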


Still, I’d like to get at the distribution itself for summertime. To that end, I’ll transform temperature into temperature anomaly by subtracting, for each value, the average for that time of year. Doing so, I remove the annual cycle from the data (which is still present because early and late summer are cooler than mid-summer). Here are the anomalies:


They aren’t too different from the temperature data itself.
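
Continuing the same toy snippets, the anomaly step can be as simple as a day-of-year climatology; the post's actual removal of the annual cycle may well be more refined, so treat this as a rough stand-in:

```python
# Anomaly = value minus the long-term mean for that calendar day
# (uses the "summer" series from the previous snippet).
clim = summer.groupby(summer.index.dayofyear).transform("mean")
anom = summer - clim
```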


Now let’s separate the anomaly data into two big intervals of time: “early” from 1950 to 1985, and “late” from 1985 to now. We can estimate the probability distribution for each by making a histogram, as well as a smooth estimate, and I’ll compare them with the “early” data in blue and the “late” data in red (solid lines are smoothed estimates):


It’s rather striking how almost all of the negative anomalies are less likely lately than earlier, and almost all of the positive anomalies are more likely now than before. It certainly seems plausible that the “late” distribution has the same shape as the “early” distribution, but shifted to the right, toward higher temperatures.
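
A bare-bones version of that comparison, continuing the snippets above (anom is the summer anomaly series; which side of the 1985 boundary gets the boundary year is my arbitrary choice):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

early = anom[anom.index.year <= 1985].values     # "early" period
late = anom[anom.index.year > 1985].values       # "late" period

grid = np.linspace(anom.min(), anom.max(), 200)
plt.hist(early, bins=40, density=True, alpha=0.3, color="blue", label="early")
plt.hist(late, bins=40, density=True, alpha=0.3, color="red", label="late")
plt.plot(grid, gaussian_kde(early)(grid), color="blue")   # smooth estimates
plt.plot(grid, gaussian_kde(late)(grid), color="red")
plt.xlabel("temperature anomaly (°F)")
plt.legend()
plt.show()
```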


Let’s focus on the extreme-heat end by looking at the survival function (which is 1 minus the cdf, or cumulative distribution function) for high temperature anomalies:


There’s no doubt that the Pacific Northwest is getting more extreme heat. But is that just because the distribution shifted to the right, or is shape change part of the story?
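
The empirical survival function can be read straight off the sorted data; a minimal version, continuing from the early and late samples above:

```python
import numpy as np
import matplotlib.pyplot as plt

def empirical_survival(values):
    """Sorted values and the fraction of the sample at or above each one."""
    xs = np.sort(values)
    return xs, 1.0 - np.arange(len(xs)) / len(xs)

for sample, color, label in [(early, "blue", "early"), (late, "red", "late")]:
    xs, frac = empirical_survival(sample)
    plt.semilogy(xs, frac, color=color, label=label)
plt.xlabel("temperature anomaly (°F)")
plt.ylabel("fraction of days at or above")
plt.legend()
plt.show()
```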


Let’s compute probability distributions, not for the anomalies as we just did, but for the anomalies offset by a constant to make their average values the same (in particular, zero). I’ll call these “De-Trended” anomalies (a crude term). Here are the histograms and smooth estimates of the probability density function:


There may be some shape change, but perhaps not — perhaps the differences are just random fluctuations within the uncertainty range. We’ll get a better idea of what’s happening on the high end with the survival function:


Again, it’s plausible that they are the same because they are almost entirely within each other’s uncertainty range. It is possible that the distribution has changed at the high end, with the appearance of de-trended anomalies not seen before, but the statistics don’t confirm that yet.


Perhaps the “go-to” statistical test for whether two distributions are the same is the Kolmogorov-Smirnov test. When I use it to compare the anomalies (not de-trended), it proves they are different (p-value 0.00000000000000022), which of course they are because they have a different mean value. But when I compare the de-trended anomalies, there is no significant evidence of a difference between the distributions. The p-value is 0.4974 — not even close to significant.
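
A Python near-equivalent of that test, continuing from the early and late samples above (the quoted p-values look like R output; this is simply the same idea with scipy):

```python
from scipy.stats import ks_2samp

# Raw anomalies: a difference is expected, because the means differ.
print(ks_2samp(early, late))

# Shift each period to zero mean ("de-trended"), then test shape alone.
print(ks_2samp(early - early.mean(), late - late.mean()))
```

Like any two-sample test, this treats the observations as independent, which serially correlated daily data only roughly satisfy.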


My bottom line: The evidence demonstrates that extreme heat has gotten more frequent and more severe, and we can expect it to continue, because the probability distribution has shifted to hotter values with no confirmable change in its shape. But it is possible (not yet confirmed) that the recent heat wave is so much hotter than we’ve seen before that there’s more going on — which means we can expect even worse.


Yet people continue to dispute even the simple idea, which is easily shown for the Pacific Northwest, that the increase in the average has brought with it an increase in extreme heat. One such is Judith Curry (from whom I got the first graph shown), who gives references to the peer-reviewed literature; I went and read them, only to be left wondering whether she read them herself. At least she delivers on the promise of her blog title: hot air.


This blog is made possible by readers like you; join others by donating at My Wee Dragon.

48 responses to “Northwest Heat Wave”

  1. Nice to read u again. Thx.

  2. Thanks, good as usual.
    Any chance you can give us skew & kurtosis of the 2 distributions? I’m curious how close these are to Gaussians and especially if right tail is a little heavy.

  3. Thank you Tamino!!!

  4. The other way a trend might be found is to estimate the Hurst exponent of the anomaly series, that is, the series with the annual cycle removed. In particular, it might be rhetorically more powerful to demonstrate trend in years leading up to the sh’bang year of 2021.

    Hurst exponents are typically pursued in hydrology and in finance. Here’s an informal introduction, and a crude rescaled-range sketch appears at the end of this comment. The best technique I know for getting at these is:

    Knight, Marina I., and Matthew A. Nunes. “Long memory estimation for complex-valued time series.” Statistics and Computing 29, no. 3 (2019): 517-536.

    and that’s implemented in the CliftLRD package of R. There is a Bayesian approach, per

    Makarava, N., and M. Holschneider. “Estimation of the Hurst exponent from noisy data: a Bayesian approach.” The European Physical Journal B 85, no. 8 (2012): 1-6.

    which is what I would prefer. But there’s no R implementation of it AFAIK. If I were still working and interested in geophysical series, I’d probably implement it. But I’m retired and doing quantitative bryology now.

    I also didn’t see a reference to how the annual cycle was removed, although I’m sure it’s fine. There are very general definitions for “annual cycles” available, though, in dynamic linear models work which, for instance, don’t require the cycle to have the same shape or phase each year of a series. Standard reference is Harvey (1990):

    Harvey, Andrew C. Forecasting, structural time series models and the Kalman filter, 1990.
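
    To fix ideas, here is a bare-bones rescaled-range (R/S) sketch of what estimating a Hurst exponent looks like; it is not the lifting-based estimator of Knight & Nunes, and the window sizes and names are my own choices:

    ```python
    import numpy as np

    def hurst_rs(series, min_window=16):
        """Crude rescaled-range (R/S) estimate of the Hurst exponent."""
        x = np.asarray(series, dtype=float)
        n = len(x)
        sizes, rescaled = [], []
        size = min_window
        while size <= n // 2:
            ratios = []
            for start in range(0, n - size + 1, size):
                chunk = x[start:start + size]
                dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
                spread = dev.max() - dev.min()          # range R
                scale = chunk.std()                     # standard deviation S
                if scale > 0:
                    ratios.append(spread / scale)
            sizes.append(size)
            rescaled.append(np.mean(ratios))
            size *= 2
        # H is the slope of log(R/S) against log(window size)
        slope, _ = np.polyfit(np.log(sizes), np.log(rescaled), 1)
        return slope
    ```

    A plain R/S estimate is biased on short, seasonal, serially dependent records, which is part of why the more careful estimators cited above exist; treat it as illustration only.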

    • Hurst exponents show the extent of departure of geophysical series from normal distributions. Dimitris Koutsoyiannis’ new book is informative.

      Click to access StochasticsOfExtremes1.pdf

      • In this interpretation of Hurst, is that index of departure only defined for members of the Exponential Family? Or is it more general than that?

      • Hurst’s statistical analysis of the long Nile River record revealed structure in the data – long term persistence and transitions – that was not independent or random Gaussian white noise. Dimitris Koutsoyiannis has dubbed it Hurst-Kolmogorov stochastic dynamics.

        e.g. https://www.tandfonline.com/doi/full/10.1080/02626667.2015.1125998

        ‘Climate dynamics is characterized by high complexity since it involves the spatio-temporal evolution of numerous geophysical variables (i.e., multivariate stochastic processes) interacting with each other in a nonlinear way, forming, among others, the hydrological cycle. Nevertheless, even if we could determine a set of physical laws that describe in full detail the complexity of climate dynamics, it would be impossible to combine the equations for the purpose of predictability due to the existence of chaos, i.e., a nonpredictive sensitivity to initial conditions.’ https://www.mdpi.com/2306-5338/8/2/59/htm

        The spatiotemporal evolution of Earth’s flow field has many characteristic nonlinear globally coupled oscillators seen as ocean and atmospheric indices. If the flow is perturbed these turbulent patterns – governed by the incalculable 3-D Navier-Stokes partial differential equations – may change a little or a lot.

      • I will refer to

        O’Connell, P. E., D. Koutsoyiannis, H. F. Lins, Y. Markonis, A. Montanari, and T. Cohn. “The scientific legacy of Harold Edwin Hurst (1880–1978).” Hydrological Sciences Journal 61, no. 9 (2016): 1571-1590.

        as “The Paper”.

        Other quotes from The Paper, lest the incorrect impression be given that its co-authors thought detection and attribution of climate disruption was not possible:

        Koutsoyiannis (2003) focused on the problem of modelling a highly variable climate with indistinguishable contributions from natural climatic variability and greenhouse gas emissions, and advocated a stochastic modelling approach that respects the Hurst Phenomenon. Noting that classical statistics are deficient when used to characterize a highly variable climate, he developed a method of jointly estimating the unknown variance and Hurst coefficient of a time series.

        In a further paper, Koutsoyiannis (2006a) argues that long-term trends in hydrological time series should not be regarded as deterministic components of a hydrological times series, and therefore indicating nonstationarity, unless there is a clear physical explanation. Otherwise, such trends are best regarded as parts of irregular long-term fluctuations that underlie the Hurst Phenomenon, and stationary stochastic models obeying Hurst’s Law offer the best means of quantifying the large hydrological uncertainty for water resources planning under a highly variable climate.

        The Paper’s authors are no doubt unhappy about the practice of climate science statistics, as will be seen below.

        I like the quote from Cohn and Lins (2005) who “… suggest ‘From a practical standpoint … it may be preferable to acknowledge that the concept of statistical significance is meaningless when discussing poorly understood systems.’” I daresay that, in my opinion, the concept of statistical significance is meaningless in any application.

        Alas,

        Tyralis, Hristos, and Demetris Koutsoyiannis. “A Bayesian statistical model for deriving the predictive distribution of hydroclimatic variables.” Climate dynamics 42, no. 11-12 (2014): 2867-2883.

        started out on a promising premise, trying to build what hopefully could have been a Bayesian hierarchical model involving hyperpriors on a number of parameters. But Tyralis and Koutsoyiannis chose the uninformative route, and plucked an example from Robert (2007) which they assumed meant it was appropriate for their analysis. I mean, they can pretend to be ignorant about the state of the world if they wish to play that game, but, then, they aren’t really doing climate science because they are not modeling what’s known. Contrarians on the efficacy of excess greenhouse gas emissions explaining significant warming (and I do not mean statistical significance when I say “significant”), however well mathematically endowed they might be, need to produce a mechanism in a model as to why increased concentration of CO2 and associated water vapor does not have an effect, because that effect is physically substantial. I have never seen any such explanations in these Gentlemen’s Club discussions. Anyway, as noted, Tyralis & Koutsoyiannis is a complete toy, and, after reading it, it should not be taken seriously in the least. Oh, and they report p-values. Some Bayesian analysis!

        The Paper gets some things wrong, too. For example, they state

        With such strong conviction for a static climate, it is not difficult to explain the dominance of the current hypothesis that climate change can only be driven by external, non-natural factors, namely by anthropogenic CO2 emissions. This is the basis for the modern prevailing theory of climate change.

        And despite The Paper being written in 2016, it declares

        Until recently, the climatology community and the IPCC refused to recognize the connection between HK behaviour and climate. Ironically, it was only after the so-called “pause”, i.e. the rather stable global temperature of the 21st century, that the IPCC in its Fifth Assessment Report acknowledged the importance of natural climatic variability and its statistical implications.

        That so-called “pause” or hiatus was a chimera as I have noted, although there was little confirmatory evidence at the time. Even if one buys the specious logic of hypothesis testing, some papers did it badly. The hiatus notion was blown away in the sequel.

        In the same paper, the comment

        However, the Summary for Policymakers (IPCC 2013) does not mention LTP, although it speaks about internal climate variability …

        is a bit misleading but might be written in ignorance of the political process whereby the Summary for Policymakers is derived from the corresponding scientific report.

      • ‘Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.’
        Uncertainty in weather and climate prediction, 2011, Julia Slingo and Tim Palmer

        The hiatus is most clearly seen as a phase space shift – to a negative Interdecadal Pacific Oscillation – in the eastern Pacific with low level cloud feedback.

        ‘This study examines changes in Earth’s energy budget during and after the global warming “pause” (or “hiatus”) using observations from the Clouds and the Earth’s Radiant Energy System. We find a marked 0.83 ± 0.41 Wm−2 reduction in global mean reflected shortwave (SW) top-of-atmosphere (TOA) flux during the three years following the hiatus that results in an increase in net energy into the climate system. A partial radiative perturbation analysis reveals that decreases in low cloud cover are the primary driver of the decrease in SW TOA flux. The regional distribution of the SW TOA flux changes associated with the decreases in low cloud cover closely matches that of sea-surface temperature warming, which shows a pattern typical of the positive phase of the Pacific Decadal Oscillation.’ Changes in Earth’s Energy Budget during and after the “Pause” in Global Warming: An Observational Perspective, 2018
        by Norman G. Loeb, Tyler J. Thorsen, Joel R. Norris, Hailan Wang and Wenying Su

        The fundamental mode of the Earth system is shifts in globally coupled patterns of ocean and atmospheric circulation driving changes in ice, atmosphere, hydrosphere, cloud and biology – Hurst-Kolmogorov dynamics, superimposed on which is greenhouse gas warming. Way back in 2002 the NAS said that this may trigger unwelcome surprises.

        Can we distinguish an intensification of the hydrological cycle against a backdrop of intense variability?

        e.g. https://hess.copernicus.org/articles/24/3899/2020/

        Does it matter more than as an intellectual exercise?

        https://watertechbyrie.com/2021/05/01/capability-browns-oblique-approach-to-climate-policy/

      • While I respect students of Hurst and H-K phenoms, I consider it fraudulent to the degree to which this is being used as an outright sophist smokescreen for outright climate denial. With your insistence it is legitimate on this narrow line, I suspect you are.

        You have not addressed my specific technical challenges but continue to pile on quotes from your religious saints. I at least read the papers and argued with them. I think you are opportunistically wasting all our time, certainly mine. I’m going back to my mosses.

      • The quotes are all from peer reviewed articles by highly regarded scientists. Seriously – leaders in the field. I’d suggest you don’t have a clue – obvious from the start – and are now playing silly political games.

      • Dr Ellison,

        Applied maths is a big field. We can’t all be experts on everything. The papers you pointed out were new to me, but Hurst things weren’t, as I used them quite profitably to model Internet traffic a few years ago. I may be wrong, but I think pushing it into climate geophysics is a stretch. I stand by my opinion of that paper by Tyralis and Koutsoyiannis: It is superficial, even if narrowly correct.

        As far as being “highly regarded” and “peer reviewed”, you’re a fan of Swanson and Tsonis (2009), too. Epicycles on epicycles.

        R. A. Fisher was “highly regarded”, and in

        Fisher, Ronald A. “On the mathematical foundations of theoretical statistics.” Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character 222, no. 594-604 (1922): 309-368.

        wrote the following:

        There would be no need to emphasise the baseless character of the assumptions made under the titles of inverse probability and BAYES’ Theorem in view of the decisive criticism to which they have been exposed at the hands of BOOLE, VENN, and CHRYSTAL, were it not for the fact that the older writers, such as LAPLACE and POISSON, who accepted these assumptions, also laid the foundations of the modern theory of statistics, and have introduced into their discussions of this subject ideas of a similar character.

      • Hurst analysed the long record of Nile River levels – Kolmogorov studied turbulence. The idea originated on the one hand in hydroclimatic series and on the other in the fractal dynamics of fluid flow.

        Big whorls have little whorls
        Which feed on their velocity,
        And little whorls have lesser whorls
        And so on to viscosity.

        Lewis Fry Richardson 1922

        The idea originated on the one hand in hydroclimatic series and on the other in the fractal dynamics of fluid flow.

        Sounds like something out of an oceanographic course I took once, “Descriptive and physical oceanography.” The canonical cases were systems of eddies breaking off of major currents like the AMOC. Closest I got to it was studying this:

        Shuckburgh, Emily, Helen Jones, John Marshall, and Chris Hill. “Understanding the regional variability of eddy diffusivity in the Pacific sector of the Southern Ocean.” Journal of Physical Oceanography 39, no. 9 (2009): 2011-2023.

      • @Robert I. Ellison,

        Okay. I’ve spent a chunk of today thinking about the Hurst-Kolmogorov hypothesis of climate disruptive events, like wildfires and flooding. Here’s my problem with it. I do not think that, empirically, it’s possible to create an experiment which falsifies it. Such an experiment would need to be constructed in the Neyman-Rubin causal framework and that cannot be done.

        Accordingly, if it — or for that matter the Swanson and Tsonis or teleconnections schools — cannot be falsified, they are, as Professor Woit has remarked with respect to String Theory, “Not even wrong.” They are not science.

        Give us a doable experiment that could falsify them, consistent with Neyman-Rubin, and then they can be taken seriously.

      • yup, what ecoquant said. Some things being spouted are not even wrong, they are not science, not scientific. Belief systems are powerful. I believe in the scientific method. Gonna take my chances with that one. I like data and experiments and analysis that are well-founded and subject to rigorous evaluation and replication.

        Cheers
        M

      • Certainly flooding – as such emerges from spatiotemporal chaotic patterns of ocean and atmosphere circulation. Empirical observation not good enough? But I do recall – somewhat tongue in cheek – an experiment reported by the American Institute of Physics.

        “Hints that the climate system could change abruptly came unexpectedly from fields far from traditional climatology. In the late 1950s, a group in Chicago carried out tabletop “dishpan” experiments using a rotating fluid to simulate the circulation of the atmosphere. They found that a circulation pattern could flip between distinct modes. If the actual atmospheric circulation did that, weather patterns in many regions would change almost instantly.” https://history.aip.org/climate/rapid.htm

        Climate is a fluid flow problem – governed by the nonlinear set of Navier-Stokes partial differential equations in 3 dimensions – in which shifts in ocean and atmosphere circulation drive changes in ice, cloud, biology… I am fairly certain that this is the dominant climate science paradigm.

      • Yes, Navier Stokes, but no one understands them in their bare form, so books like Kundu and Cohen (3rd ed., 2004, Fluid Mechanics), Pedlosky (2003, Waves in the Ocean and Atmosphere), and Pond and Pickard (2nd ed., 1983, Introductory Dynamical Oceanography) are filled with special cases for their own Reynolds Number ranges of interest and situations. Even K&C chop it up into sections like Irrotational Flow, Instability, Turbulence, Compressible Flow, and even devote a section to GFD with stuff like Ekman Transport. So if it’s all chaos and nothing can be known, how are all these individual specialties even possible? Are you saying according to Mandelbrot and Lorenz it’s all useless?

        Don’t think so. It’s predictable enough to build things like Flettner Rotors and Tesla Valves. And climate models with predictive skill.

      • It seems open minds have limits. Bye.

      • The trouble with having an open mind, of course, is that people will insist on coming along and trying to put things in it.

        Terry Pratchett

        Yes, and then you need to clean house.

      • The “leaders of the field” are the leaders of a cult and a mutual citation cartel. After getting a scientific job one of their PhD students told me that they were not allowed to talk to me and my fellow statistical climatologists.

        Very insightful was an EGU session where a real scientist asked a question to a PhD student and one of the leaders of the cult jumped up to answer it. Starting with the worst possible sentence in science: “First of all, you have to believe in Long Range Dependence.” No, buddy, you have to provide evidence and use reliable tools.

        Nature likely shows some LRD, but most of what these fools are analysing are measurement problems. That is what you get when people without a clue overestimate themselves so much that they think they do not need to collaborate with domain experts.

        Here is one example for temperature. Measurement problems are likely even worse for run-off, where people measure levels, but analyse run-off, while this relationship constantly changes.

        Rust, H.W., O. Mestre, and V.K.C. Venema. Less jumps, less memory: homogenized temperature records and long memory.
        JGR-Atmospheres, 113, D19110, doi: 10.1029/2008JD009919, 2008.

      • ‘A statistical explanation of the so-called “Hurst Phenomenon” did not emerge until 1968 when Mandelbrot and co-authors proposed fractional Gaussian noise based on the hypothesis of infinite memory. A vibrant hydrological literature ensued where alternative modelling representations were explored and debated, e.g. ARMA models, the Broken Line model, shifting mean models with no memory, FARIMA models, and Hurst-Kolmogorov dynamics, acknowledging a link with the work of Kolmogorov in 1940.’ https://www.tandfonline.com/doi/full/10.1080/02626667.2015.1125998

        ‘By ‘Noah Effect’ we designate the observation that extreme precipitation can be very extreme indeed, and by ‘Joseph Effect’ the finding that a long period of unusual (high or low) precipitation can be extremely long. Current models of statistical hydrology cannot account for either effect and must be superseded. As a replacement, ‘self-similar’ models appear very promising. They account particularly well for the remarkable empirical observations of Harold Edwin Hurst.’ Mandelbrot and Wallis, 1968, Noah, Joseph, and Operational Hydrology

        A cult of hydrodynamics? I should probably stop there. LRD – implying Infinite memory – even in 1968 was criticised for being inconsistent with Markovian models of physical processes. The Lorenzian model for abrupt climate shifts – AKA tipping points – fits better. At many scales.

    • This suggestion of mine does not, I believe, survive either introducing serial correlation or non-stationarity. So my second long comment is more helpful.

  5. While I’m glad to see you back, I would rather there stopped being so much climate change for you to write about.

  6. So glad to see you back, Tamino. I was beginning to worry about you.

    Loved the way you shut down Cliff Mass at RC, and now with this analysis here. Mass’s assertion that there has been no trend in temperature in the PNW was astoundingly ignorant for a professor of Atmospheric Sciences, as you just demonstrated above.

  7. I’m glad to see you write again.

  8. Welcome back. A joy to read your analysis. That was an enormous outlier heatwave. The uncertainty monster is not our friend.

    Let’s look at the hottest temperature for each year, but just for fun let’s omit this year’s record-breaking value

    This omitting of the current year is something you only did for the plot with the yearly maxima, right? Later figures again included 2021?

    Especially in the plot with the survival function of the detrended values, is the reason there is such a thick red tail, and that the tail extends 10°F further, the values from 2021? So basically just one data point.

  9. Nice to see your analysis and calculations again. That first plot with the two Gaussian-like PDFs makes it crystal clear that what used to be just a, say, 6 sigma event should be, after a shift to the right due to increased mean temperature, now a 4 or even just a 3 sigma event; the tail above the extremely hot temperatures just gets fatter, hence the probability of it occurring is now much higher.

    As such, I found this BBC article very strange:

    “Top climate scientists have admitted they failed to predict the intensity of the German floods and the North American heat dome.
    They’ve correctly warned over decades that a fast-warming climate would bring worse bursts of rain and more damaging heatwaves.
    But they say their computers are not powerful enough to accurately project the severity of those extremes.”
    Science failed to predict flood and heat intensity

  10. I can only echo the happiness of others to see another delicious skewering.

  11. Good to see you back, Tamino! A lot going on this year….

  12. John Garland

    3 things:

    1st: It is SO good to know you are OK enough to blog again after such a long “hiatus”.

    2nd: The most shocking thing to me in the Mass trend “analysis” was taking 70 data points, then aggregating them down into 7 points by decade and then saying “there’s no trend in the data, and I’ll show you if you don’t know how to test for a trend” which contains an implied “you ignorant fool”, of course. There is simply no statistical power to be had with a regression and 1 and 5 degrees of freedom: Even if there had been 30 new records in the 2000s as opposed to 17, and 40 more in the 2010s as opposed to 13, a 7-datapoint regression STILL does not quite show a significant trend (p < .052). There is just no power. (A small numerical illustration of the degrees-of-freedom point appears at the end of this comment.)

    Basically, by aggregating the way he did, he made it virtually certain that there'd be "no trend" unless the planet were actually on fire as opposed to "merely" heating up.

    3rd (and related to the above point): While I have no problem with physicists acting as physicists, over the decades I have found this is exactly the sort of statistical error that you can expect out of one when they wander–often more than a bit arrogantly as here–out of their area of expertise.

    xkcd, of course, noted this years ago very well: https://xkcd.com/793/.
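
    To put a number on the degrees-of-freedom part of that point (the t value below is arbitrary, chosen for illustration rather than taken from any data): the same t statistic that is comfortably significant with 68 residual degrees of freedom is not significant with 5.

    ```python
    # Two-sided p-values for the same t statistic at 5 vs 68 residual
    # degrees of freedom; the t value itself is arbitrary.
    from scipy.stats import t

    t_stat = 2.4
    print(2 * t.sf(t_stat, df=5))    # 7 decadal points: roughly 0.06
    print(2 * t.sf(t_stat, df=68))   # 70 yearly points: roughly 0.02
    ```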

  13. Did I hear the sound of numbers being crunched? I think I did! Thank you

  14. Andrew Brown

    Thank you Tamino and good to see a new post. Your posts are always interesting and educational. Donation made as well.

  15. In my opinion, what Sardeshmukh, Compo, and Penland demonstrate in J. Clim. 28(23), 2015 is (a) that they appear not to understand some of the basic assumptions underlying modeling data with distributions, especially the role of inter-sample dependence, and (b) that if an exotic distribution fits a set of data better than a competitor, that gives no confidence at all that the exotic distribution is any truer an explanation of the phenomenon producing the data. The “(b)” part is simply that they don’t seem to get the concept of overfitting and its dangers.

    For these reasons much modern data analysis eschews use of statistical distributions at all, except as convenient intellectual crutches, and relies upon techniques which are based upon the data set itself or sets of data sets. These techniques range from quantile regression as, for instance,

    McKinnon, Karen A., Andrew Rhines, Martin P. Tingley, and Peter Huybers. “The changing shape of Northern Hemisphere summer temperature distributions.” Journal of Geophysical Research: Atmospheres 121, no. 15 (2016): 8849-8868.

    employ, to bootstrapping (whether of dependent data or not), and cross-validation. Y’don’t necessarily need a distribution to assess tail risks. Many resampling methods can get that, and they are proper results as long as dependencies are captured (a bare-bones sketch of the idea appears at the end of this comment).

    Accordingly Curry’s claims about all this are mostly undone by the weakness of the papers she cites, although I agree Curry apparently did not understand McKinnon, et al or what quantile regression is about.

    A good introduction to some of these issues is offered in

    Gilleland, Eric, Richard W. Katz, and Philippe Naveau. “Quantifying the risk of extreme events under climate change.” Chance 30, no. 4 (2017): 30-36.

    I illustrated using resampling methods to estimate the chance of a stock closing at the same price it opened and reported it here.
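
    To make the resampling point concrete, here is a minimal sketch of the idea (my own toy code, not from any of the papers cited above); it deliberately ignores serial dependence, which a real temperature series would need handled, for example with a block bootstrap:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def exceedance(sample, threshold):
        """Fraction of the sample above the threshold."""
        return np.mean(np.asarray(sample) > threshold)

    def bootstrap_exceedance(sample, threshold, n_boot=5000):
        """Point estimate and 95% percentile-bootstrap interval for P(X > threshold)."""
        sample = np.asarray(sample)
        draws = [exceedance(rng.choice(sample, size=len(sample), replace=True), threshold)
                 for _ in range(n_boot)]
        return exceedance(sample, threshold), np.percentile(draws, [2.5, 97.5])
    ```

    Called with a sample of daily anomalies and a threshold of interest, it returns the point estimate and a 95% percentile interval.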

  16. Manuel Moe Garcia

    Good to see you back. Donated.

  17. Susan Anderson

    @John Garland. AFAIK and FWIW, Cliff Mass is not a physicist. OTOH, you are probably right (given the evidence, and this from me, a layperson) that he lacks statistical skill, given the way he is skewered by people who actually know the subject. His specialty is meteorology and he appears to have limited capability to take in information outside his bias.

  18. John Garland

    I had read his training and history as physics–it’s much like Lindzen’s–but be that as it may, I’ll extend the same observation to more applied areas and even to engineers. (And first year philosophy students!)

    • Susan Anderson

      Thanks for clarifying. I didn’t delve deep into his history to look at “training” but I did look at his degrees and job history. Being compared to Lindzen (who like many other deniers is also a cigarette smoker, irrelevant as that may be (obviously, I think it isn’t)) is not a compliment. My father (PWA) had a few choice things to say about Freeman Dyson on climate, while admiring his other work immensely!

  19. There’s the book, but Koutsoyiannis also has four papers that inform:

    Koutsoyiannis, Demetris. “The Hurst phenomenon and fractional Gaussian noise made easy.” Hydrological Sciences Journal 47, no. 4 (2002): 573-595.

    Koutsoyiannis, Demetris. “Hurst‐Kolmogorov Dynamics and Uncertainty 1.” JAWRA Journal of the American Water Resources Association 47, no. 3 (2011): 481-495.

    Koutsoyiannis, Demetris. “Climate change, the Hurst phenomenon, and hydrological statistics.” Hydrological Sciences Journal 48, no. 1 (2003): 3-24.

    Tyralis, Hristos, and Demetris Koutsoyiannis. “A Bayesian statistical model for deriving the predictive distribution of hydroclimatic variables.” Climate dynamics 42, no. 11-12 (2014*): 2867-2883.

    Koutsoyiannis (2011) squarely addresses the question of stationarity versus non-stationarity, and I consider his treatment definitive. However, Koutsoyiannis (2003, 2014) addresses the present discussion, which I urge you to study. His abstract reads in part:

    It is shown that hydrological statistics, the branch of hydrology that deals with uncertainty, in its current state is not consistent with the varying character of climate. Typical statistics used in hydrology such as means, variances, cross- and autocorrelations and Hurst coefficients, and the variability thereof, are revisited under the hypothesis of a varying climate following a simple scaling law, and new estimators are studied which, in many cases, differ dramatically from the classical ones. The new statistical framework is applied to real-world examples for typical tasks such as estimation and hypothesis testing where, again, the results depart significantly from those of the classical statistics.

    Thanks to @Robert I. Ellison for pointing Koutsoyiannis out! I look forward to a read.

    * Google Scholar cites this as 2014, but copies of the paper mark it as 2013.

  20. Welcome back! Your thorough analysis is always welcome and often referenced.

  21. BTW, the link I gave above for “Warming Slowdown? (Part 1)” was not the best. The link for it linked here is better. Also, see here for part 2 of 2.

    Three final comments about Hurst-Kolmogorov modeling:

    (1) While I’m sympathetic to the approach, it won’t do until it has a Bayesian basis. Such a basis needs to have a hierarchical structure where knowledge can be built into hyperpriors. The H-K mechanism ought to supply the likelihood density. That’s all not been done yet.

    (2) That, as people pushing it say, there might be LTP for H-K with respect to climate, and that it doesn’t decay after 60-some-odd points, does not mean there is not finally somewhere a sill where it drops out.

    (3) Related to “(2)”, the determination of where the sill is is a problem of statistical inference as well. Given the paucity of information rolled into these analyses, the uncertainty in that edge’s placement is probably pretty substantial. So, just because LTP might exist, until somebody can show that it’s hundreds of years long versus ten years, it’s not really a helpful observation.

    • My impatience with all this is in large measure due to the refusal to incorporate recent local evidence of extreme weather phenomena into their model, as inconvenient as it is, and to calculate their posterior distributions for parameters of interest. Indeed, proponents like Dr Ellison have no means of incorporating recent specific data into their models and updating them, nothing like a Bayesian updating scheme.

      Pursue your whims, the evidence is there. We need to evaluate the acceptability of the LTP idea with respect to recent data. “Illusion and fantasy” as Prof Noam Chomsky suggests.

  22. So very glad to see you back!

  23. WELCOME BACK, TAMINO!

  24. Tamino is back – awesome!

    Your analyses are welcome and always fascinating reading. I was frustrated with respect to the NW heat wave by Cliff Mass’s superficial and misleading analysis. Cliff would have got to the first time series plot in this post, waved his hand and said “No trend!” – no pesky calculations required. You can see Cliff’s true leanings in the comment threads. Crazy denialist posts – no problem, scientific criticisms of his work – either not posted or rebutted.

  25. I reside in southwestern B.C., and the heat wave/’dome’ left me feeling I’d never again complain about the weather being too cold. …

    Clearly there has been discouragingly insufficient political courage and will to properly act upon the cause-and-effect of manmade global warming and climate change. ‘Liberals’ and ‘conservatives’ are overly preoccupied with vociferously criticizing one another for their politics and beliefs thus diverting attention away from the planet’s greatest polluters, where it should and needs to be sharply focused. (Albeit, it seems to be the ‘conservatives’ who do not mind polluting the planet most liberally.)

    But there’s still some hope for spaceship Earth and therefore humankind due to environmentally conscious and active young people, especially those who are approaching/reaching voting age. In contrast, the dinosaur electorate who have been voting into high office consecutive mass-pollution promoting or complicit/complacent governments for decades are gradually dying off and making way for voters who fully support a healthy Earth and thus a healthy populace.

  26. If N-S is so chaotic, why can Neural Networks find reasonably short term solutions in GFDs and other applications? If N-S were substantially chaotic, they couldn’t.

    Another perspective: At the MIT Lorenz-Charney Symposium in 2018, at least two very prominent voices in the GFD community rose in answer to a question about applicability of AI and NNs to these problems, and these well known experts offered opinions that they would be of no use whatsoever. However …

    Qi, Di, and Andrew J. Majda. “Using machine learning to predict extreme events in complex systems.” Proceedings of the National Academy of Sciences 117, no. 1 (2020): 52-59.

    Rasp, Stephan, Michael S. Pritchard, and Pierre Gentine. “Deep learning to represent subgrid processes in climate models.” Proceedings of the National Academy of Sciences 115, no. 39 (2018): 9684-9689.

    Maulik, Romit, Bethany Lusch, and Prasanna Balaprakash. “Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders.” Physics of Fluids 33, no. 3 (2021): 037106.

    Balaji, V. “Climbing down Charney’s ladder: machine learning and the post-Dennard era of computational climate science.” Philosophical Transactions of the Royal Society A 379, no. 2194 (2021): 20200085.

    Kochkov, Dmitrii, Jamie A. Smith, Ayya Alieva, Qing Wang, Michael P. Brenner, and Stephan Hoyer. “Machine learning–accelerated computational fluid dynamics.” Proceedings of the National Academy of Sciences 118, no. 21 (2021).

    Fonda, Enrico, Ambrish Pandey, Jörg Schumacher, and Katepalli R. Sreenivasan. “Deep learning in turbulent convection networks.” Proceedings of the National Academy of Sciences 116, no. 18 (2019): 8667-8672.

    If nothing else, the successes in these papers suggest there’s a lot of structure that can be exploited in N-S.