The Value of Data

On a cold January morning in 1986, the space shuttle Challenger lifted off its launch pad at the Kennedy Space Center in Florida. Morale was high, especially as the Challenger flight was to inaugurate the teacher-in-space program with astronaut/high school teacher Christa McAuliffe in its crew. Alas, 73 seconds into the flight the shuttle disintegrated, destroying the spacecraft and killing all seven astronauts on board. The cause of the accident was a leak of hot gas from one of the solid rocket boosters. The leak occurred because of the failure of rubber “O-rings” which were supposed to seal the joints between rocket sections, and part of the reason they failed is that the temperature was so cold at the time of the launch — the O-ring material becomes stiffer at low temperature, so it’s less likely to make a proper seal.


The ambient temperature at launch was near 31F (about -1C), the coldest shuttle launch yet attempted by a wide margin (the next-coldest launch on record was at 53F, more than 20F warmer). The low temperature was of great concern to engineers from Morton Thiokol, the contractor who built and maintained the shuttle’s solid rocket boosters. At a teleconference the night before the launch with engineers and managers from Thiokol and from NASA’s Kennedy and Marshall Space Flight Centers, several of the engineers — most notably Thiokol’s Roger Boisjoly — expressed concern about the effect of the temperature on the resilience of the O-rings.

Thiokol engineers argued that if the O-rings were colder than 53F (12C) they didn’t have enough data to determine whether the joint would seal properly. The O-rings were a “Criticality 1” component, meaning that their failure would destroy the Orbiter and its crew. As the Report of the Presidential Commission on the Space Shuttle Challenger Accident (the Rogers Report) makes clear, this alone made the engineers’ concerns sufficient reason to cancel the launch.

In fact there was enough data available to discern the connection between temperature and O-ring damage, but when the data were examined, those involved made a classic but very natural mistake. They looked only at data for flights on which O-ring damage had been observed, ignoring all data from flights with no observed O-ring damage. Volume 1 of the Rogers Report states:


The record of the fateful series of NASA and Thiokol meetings, telephone conferences, notes, and facsimile transmissions on January 27th, the night before the launch of flight 51-L, shows that only limited consideration was given to the past history of O-ring damage in terms of temperature. The managers compared as a function of temperature the flights for which thermal distress of O-rings had been observed — not the frequency of occurrence based on all flights (Figure 6). In such a comparison, there is nothing irregular in the distribution of O-ring “distress” over the spectrum of joint temperatures at launch between 53 degrees Fahrenheit and 75 degrees Fahrenheit.

Their figure 6 shows the data they considered, and indeed there’s no obvious relationship between temperature and O-ring damage.

Unfortunately this graph omits all data for flights on which there was no O-ring damage. As the Rogers Report goes on to say:


When the entire history of flight experience is considered, including “normal” flights with no erosion or blow-by, the comparison is substantially different (Figure 7). This comparison of flight history indicates that only three incidents of O-ring thermal distress occurred out of twenty flights with O-ring temperatures at 66 degrees Fahrenheit or above, whereas, all four flights with O-ring temperatures at 63 degrees Fahrenheit or below experienced O-ring thermal distress.

Consideration of the entire launch temperature history indicates that the probability of O-ring distress is increased to almost a certainty if the temperature of the joint is less than 65.

Their figure 7 shows quite clearly that there is a relationship between temperature and O-ring damage on previous flights. The situation is well summarized in item 6 of the “conclusions” section of chapter 6 of the Rogers Report:


6. A careful analysis of the flight history of O-ring performance would have revealed the correlation of O-ring damage and low temperature. Neither NASA nor Thiokol carried out such an analysis; consequently, they were unprepared to properly evaluate the risks of launching the 51-L mission in conditions more extreme than they had encountered before.

Perhaps the saddest part of the story is that yes, the data did exist, and were on record and available for study, enough to demonstrate clearly the extreme danger of launching the shuttle at such a cold temperature. They were simply not analyzed properly.

This failure wasn’t about advanced math. It had nothing to do with a lack of sophisticated, complex mathematical techniques to squeeze the last drop of information from the available numbers. It was a simple mistake of leaving out relevant data, which happened to constitute most of the data, including the part that made the important insight so obvious that you really don’t need fancy math to get it.
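
To see how little machinery the insight requires, here is a minimal sketch in Python, using only the summary counts quoted from the Rogers Report above and assuming scipy is available. It is an illustration, not a reconstruction of anyone’s actual analysis: conditioning on the distress flights alone, as in their figure 6, throws away exactly the comparison that matters.

```python
# Summary counts from the Rogers Report passage quoted above:
#   flights at 66°F or above: 3 with O-ring thermal distress, 17 without
#   flights at 63°F or below: 4 with O-ring thermal distress, 0 without
from scipy.stats import fisher_exact

warm = [3, 17]   # [distress, no distress] at 66°F or above
cold = [4, 0]    # [distress, no distress] at 63°F or below

# The "figure 6" view keeps only the distress flights: 3 warm vs. 4 cold.
# That comparison looks unremarkable, which is exactly the trap.
print("distress flights only: 3 warm vs. 4 cold")

# The "figure 7" view keeps every flight, and the rates are nothing alike.
print("distress rate, warm flights: %.0f%%" % (100.0 * warm[0] / sum(warm)))
print("distress rate, cold flights: %.0f%%" % (100.0 * cold[0] / sum(cold)))

# Even the most basic exact test on the full 2x2 table flags the association.
odds_ratio, p_value = fisher_exact([warm, cold])
print("Fisher exact test p-value: %.4f" % p_value)
```

The particular test doesn’t matter; what matters is that the no-distress flights are in the table at all.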

The science that deals with the analysis, interpretation, and presentation of data is called statistics. Done well, it can reveal keen insights and guide us to beneficial choices. Done poorly, it can mislead us, contributing to unwise choices about everything from auto safety to the school budget to the best lineup for our baseball team to approving the launch of a space shuttle.

It’s all too easy to mislead people, including yourself (especially yourself!), with badly done statistics. It’s not so easy to do it right, so that the data can speak for themselves and make the important message plain. But it can be done, and in many cases it’s not too difficult either. It really is well within the ability of most people to apply statistics correctly, in order to make far better use of the data they have available. You may never reach the heights of statistical sophistication — few do — but you can get the basics right, and that’s most of the battle. In fact that’s almost all of it. Most truly valuable insights are not far beneath the surface; you don’t need a jackhammer or a surgeon’s scalpel to extract them, all you need is a plain old shovel and the knowledge of how to use it. And let’s face it: learning how to use a shovel is not that hard.

No amount of mathematical sophistication can protect you from overlooking something simple, making the kind of mistake that can lead to disaster. Fortunately, few such decisions are a matter of life and death.

71 responses to “The Value of Data”

  1. Very good!

    I’m reminded of a story (authentic or not, I don’t know) telling of a mathematician during World War II who was asked by the British Royal Air Force to analyze damaged bombers with an eye to maximizing the effectiveness of aircraft armor.

    They expected him to do a frequency analysis of visible damage. Instead, he looked for undamaged areas, reasoning that if any existed in the large sample at hand, it would most likely be because damage in those areas meant that the plane never made it back to England. . .

    You can use “selection effects” to your advantage, or you can be victimized by your failure to account for them.

  2. Apparently the anecdote is essentially accurate, but a bit garbled:

    http://www.jstor.org/pss/2288257

    H/t to Hank Roberts, for reminding us always to check. . .

  3. Bravo!!

  4. Few of those decisions may be a matter of life and death, but the ones that are matters of life and death too often seem to be under time stress. The meetings happened the night before the launch…. I like to think that analytical mistakes aren’t criminal, and admitting a mistake is laudable, but what about cases where a mistake has such grave consequences? All I can say is that in those cases you have to be better prepared. They should have been examining this possibility when they got the long range weather forecast….

    • Indeed. Having read both the Rogers Report and “The Challenger Launch Decision” by Diane Vaughan (quite a while ago, though) I’d suggest that the mistake was not having the discussion well before even the possibility of a long-range forecast.

      The Morton Thiokol engineers were well aware of the problem and pretty clear amongst themselves that low temperatures were a big contribution to the cause. They were working on a re-design of the rings but were hampered by bureaucratic and budget problems. Still, there was an implicit assumption that there wouldn’t be any more cold launches before the problem was fixed. That got swept under the carpet by the delays, I guess.

      The problem, as I understand it, was not so much in the analysis as in the presentation. The engineers at MTK understood the problem but by failing to include the no-problem warm launches in their graph they failed to convince their own and NASA’s management.

      http://www.asktog.com/books/challengerExerpt.html
      http://en.wikipedia.org/wiki/Space_Shuttle_Challenger_disaster#Use_as_case_study

      My own opinion is that the implicit assumption that there would be no more cold launches before the problem was fixed should have been made explicit in the form of a launch commit criterion regarding the o-ring temperatures. Perhaps the longer-term discussion leading up to the imposition of that LCC would have resulted in a better presentation than the one that was put together in a hurry the day before a launch.

      • While I agree with the need for better data presentation during those crucial 24 hours prior to launch, I’m convinced it wouldn’t have made a difference in the case of Challenger. The reasons are as follows:

        1) It was well known that the o-ring/joint design was flawed. The first indication of the flawed design dates back to 1977. A memo, written by Leon Ray and signed by John Miller, MSFC Solid Rocket Motor branch chief, highlighted the problems. The willingness of the verification process to overlook design flaws is indicative of the “go-fever” that had infected the program.

        2) The qualification temperature range of the motor was 40-90°F. The lack of mention of this qualification limit in the decision making process shows a critical lack of understanding of the system. If this had actually been enforced, Challenger would never have launched.

        3) The Rogers commission clearly shows managerial pressures to get launches off. At the time, the STS program was intending to increase launch rate up to 24 launches a year (we now know that was impossible, but then they were still selling that operational rate). Any delay in launch would have major ramifications on future launches and on the STS project as a whole.

        4) The STS 41-B flight exhibited significant erosion, and was brought before the Level 1 FRR panel (Mar 1984). The result of that was that this Category 1 issue was relegated to a “Programmatic Action Item”, and MSFC was to conduct a formal review of the SRB design. MSFC asked Thiokol for a formal review of the booster field joint and nozzle joint sealing procedures. Thiokol’s response, in May of that year, was merely a proposal of what to do to investigate the problem. The actual final response wasn’t delivered until Aug 1985. As a Category 1 problem, this should not have been acceptable. Instead, the FRR decided to just watch the issue and continue to let shuttles fly. Specifically, after the erosion was found on the 51-B flight, a launch constraint was put on the boosters and for each flight, the Shuttle Project Manager Mulloy waived it. If a temperature flight constraint had been put on 51-L, I fully expect it would also have been waived mainly due to launch rate pressures.

        5) One of the more interesting technical issues found by the Rogers Commission is that the actual inspection method used to check the seals was in fact a contributing cause of the hot gas blow by. From the Rogers Commission: “Moreover, the nozzle O-ring history of problems is similar. The nozzle joint leak check was changed from 50 psi to 100 psi before STS-9 launched in November 1983. After this change, the incidence of O-ring anomalies in the nozzle joint increased from 12 percent to 56 percent of all Shuttle flights. The nozzle pressure was increased to 200 psi for mission 51-D in April, 1985, and 51-G in June, 1985, and all subsequent missions. Following the implementation of the 200 psi check on the nozzle, 88 percent of all flights experienced erosion or blow-by.” This alone could have blunted the “temperature relationship” argument and would have given management enough technical cover to justify the launch, and given the launch rate pressures they were under, they’d have gladly accepted any excuse to have the launch.

        All of this information is available in the Rogers commission report.

        As I said at the top of this post, while having the plot showing the direct relationship between temperature and # of incidents would have been nice, I firmly believe that the launch pressures that management was under, and the faulty confidence of “it never has been a loss-of-crew problem before” would have led management to make the exact same decision they made.

      • I have to disagree with you, Ed. While everyone now knows the problems that at the time existed with the O-rings, I have no confidence that even showing such a plot would have made a difference in the decision to launch. For supporting evidence, here are some points made in the Rogers Commission:

        1) The problems with the booster joint design were well-known. First indication of technical issues with it are documented as far back as 1977. Thiokol disagreed with the technical deficiency of the joints, and did little to push a redesign effort. Only upon the increased blow-by during flights 41C & 51B did Thiokol institute an improvement plan. Unfortunately, this plan took 18 months or so to get started (Thiokol was tasked to develop the plan in April 1984 and the plan was delivered in Aug 1985).

        2) The acceptance testing procedures were known to cause blow-by of the putty used to help seal the joints. The original procedure called for a 50 psi pressure across the o-rings, but was deemed insufficient in finding leaks. The pressure was increased to 100 psia, and then 200 psia. According to the Rogers Commission, of those tested at 50 or 100 psi, only 1 exhibited blow-by in flight (9 flights). Once the pressure was increased to 200 psi, over half the missions (17 flights) experienced blow-by and/or erosion. And since none of those flights had resulted in “loss-of-crew”, the criticality of a blow-by event would be diminished. This trend, if presented to counter the temperature argument, would have been very effective in giving management the necessary cover to continue the launch.

        3) Management was under severe pressure to meet launch rates. The original Shuttle program promised over 80 flights a year. The approved plan promised 55 flights a year. By the time STS-51L came around, NASA was still claiming they could get 24 flights a year, even though they had never gotten more than 9 flights and even at that rate, the logistics chain was showing signs of breaking down. This schedule pressure would have pushed management to waive and/or dismiss any and all issues.

        4) Point 4 is also strengthened when you look at the history of the SRB joint failures, their supposed criticality and management’s response to them. As stated in point 1, the joint design was questioned as early as 1977. At program inception, the SRB joint was given a Criticality 1-R rating, meaning that any o-ring seal failure could result in the loss of crew. This drives redundancy into the system by supposedly ensuring any Crit 1R issue has some sort of redundancy designed into it to ensure that it doesn’t fail. That was eventually downgraded to Crit 1, meaning that some conditions exist that will result in a lack of redundancy and that such cases are not unexpected nor loss of crew conditions. The Rogers Commission testimony of those who signed off on this change shows that there was clear confusion as to the exact dangers of making this change. Needless to say, this change doesn’t require nearly as many signatures to get waived. And after flight 51B

        5) Maybe because of the downgrading to Crit 1, or maybe because of budget pressures, or maybe because of go-fever/schedule pressures, the necessary effort needed to correct the well-known problems with the SRB joint never materialized until after Challenger. After the blow-by incident on STS 41B, MSFC tasked Thiokol to study the issue (Apr 1984). Thiokol then proposed a plan to study the issue (May 1984) and finally presented the results (Aug 1985). Add to this Thiokol’s insistence that the joint design was sound, and clearly there was very little pressure to find a solution to this “supposed” problem. Then, due to the serious blow-by events on 51-B, a launch constraint was issued in July 1985. But this didn’t matter. Even after the launch constraint was imposed, Project manager Mulloy waived it for each flight.

        6) The design environment limits on the SRB were from 40°F to 90°F. That the booster was outside of that limit at time of flight should have resulted in an immediate scrub of the launch due to being outside of the qualification box. Instead, this limit was dutifully ignored by claiming that the temperature limit was a propellant core temperature limit and not a case limit. The design documents do not differentiate between the two measurement techniques and management should have erred on the side of caution and scrubbed the flight. They didn’t.

        In summary, NASA was operating in a mode of “Prove to me it WILL fail”, and since no loss-of-crew failure had yet occurred, there was no way for engineers at either MSFC or Thiokol to prove their case. NASA just wasn’t going to listen.

  5. Wartime & post-war Operational Research is packed full of anecdotes of how to use or how not to use data. One I recall occurred (allegedly) at a conference where some eager researchers presented a paper proving smoking caused cancer. A while later the conference heard from an old hand called Russ Ackoff who had quickly used the same data and method to ‘prove’ smoking prevented cholera. A very effective method of rebuttal.

  6. Gavin's Pussycat

    Kevin, that story is familiar but I seem to remember it attributed to Patrick Blackett.

    • Gavin's Pussycat

      this seems to confirm my hunch:

      British experimental physicist Patrick Blackett was another founder of operations research. During WWII, he served as Director of Operational Research under the Admiralty (naval command). One of his teams made recommendations on armor placement on RAF aircraft. The common wisdom was to place heavier armor on the parts of the plane where returning missions had been most shot up. The idea was to armor planes where they were most exposed to enemy artillery. Blackett’s team recognized that the data was statistically biased toward planes that survived. The planes that had not survived were the ones of interest. To correct for the bias, the team reasoned that if a part of plane could be shot without forcing the plane down, that part needed no additional armor. Instead, the places in common between returning planes that had not been shot were likely the places where non-returning planes HAD been shot, and therefore where additional armor was needed.
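
      The selection effect in that account is easy to demonstrate with a toy simulation. The sketch below is my own illustration (the section names, hit counts and loss probabilities are made up for the purpose); it spreads hits evenly over an aircraft but makes hits to one vital section far more likely to bring the plane down, so inspecting only the planes that return makes that section look like the safest spot on the airframe.

      ```python
      # Toy survivorship-bias simulation (illustrative numbers only).
      import random

      random.seed(42)
      sections = ["wings", "fuselage", "tail", "engines"]
      # Chance that a single hit to the section brings the plane down (assumed):
      lethal_prob = {"wings": 0.05, "fuselage": 0.05, "tail": 0.05, "engines": 0.6}

      returned_hits = {s: 0 for s in sections}
      planes_lost = 0

      for _ in range(10_000):                    # one sortie per loop
          hits = [random.choice(sections) for _ in range(random.randint(0, 6))]
          if any(random.random() < lethal_prob[s] for s in hits):
              planes_lost += 1                   # never inspected back home
          else:
              for s in hits:                     # only returning planes are inspected
                  returned_hits[s] += 1

      print("planes lost:", planes_lost)
      for s in sections:
          print(f"hits seen on returning planes, {s:8s}: {returned_hits[s]}")
      # Hits are spread evenly, yet the returning sample shows far fewer engine
      # hits -- which is precisely where the armor belongs.
      ```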

  7. Edward Tufte has a whole piece on the poor presentation of partial data regarding the Challenger launch. He also has a piece on why the presentation on (what turned out to be) the Columbia accident was perverted by the use of PowerPoint.

    I very strongly recommend both essays.

  8. I’m pretty sure this story features prominently in Tufte’s book as an example of how not to present data (The original graph is even worse than the Rogers report figure 6 given by Tamino).

    • Tom Fahsbender

      And Tufte’s adjusted graph (like the one above demonstrating the certainty of O-ring damage below a certain point) includes temperatures at the low end down to the 30-degree launch temp, showing the scary absence of data over a 30-degree span. If any of you aren’t familiar with his books, they’re well worth a look.

  9. Extreme pressure to proceed overrode nearly everyone’s gut feeling. Now gut feelings are not always right, but they very often are. Where there is a discrepancy we need to step back and look at why our gut might be wrong or right and why the statistics might be wrong or right.

    A short period of introspection, without pressure, can yield great insight. The decision making process for the Challenger launch pretty much ensured that introspection did not occur. Management wanted that launch.

    Management wants Business As Usual and has a gut feeling that global warming is not a problem.

  10. The attribution to Blackett is here, among other places:

    http://www.ehow.com/about_5340571_origin-operations-research.html

    I like it, because this is pretty much just the way I presented it.

    But of course, Wald could have done similar work in the US; it need not be exclusively one or the other. “When it’s time to railroad” and all that.

    • The UK version of bomber damage assessment is the one I heard, although Blackett was a boss, not a humble Operational Researcher. As the link says, it was one of his teams.
      Back then bosses did play their part. The human aspect of decision-making was usually bigger than the mathematical proofs.
      Perhaps the biggest wartime OR success story is mentioned in the link – the depth-charge decision. OR identified that 100 ft was too deep, not the best depth to sink crash-dived U-boats, but RAF Coastal Command said ‘rubbish’. The decision went to Winston Churchill, who allowed a one-week trial of the shallower depth. The OR guys said a week wouldn’t be statistically significant (sounds familiar) but that’s all they got. Luckily two subs were sunk that week (the average was one a week; the gain from the new depth setting was about 40%). The result of the trial’s success was a very significant increase in U-boat sinkings over the rest of the war.
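
      Incidentally, the OR guys’ caution about one week of data is easy to quantify. A quick sketch of my own, treating weekly sinkings as Poisson with the long-run rate given above (one per week) and assuming scipy is available:

      ```python
      # If U-boat sinkings average one per week, how surprising is a week with
      # two or more?  Model weekly sinkings as Poisson with mean 1.
      from scipy.stats import poisson

      p_two_or_more = 1 - poisson.cdf(1, mu=1)
      print(f"P(2+ sinkings in a week, at 1/week on average) = {p_two_or_more:.2f}")
      # About 0.26 -- roughly one week in four, so a single good week proves
      # little on its own; the longer record is what made the case.
      ```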

  11. OT, someone suggested you might want to play a bit with Liu2011, which is being hawked in Phantasia these days: 10.1007/s11434-011-4713-7

    »Amplitudes, rates, periodicities and causes of temperature variations in the past 2485 years and future trends over the central-eastern Tibetan Plateau«

    p.

  12. “It’s all too easy to mislead people, including yourself (especially yourself!) with badly-done statistics.” Sadly only too true, “no warming since 2002” etc.

  13. No sophistication was needed as a simple scatter plot with all of the data would have told the story and possibly been enough to warn of the danger.

    The analogy in describing climate is simply using a long term data set rather than getting excited about the short term. Anyone with even a modest amount of experience in working with data, enough to understand the effects of noise and random variability, can see that very short term ‘trends’ are usually not significant.

    • There’s a more direct analogy too with the “launch” of our ghg emissions taking place outside of a previous “tried and tested” range, with experts warning about “O-Rings” in climate. Does the launch get canceled?

      It’s striking how different the approach to risk and uncertainty is.

      Also the experts in the shuttle case weren’t attacked by “skeptic” bloggers for being “alarmist” with demands for them to “prove” their “baseless” “O-Ring” scary stories. There were no blogs around to spout reassuring rubbish like “Temperatures in Florida were much colder during the last ice age and no shuttles crashed back then.”

  14. re. the design flaws in the O rings.

    Memos seem like a nebulous way to deal with this!
    I wonder what test-data was on file on the material properties of the elastomer used to make the O rings?

    Richard Feynman was able to prove by simple experiment, just using a glass of iced water, that the O rings lost their elasticity at around freezing point.
    This phenomenon is known as the “rubber-glass transition”, below which the rubber is unable to form a proper seal.

    The solution adopted after the Challenger disaster was to introduce heating elements into the seal area. Apparently there are now compounds available that don’t go through this transition until much lower temperatures.

    Either way, the poor specification of a safety critical component was just as much an issue as the decision to launch.

  15. There’s something worse than willful ignorance/misuse of available data: having data, but lacking any intuitive understanding of what it means. Recall the Columbia breakup disaster, due to foam impacts: subsequent testing revealed that it was indeed possible for pieces of foam to break wing joints. Scott Hubbard was quoted with one of the all-time genius statements: “That’s when it came home to me what 1/2 mv^2 means.” Apparently he missed the day they covered that in high school physics. Of course, there were pre-flight foam impact tests and even a program to analyze that data. If I recall correctly, nobody thought to run a test on a chunk of foam as large as the ones that did the actual catastrophic damage.
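
    For a sense of scale, here is a back-of-the-envelope evaluation of 1/2 mv^2. The mass and relative speed are assumed, illustrative values roughly in the range reported for the Columbia foam strike, not exact figures:

    ```python
    # Kinetic energy scales with the square of speed: KE = 1/2 * m * v**2.
    m_foam = 0.75      # kg of insulating foam (assumed, illustrative)
    v_foam = 220.0     # relative impact speed in m/s (assumed, illustrative)
    ke_foam = 0.5 * m_foam * v_foam**2
    print(f"foam block:  {ke_foam:,.0f} J")     # roughly 18,000 J

    # For comparison, a typical 9 mm handgun bullet (~8 g at ~360 m/s):
    ke_bullet = 0.5 * 0.008 * 360.0**2
    print(f"9 mm bullet: {ke_bullet:,.0f} J")   # roughly 500 J
    ```

    Even a soft, light material carries a lot of energy when the relative speed is a few hundred meters per second.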

    • Igor Samoylenko

      From the referenced newspaper article:

      Mr. Hubbard said the experiment showed that “people’s intuitive sense of physics is sometimes way off.”

      No shit!

  16. “Apparently he missed the day they covered that in high school physics.”

    C’mon, be fair–clearly he knew the equation; what he meant was that it was the first time that that knowledge–and specifically its implications for the Shuttle–struck home with real emotional impact.

    Would that “3C” warming were to attain similar impact for many more people.

  17. Basically, humans suck at assessing risk. The risks our hominid ancestors had to confront tended to manifest suddenly (predators, snakes, fire…), while slowly evolving risks usually didn’t pose a threat to evolutionary success. Risks to our ancestors also tended to behave consistently–a snake or a predator were nearly always threats, fire always burned…

    As a result of this, humans tend to underestimate the risks posed by objects, processes and occurrences that are familiar, and we grossly overestimate those which are unfamiliar or in some way spectacular.

    I actually met one of the Morton Thiokol engineers: not one engineer signed the waiver to allow the launch. The engineering manager was pressured to do so, to the effect of: “Take off your engineering hat and put on your manager’s hat.” He did, and that formed the basis of the decision.

    So where was the pressure coming from? Well, President Reagan had a reference to a teacher in space in his State of the Union speech to be delivered the next night. There are records of several calls from Blair House (Vice presidential mansion) to Marshall Space Flight Center prior to the decision. And of course, the extrapolation people tend to make is…”Well, we got away with it last time…” That’s the thing about reliability arguments that run along those lines–you tend to hear only the stories where things worked out.

    The moral of this story is that you have to follow validated risk management procedures and analyze ALL the data with scientific rigor. That and when in doubt, better to err on the side of caution.

    • Igor Samoylenko

      Good points about the evolutionary context behind our poor intuitive understanding of risk assessment, Ray. Add to this that we do not intuitively engage in scientific reasoning (it has to be learned) and have a poor intuitive grasp of modern science (quoting from Pinker, 2010):

      “[People’s] intuitive physics corresponds to the medieval theory of impetus rather than to Newtonian mechanics (to say nothing of relativity or quantum theory) (54). Their intuitive biology consists of creationism, not evolution, of essentialism, not population genetics, and of vitalism, not mechanistic physiology (55). Their intuitive psychology is mind-body dualism, not neurobiological reductionism (56).”

      It is no wonder we are struggling to deal with the extremely serious but perceived to be remote threats from AGW and look for any excuse to fool ourselves with comforting lies spread by the climate “sceptics”…

  18. Philippe Chantreau

    “lacking any intuitive understanding of what it means”

    This may be one of the biggest problems when trying to convey any scientific knowledge to the masses. I’m not sure why, but it seems that “instinctive” quantitative thinking has all but disappeared from the general population. And I’m not talking about advanced statistics or anything beyond basic arithmetic here. Perhaps the way they teach maths has a role.

    As Muon can confirm, SkS had people who could not be convinced that waste heat is too insignificant by orders of magnitude to have a noticeable effect, even after being shown the numbers.

    I have started doing a little experiment on occasion with understanding of basic physics: I ask people to imagine a railway car of a bullet train, travelling at various speeds from 50 to 200 mph and changing speed, with a pendulum suspended from the ceiling of the car. Then I ask them what the pendulum would do at a constant speed of x, or when changing speed one way or another. The answers I got so far, even from smart, technically inclined people, have been surprising.
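
    For reference, the textbook answer is simple (this is my own illustrative sketch, not part of the comment): at any constant speed the pendulum hangs straight down, and under acceleration a it settles at an angle arctan(a/g) from vertical, regardless of how fast the train is already going.

    ```python
    # Equilibrium deflection of a pendulum in a uniformly accelerating train car:
    # tan(theta) = a / g.  Speed never appears -- only the acceleration matters.
    import math

    g = 9.81                                  # m/s^2
    for a in (0.0, 0.5, 1.0, 2.0):            # accelerations in m/s^2 (illustrative)
        theta = math.degrees(math.atan(a / g))
        print(f"a = {a:3.1f} m/s^2  ->  pendulum tilts {theta:4.1f} degrees from vertical")
    ```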

    • Philippe: I see the lack of instinctive or intuitive reasoning skills every day among my reasonably bright physics students, who don’t seem to get that they walk at meters per second, drive at 10s of meters per second, but satellites move at kms per second. It’s not just provincial American anti-metricism; it seems rooted in a need to use a calculator to multiply – I kid you not – by 10. With a slide rule, you had to think; with a TI-83, it’s just button-pushing – and whatever they see must be correct to 18 decimals.

      [Response: A mathematician, an engineer, and a statistician are applying for the same job. One by one, they’re brought into a room and asked a single question: What is two plus two? The mathematician says “Four.” The engineer whips out his calculator, punches buttons, and says “4.0000000000 to ten decimal places.” The statistician says “What do you want it to be?”]

      • Igor Samoylenko

        I am not sure the mathematician’s answer will be that straightforward actually. :-)

        I remember when I studied applied maths at the Uni, everything became highly abstract very quickly and we basically stopped thinking in simple numbers (or mere “constants” as we called them in a derogative way) by year 2. Here is a good joke I can relate to (from here):

        “A mathematician and his best friend, an engineer, attend a public lecture on geometry in thirteen-dimensional space.
        “How did you like it?” the mathematician wants to know after the talk.
        “My head’s spinning”, the engineer confesses. “How can you develop any intuition for thirteen-dimensional space?”
        “Well, it’s not even difficult. All I do is visualize the situation in arbitrary N-dimensional space and then set N = 13.””

      • In my former life in the oil business, we told the same joke about a geologist, an engineer and a geophysicist. The pickers were always the smart ones, but somehow you always wound up working for an Aggie engineer (Aggie = Texas A and M).

      • Igor, as someone who dated a mathematician when I was in grad school, I can appreciate that joke.

    • I teach/tutor primary and secondary students rather than tertiary.

      And it’s all down to poor curriculum design and much, much less practice in straightforward manipulation of simple numbers right from the start. The number of high school students who show blank incomprehension when faced with complex topics like multiply by 100 or what is the value of 9 in the number 6941 is deeply depressing. Decimals, percentages and ratios may as well be brain surgery for many such students – and they’re not intellectually inadequate.

      Most depressing of all are the ones who are obviously extremely intelligent – and they’ve survived to year 10 on brain power alone. Trying to make tuition in extremely basic skills (fraction addition, anyone?) come together in a few months to allow them to progress to further education commensurate with their innate ability is a pretty well insurmountable task.
      //rant over, soapbox under arm, walks away

  19. Why were ANY parts with “Criticality 1” allowed? If a fatal accident happened because of something like that in my industry, it might very well end in jail terms, and certainly firings.

  20. “A new software program, described in the latest issue of Science, is designed to find the patterns in data that scientists don’t know to look for.

    “David Reshef, one of the scientists behind MINE, as the program is called, explains, “Standard methods will see one pattern as signal and others as noise. There can potentially be a variety of different types of relationships in a given data set. What’s exciting about our method is that it looks for any type of clear structure within the data, attempting to find all of them. … This ability to search for patterns in an equitable way offers tremendous exploratory potential in terms of searching for patterns without having to know ahead of time what to search for.” MINE compares different possible relationships (including linear, exponential, and periodic) and returns those that are strongest.

    “On MINE’s website, the program is available for download.”

    http://www.exploredata.net/

    quote from:
    http://www.theatlantic.com/technology/archive/2011/12/connecting-the-dots-finding-patterns-in-large-piles-of-numbers/250126/

  21. My daughter studied Mechanical and Space Engineering. Most of the cautionary “this is how NOT to do it” case studies were from NASA.

  22. The Challenger launch issue is not so different from global warming. As pointed out above, the engineer was asked to put on his management hat to make the decision. So he had to give more weight to the consequence of a failure to launch, and less to the consequences of a failure during launch.

    It is that way with AGW. Some see the huge risks of rapid warming, but others put on their “management” hat and see the consequences of the necessary action, and decide that these consequences outweigh the risks.

    I give no weight to the idea that they genuinely believe the risks are not there.

    • It is surely very different. In the case of the Challenger, you had a risk of catastrophic failure which would kill the crew and blow up an expensive vehicle. It was certain that if the seals failed, this would happen. The risk was defined, limited, known. There were also two choices: to launch or not to launch.

      AGW is very different. There are a range of possible consequences, quite wide uncertainty bounds, a great many different possible courses of action. Those affected are just about the entire population of the planet.

      The medical comparison is much more appropriate. There was reasonably strong evidence suggesting saturated fat consumption was linked to heart disease. It seemed plausible that substituting polyunsaturated fats would improve matters, but it turned out not to. It then seemed reasonable that cholesterol-lowering drugs would help, and they did, but the death rate went up because of unexpected consequences. Now we are proposing, in some countries, to dose the entire male population over 50 with statins. It seems plausible. Oddly enough, as we got to this point, we discovered that adding cholesterol-lowering compounds to the statin dose makes them less effective in terms of mortality, not more.

      Ray says that climate science has been around since, who knows, 1650. Medicine has been around since forever. It doesn’t matter how long it’s been around, what matters is the validity and plausibility of the conclusions.

      AGW at the moment seems to be a plausible hypothesis, with a well understood proposed mechanism, some evidence in its favor, and a mass of evidence that has no clear role and has yet to be integrated.

      One would expect, if moving rapidly to large scale remedial action in these circumstances, to encounter the law of unintended consequences rather forcefully.

      It’s not clear that people are unwilling to lower carbon fuel consumption. They probably could be persuaded to do it, and might even like it if they did it, but the problem is, no-one is proposing the really radical social measures that it would take to get there, so it’s very hard to get people behind it.

      [Response: Your analogy is NOT apt. The case for dangerous global warming is more like the case for the connection between cigarette smoking and lung cancer — it’s dead certain. Your reference to polyunsaturated fats is just rationalization because you simply don’t want to accept the truth.

      As for the claim that there is evidence against the dangerous nature of man-made global warming, bullshit. The only “evidence” is trumped-up pseudoscience that has been refuted time and time again. You latch on to it not because it has any validity, but because you so desperately need an excuse not to believe.]

      • @ michel

        Here comes Michel with yet more hand-waving/dissembling:

        “Now we are proposing, in some countries, to dose the entire male population over 50 with statins.”

        and

        “we discovered that adding cholesterol lowering compounds to the statin dose makes them less effective in terms of mortality, not more.”

        Crap. You conflate the Jupiter (& related subset) trials with news reports. Crestor showed survival benefits in specific patient populations also having one or more of several other risk factors. The other bit you mention was from a subset analysis not powered to draw those conclusionary statements. Try actually internalizing the studies you claim to quote from.

        And then there was this bit:

        “AGW at the moment seems to be a plausible hypothesis”

        The physics of greenhouse gases are well-established. That fossil-fuel emissions can contribute to the warming of the globe (with other forcings being static) is also well-established. These understandings rise to the level of theory, not hypothesis. Per the National Academies (p 44-45 of the linked pdf):

        “Uncertainty in Scientific Knowledge

        From a philosophical perspective, science never proves anything—in the manner that mathematics or other formal logical systems prove things—because science is fundamentally based on observations. Any scientific theory is thus, in principle, subject to being refined or overturned by new observations. In practical terms, however, scientific uncertainties are not all the same. Some scientific conclusions or theories have been so thoroughly examined and tested, and supported by so many independent observations and results, that their likelihood of subsequently being found to be wrong is vanishingly small. Such conclusions and theories are then regarded as settled facts. This is the case for the conclusions that the Earth system is warming and that much of this warming is very likely due to human activities.

        [emphasis added]

        You thus hand-wave a la Gilles.

      • Jebus, Michel, you don’t back down. You double down. OK, let me be clearer: Your medical analogy is just flat stupid. Medicine has hardly been practiced scientifically for a century. This is not epidemiology. It’s physics.

        Yes, there is uncertainty, but the consequences of climate change are already significant. Continuation of our current course will lead to severe consequences with near certainty. Quit arguing from ignorance and look at the fricking evidence.

      • Whenever somebody employs “the science is not settled” in an argument, I switch immediately to “I’m being conned” mode. Because no climate scientist of any significance ever said the science was settled. So I conclude they are saying “the science is not settled” because they are trying to bamboozle me into believing somebody of significance said it was.

        So we’re done, Michel. You’re on the same scrap heap as those emails from Nigeria.

      • JCH:

        Whenever somebody employs “the science is not settled” in an argument, I switch immediately to “I’m being conned” mode.

        Or lied to. Because “science is not settled” somehow is transformed into “there is no problem”, rather than “the problem may be a bit worse or a bit less worse than science suggests”. In other words, “the science is not settled” is transformed into “the science is settled and there’s no problem”.

      • And just how the bloody blazing hell is ceasing to use the atmosphere as a dump for combustion products (an activity largely insignificant prior to the last couple of centuries) ‘subject to the law of unintended consequences?’ Are we apt to find out that clean air is bad for us?

  23. I think people tend to underestimate just how difficult the mission of the shuttle as a “fully re-usable” spacecraft was.

    Consider that the shuttle had to reach its orbiting altitude and then acquire sufficient tangential velocity to achieve a stable orbit. It then had to shed all of that kinetic energy (several km per second of speed) plus all the gravitational potential energy turned into kinetic energy (same order of magnitude) and actually land. It then had to be sufficiently intact that it could be maintained for reuse within a couple of months.

    The resulting thermal protection system was just one of the things that made the shuttle one of the most complicated machines ever built. The shuttle had over 10000 critical parts that could cause catastrophic failure if they failed at the wrong time. Even if you achieved six 9s of reliability with each of these, you’d still expect roughly 1 failure in 100 launches. The shuttle had 2 failures in ~135 launches. Ultimately, however, the complexity of the program was its downfall. It was simply too expensive and difficult to maintain safety.
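
    The arithmetic behind that “roughly 1 failure in 100” figure is simple to check. A quick sketch, using the same round numbers as above (and assuming independent parts, which is of course a simplification):

    ```python
    # Probability that at least one of N independent critical parts fails,
    # given a per-part failure probability p.
    n_parts = 10_000
    p_fail = 1e-6                      # "six nines" = 0.999999 reliability per part

    p_launch_failure = 1 - (1 - p_fail) ** n_parts
    print(f"per-launch failure probability: {p_launch_failure:.4f}")   # ~0.01
    print(f"roughly 1 launch in {1 / p_launch_failure:.0f}")           # about 1 in 100
    ```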

    The lessons of the Rogers Commission and of the CAIB certainly should include the dangers of hubris. However, they should also have included the fact that spaceflight is frickin’ hard. This is the most inhospitable environment humans have ever operated in. If we focus merely on the human failures that led to these accidents, we ignore the challenges that made such oversights much more likely and we doom ourselves to future disasters.

    • Concur. And the major design revisions for post-Shuttle spacecraft perhaps are models for mitigating AGW: Just do the biggest bang for the buck, obvious things, instead of waiting for / trying to find complex solutions:
      1. Put the spacecraft on top of the rocket so stuff falling off the rocket can’t hit it.
      2. Don’t use wings.

      • Of course the simplest and surest mitigation for AGW is to burn less fossil fuel, but that doesn’t seem to be a solution people are willing to consider.

    • I think inhospitable is a bit of an understatement. More like, nearly any failure = certain death.

  24. It is well known in the safety equipment industry that the original solid fuel booster used asbestos putty behind the O-rings. The inability of the supplier to get workers’ compensation insurance for the laborers caused them to discontinue the product. This led to the Challenger disaster. It is also well known that the Columbia foam problems were caused by the EPA, which ruled that the foam could no longer use Freon gas as blowing agent, due to the ozone hole scare. The foam blown without fluorocompound blowing agent was vastly inferior.

    Our tax dollars at work…

    [Response: I think you need to read the Rogers report, and revise your beliefs.]

    • Gavin's Pussycat

      [Response: I think you need to read the Rogers report, and revise your beliefs.]

      That’s a rather lukewarm response to a guy on a blog insulting the memory of the dead for fun and profit… you can do better, T.

      I see the hand of Junkman Milloy behind this, who has also claimed that the World Trade Center would still be standing if not for the asbestos ban. These folks literally have no shame; shame needs to be beaten into them.

    • Otherwise respectfully, do you have a take on Nobel Laureate Feynman’s paragraph six here? http://www.ralentz.com/old/space/feynman-report.html

      • Not sure what you are asking for comment on. Gee, managers paint a rosy picture. I’m shocked. We still run into this. People try to ignore inconvenient data. We need science to tell us that ain’t science.

      • AS:

        Finally, if we are to replace standard numerical probability usage with engineering scientific (i.e. professional) judgment, why do we find such an enormous disparity between the right-wing Republican estimate and the judgment of the scientists? It would appear that, for whatever purpose, be it for internal or external consumption, the political leadership of the Republican party exaggerates the reliability of its ideology, to the point of fantasy.

        Sounds like Republican science-rejectionist politicians vs. scientists working in the field, to me.

  25. Our tax dollars at work…

    This phrase makes me think of all the tax dollars wasted on Michael Moon’s public school education …

  26. It is also well known that NASA faked the moon landings, Elvis is alive and well and living in New Mexico, and that cars could get 120 mpg if only Detroit hadn’t bought up and buried the patents ….

  27. The original putty did use asbestos;
    the original supplier did go out of business;
    the replacement was proprietary and unpredictable—that’s cited in the public report:
    ocw.mit.edu/courses/aeronautics-and-astronautics/16-891j-space-policy-seminar-spring-2003/readings/challengerlessons.pdf

    Blaming the workers’ compensation industry? Not in the report.
    Blaming EPA? Not in the report.
    Those are political spin, widely spread and thickly applied.

    You don’t have to know everything about something like that; you have to know it won’t fail under a known expected condition, for sure.

    Putting a sample in ice water and seeing it brittle enough to break was a simple check, easy to make.

  28. > simple … easy
    Wrong. Looking further, this is a very good paper on several different cases:

    http://www.scribd.com/doc/19287909/The-Golem-at-Large-What-You-Should-Know-About-Technology-Harry-Collins-and-Trevor-Pinch

    ” CONCLUSION
    … the prevailing story … is too simple. There were long-running disagreements and uncertainties about the joint but the engineering consensus by the time of the teleconference was that it was an acceptable risk…. . We are also now in a better position to evaluate another misconception – the one spread, perhaps inadvertently, by Richard Feynman: that NASA were ignorant of the effect of cold on O-rings. At the crucial teleconference this point was considered in some detail. NASA representatives had researched the problem extensively and talked about it directly with the O-ring manufacturers. They knew full-well that the O-ring resiliency would be impaired, but the effect was considered to be within their safety margins.”

  29. The RAAF accounting for the planes that didn’t come back reminds me of one of my own first insights on the importance of what you leave out. What I think of as ‘survivor error’.

    In Australia sharks are a bit of a national obsession (as things that can chomp you in half tend to be!). I recall in the ’70s various survivor accounts that so-and-so the surfer punched Mr Great White or Ms. Grey Nurse bang on the tip of the snout and lived to tell the tale; therefore, the popular wisdom ran, you should always wallop our big-and-bitey friends firmly on the snout should they become too intimate.

    Not only was there no account given from other survivors who hadn’t pursued the Joe Frazier option, and no attempt made to prove that such a group is considerably smaller or barely exists, but it occurred to me that for all we knew 40 of the last 50 people who were eaten had also bravely biffed away…

    (Yes, I’m aware the Mythbusters found this belief ‘plausible’ for punching the gills. I’m talking about drawing big conclusions from a small amount of dramatic copy.)

  30. Carl Sagan reported being “played with” by a dolphin while swimming with one of Lilly’s subjects.

  31. Philippe Chantreau

    Marco, I see nothing in that wiki that resembles a real attack in the wild. At most, playful behavior, but these occurrences seem to have been mostly inadvertent. Captive animals are a different story.

  32. Michael, I guess a large part of what really gets me is that the actual physics is well established.

    We know the absorption spectra of carbon dioxide. We are essentially able to ground this in the first principles of quantum mechanics. We are able to measure it in the lab. We understand Kirchhoff’s law and the Stefan-Boltzmann equation. We are able to image carbon dioxide from satellites to within 2 ppm in the atmosphere because of its absorption spectra. We’ve predicted and measured the changes in upwelling and downwelling spectra. We know the Clausius-Clapeyron relation, and that water vapor partial pressure increases roughly as an exponential function of temperature. We know that water vapor is likewise a greenhouse gas. We’ve measured its increase and can see the change in the spectra. The models themselves are based upon well-established physics, from radiative transfer theory and mechanics to fluid dynamics and thermodynamics.
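
    As an aside, the “roughly exponential” growth of water vapor pressure with temperature is easy to see numerically. A small sketch of my own, using the standard Magnus/Bolton approximation to the Clausius-Clapeyron relation (temperature in °C, pressure in hPa):

    ```python
    # Saturation vapor pressure via the Magnus/Bolton approximation.
    # The takeaway: roughly 6-7% more water vapor capacity per degree C of warming.
    import math

    def e_sat(t_celsius):
        return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

    for t in (0, 10, 20, 30):
        growth = 100 * (e_sat(t + 1) / e_sat(t) - 1)
        print(f"{t:2d} C: e_s = {e_sat(t):5.1f} hPa, +{growth:.1f}% per degree C")
    ```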

    The models are incomplete in the sense that it is possible to model things in a more detailed manner, at a finer level of resolution, but this will always be the case. Meanwhile, we test them against the current behavior of the climate system, temperature, precipitation, atmospheric and oceanic circulation, extreme weather events, and the paleoclimate record.

    But then there are the consequences.

    The models also show us that we should expect drought to spread. It has. We expect the continental interiors to dry out. They have. We expect the Hadley cells to expand. They have. We expect the dry subtropical regions to move northward, resulting in further drought. They have. Back in the 1950s, drought-like conditions covered about 25% of the world at any given time. Now it’s 30%. By mid-century it will be more like 50%.

    There are the heat waves that we have seen repeatedly in Europe, the probability of which in any given year has likely doubled or tripled. There is the heatwave we saw in Russia, something we should see only once in 300 years, but which is projected to become a common occurrence by mid-century.

    There is the flooding. In Iowa, my home state, there were two 500-year floods within less than 20 years of each other. There was the 1000-year flood in Tennessee. The flooding in Pakistan that covered 1/5th of the country. The flooding in Central America, five feet in ten days. The simultaneous flooding in Thailand that covered 1/3rd of the country. Thailand is the second largest exporter of rice. The flooding in Australia that covered an area equal to France and Germany combined. We were actually able to measure, even map, how sea levels had dropped and how the water was now up on land.

    But the drought is the worst. With drought comes famine. Yet rising sea levels will also inundate fertile river deltas, salting the earth. Rising levels of carbon dioxide will acidify the oceans. This is already making it necessary to add carbonate to lower the acidity so that juvenile clams might reach adulthood. This is already eating away at our coral reefs, the fertile oases that are responsible for so much of the ocean’s bounty. The acidity is even proving to be detrimental to the development of juvenile fish.

    No matter how much evidence accumulates, how solid the scientific case or the strength of the scientific consensus, it is always possible to play a game of Cartesian doubt. Whether it be due to financial inducements, political ideology or sheer pig-headedness, someone can always make some excuse not to accept the science, no matter the strength of the case.

    But in the meantime people will starve, and more warming will be put into the pipeline by the continued combustion of fossil fuels, as once there is a net positive radiation imbalance, the Earth must gradually warm, over decades, before a new equilibrium is established. And due to the investments that are made in traditional and nontraditional fossil fuels, we will be committed to even greater warming, severe heatwaves, unprecedented flooding, widespread droughts, ever-present famine and poverty, in a world that will not fully recover for more than 100,000 years, more than ten times as long as anything we might call human civilization has existed on the face of this Earth.

  33. Kevin:

    My car does get 120 miles per gallon. No conspiracy, no bull. Just a Chevy Volt!

  34. Gee, I am always surprised when I hear of ‘unusual’ floods or droughts in Australia – it’s more of the ‘norm’ there; some droughts were famous enough to even get historical names!

    These things were remembered by the oldtimers, but now everyone lives in the city… and doesn’t care and can’t remember if it rained last week.

    1838-39 drought of the century. Stock were all but exterminated.
    1841 broke this drought with the champion flood of Queensland
    1852 flood swept Gundagai away drowning the inhabitants
    1901-2 The Federation drought – Australian wide
    1937 to 1945 The World War II drought – Australian wide
    1958-1968 Australia-wide drought (Australia’s longest) (40% drop in the wheat harvest, a loss of 20 million sheep in the last two years alone)

    others: “Some of the worst droughts on the Australian continent occurred in 1895-1903, 1911-1916, and in 1918-1920. Later, there have been some pretty bad droughts in 1982-1983, 1995-1996, and 2002-2003.”

    Some info here:
    http://home.iprimus.com.au/foo7/droughthistory.html
    http://www.bom.gov.au/lam/climate/levelthree/c20thc/drought3.htm#content-block