A reader recently asked about a news item regarding recent results from the CryoSat-2 satellite mission. One of its purposes is to measure sea ice thickness throughout the Arctic. By combining that with data for sea ice concentration, one can estimate the total volume of Arctic sea ice.

The news item is titled “Arctic Sea Ice Up from Record Low.” It tells of findings reported at the recent meeting of AGU (the American Geophysical Union), where a team studying the CryoSat-2 data announced that Arctic sea ice volume had increased substantially from October 2012 to October 2013. In 2012, October sea ice volume averaged about 6000 km3, but in 2013 that figure rose to about 9000 km3 — a 50% increase in a single year.
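The quoted figure is easy to check; a minimal sketch using the approximate volumes from the announcement:

```python
# Quick check of the quoted figure: October-average Arctic sea ice volume
# rose from about 6000 km^3 (2012) to about 9000 km^3 (2013).
v_2012, v_2013 = 6000.0, 9000.0   # km^3, approximate values from the talk
pct = 100.0 * (v_2013 - v_2012) / v_2012
print(pct)  # 50.0 -- the "50% increase in a single year"
```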

Unfortunately the CryoSat-2 data only go back to 2010, and even more unfortunately the satellite isn’t expected to have a very long lifetime (only a few more years). That means that in order to place the recent changes in perspective, we need sources of information besides just CryoSat-2.

This can be especially tricky because no data set, including CryoSat-2, is perfect. As stated in Laxon et al. (2013, GRL, 40, 1–6, doi:10.1002/GRL.50193),

The absolute thickness estimates from CS-2 may be subject to biases from a number of different sources. Our assumption that the radar penetrates to the snow-ice interface is still the subject of investigation and may also introduce a bias into our thickness estimates [Willatt et al., 2011]. Additional uncertainties may be introduced due to uncertainties in our assumed snow loading and ice/water densities, employed when converting freeboard to thickness. For this reason, it is important to compare our CS-2 retrievals with other sources of co-incident ice thickness data. We use three independent data sets that allow us to verify the CS-2 retrievals over a wide area, including both first- and multiyear ice, and over an entire ice growth season.

One source for more perspective is PIOMAS, which estimates ice volume by combining observations with a computer model of the ice. Here’s a graph of the two data sets over the 2010/2011 and 2011/2012 ice-growth seasons:


Clearly, the PIOMAS data are consistently lower than the CryoSat-2 data. But the important thing is that they both show similar patterns of change over time, although there are differences even in the patterns, including the seasonal changes.

Nonetheless, the PIOMAS data are sufficiently representative of the changes to enable us to get that perspective we’re seeking. Here’s the PIOMAS data for October of each year from 1979 through 2013:


Note that PIOMAS also indicates a substantial increase in October Arctic sea ice volume since last year, an increase of 42% (rather than the 50% suggested by CryoSat-2). To put that in perspective we need to pay attention to what led up to 2013. The overall decline is evident.

Last year’s difference between the raw data value and the smoothed value (shown in red in the graph) is the largest positive difference on record, although not by much. Some might regard this as raising the possibility that Arctic sea ice loss is starting to level off, but the most recent October value really is well within expectation even given the existing trend, so it’s certainly too early to draw such a conclusion.

One conclusion we can draw, with high confidence, is that the news announcement has been subject to “spin” by fake “skeptics” of global warming.

25 responses to “CryoSat-2”

  1. Of course, the use of percent to indicate the increase is somewhat misleading, as every time the previous year sets a new record, you have a smaller denominator to work with.
    “Last year was down 20% from the previous year.”
    “This year is up 25% from last year!”
    “Recovery!” [Just don’t look at the actual volumes, where this year matches two years ago…]
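The asymmetry this commenter describes is simple arithmetic; a minimal sketch with hypothetical numbers:

```python
# Percentage changes are not symmetric, because each change is measured
# against a different baseline (the denominator shrinks after a drop).
v0 = 10.0          # volume two years ago (arbitrary units)
v1 = v0 * 0.80     # "down 20% from the previous year"
v2 = v1 * 1.25     # "up 25% from last year!"
print(v1)  # 8.0
print(v2)  # 10.0 -- exactly back to the value of two years ago
```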

  2. Please put the bottom of the graph at 0 on the y axis. At first glance 2012 looks like close to zero, when it’s actually above 4 (km3). Seems to me this is especially important when “% increase/decrease” is part of the dialogue.

    [Response: No, thank you.]

    • Be patient; Tamino will be putting the zero in within just a few years at this rate.

      • Well I have to admit I’m waiting for the first person to claim a greater than 100% recovery six months after a record low. That day is getting close – and I know I’d collect if only I bet.

    • At first glance 2012 looks like close to zero…

      Newsflash – that’s why axes have numbers. The convention is to number such that the full range takes up most of the scale – not to start from zero unless it is a bound for the range.

      This is all in the first lesson of how to construct and interpret a graph.

    • dikranmarsupial

Finessing the axes of a graph to de-emphasise the variability is an old trick (by which I do not mean “mathematical device”). Padding out the graph with meaningless blank space is just another variation. Most mathematical/statistical packages will automatically select the axes in a sensible manner; you should have a good reason not to follow those defaults, as there are good reasons for the default behaviour.

[Response: Although I chose not to, there is value in starting the y-axis at zero. First of all, these are what is called “ratio data” (meaning that there is a meaningful zero point); second of all, people actually *do* tend to talk about ratios (50% increase! — personally I usually don’t care for such comparisons). Finally, there is a realistic prospect we could hit zero in the foreseeable future.]

My co-worker, who works as a credit rating analyst (he is a statistician), and I have been joking this way: last year, the company had, let’s say, a $1000 loss. This year, the loss has increased to $2000. Wow, that’s a 100% increase in profit ;):D

    • I was looking at the local stuff and I came across this.

      I’ve always thought that at least on occasion it would be good to remind people of what the *actual temps* are, rather than anomalies and variations.

      A good image gives you a chance to discover things for yourself; the scale here puts things in perspective for me.

      I would wager 80% of denialists who comment online don’t know what the absolute MST is, and so don’t have any sense of the significance of the numbers they are throwing around.

  3. I was one of the 50 or so people in the session at the Fall AGU 2013. Thought it may be of interest to see what I live tweeted at the time.

    .@rachel_tilling on #CryoSat sea ice data and volume changes 2010-2013. Lovely tribute to colleagues Laxon and Giles at beginning #AGU13

    .@rachel_tilling good talk for non-specialists and bit of a master class in how #CryoSat sees sea ice and they derive ice thickness #AGU13

So she elaborated the points Tamino makes about how tricky it is to derive ice thickness, and then volume (as Laxon said in his abstract).

    .@rachel_tilling first look at #Arctic sea ice pack as a whole. Shows 2011 has thinnest ice. Sea ice Extent does not equal thickness #AGU13

    Now a key point:
    .@rachel_tilling #CryoSat data shows large Arctic sea ice extent changes over last few years do not map onto volume decreases #AGU13

So obviously the dynamics of the ice pack are very important.

    .@rachel_tilling total #arctic sea ice volume decreased ~400 km3 2010-2012. [a key point is extent is not equal to volume] #AGU13

    Now I *chose* not to live-tweet the point about the volume increasing in the very late data because I thought there was no way I could live tweet the complexity of what Tilling said into 140 chars without it being a potential train crash allowing easy misinterpretation of the complex points she made, and which Tamino notes above.
These points are of course also linked to my tweets above.

    Dr Sinead Farrell who was chairing the session live-tweeted this from the stage.
    @rachel_tilling latest update from #CryoSat-2 suggests increased volume of #arctic sea ice in Oct ’13 (consistent with increased extent)

And if you look at the BBC web site you can find this report by Jon Amos:
    “Esa’s Cryosat sees Arctic sea-ice volume bounce back”
    Worth 5 mins of your time.

And if you haven’t the time / inclination to read the BBC link, then here is how the top-drawer sea ice expert Dr Don Perovich is quoted:
    “Dr Don Perovich is a sea-ice expert at Dartmouth College, US.

    He said Cryosat’s data tallied with observations made by other spacecraft.

    “In previous summers, some of the [multi-year ice] migrated over to the Alaska and Siberia areas where it melted. But this past summer, it stayed in place because of a change in wind patterns. And so there’ll likely be more multi-year ice next year than there was this year,” he told BBC News.”


Rachel Tilling made all these points in her excellent talk IMO. But it is bordering on the inevitable that some would attempt to distill the CPOM group’s excellent work into “everything is getting better”.

As Perovich says, “on this year’s report card [2013] Arctic sea ice gets a ‘D’”

    Mark Brandon

  4. We should be very thankful that we have PIOMAS. Imagine if all we had was a few years of ICESat and now CryoSat – what possibilities for nonsense that would present!
    And as we do have PIOMAS, are short term percentage changes the best values to be presenting? I don’t think so. October 2013 – the iciest October since 2010. ‘Wow! That’s dramatic news!!’ The 31st iciest October since the start of the satellite era (35 years ago). ‘In the top 40 iciest Octobers. I thought it would be!!’
    And if percentages are really needed, the relevant one is surely 65% ice loss since 1979. At that rate, the Arctic October will be ice-free by 2030. Well that’s more reassuring than calculating it from last year’s October ice volume. October 2012 gave a rate yielding an ice-free October by 2024.
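The extrapolation in this comment is easy to reproduce; a minimal sketch, assuming a straight-line continuation of the average loss rate (the 65% figure is the round number quoted in the comment, not the exact PIOMAS series):

```python
# Back-of-envelope linear extrapolation: if ~65% of the 1979 October
# volume was lost by 2013, when does the remainder hit zero at that rate?
years = 2013 - 1979            # 34 years of satellite-era record
frac_lost = 0.65               # fraction of the 1979 volume gone by 2013
rate = frac_lost / years       # average loss per year (fraction of 1979 volume)
frac_left = 1.0 - frac_lost
years_to_zero = frac_left / rate
print(2013 + years_to_zero)    # roughly 2031 -- close to "ice-free by 2030"
```

Basing the rate on the single-year 2012-to-2012 drop instead of the 34-year average is what yields the scarier 2024 figure; that sensitivity to the chosen baseline is exactly the commenter's point.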

I find the collapse shown by PIOMAS even more staggering when zero is at the bottom.

    • Good point. That graph is much more dizzying (or is it the caffeine high?). It puts the loss into a more sobering perspective. However, I don’t disagree with Tamino representing the data the way he did—he almost always sets the upper and lower bounds to not leave excessive space at either end, and anyone who knows how to read a graph should be fine with this.

But do the fake skeptics actually look at PIOMAS or ice mass at all? All I read about in comment fields everywhere is the ice extent. Neither do they care much for the amount of multi-year ice, which is dropping off a cliff too. As jiminy points out, too, I am also expecting there to be a “100% recovery” story in the Daily Mail or something in the near future. :) – No doubt when it’s all gone, it must have been eaten by the polar bears, and then they all died from brain freeze.

  7. Tamino said: “Unfortunately the CryoSat-2 data only go back to 2010, and even more unfortunately the satellite isn’t expected to have a very long lifetime (only a few more years).”

OK. This is really only tangentially on topic, but I’d like to highlight a trend I’m seeing in satellites–that of shorter and shorter mission lives. This trend is being driven by the need to contain cost–space is a nasty place, and surviving for long periods in space requires lots of testing and quality assurance. So, I’m seeing more and more 1-2 year missions, even down to 6 months. And for cubesats, etc., we’re talking about missions in days or weeks. It seems to me that this is going to pose a huge obstacle to interpreting these data, so at best you’ll have a lot of wasted orphan datasets, or at worst, you’ll have lots of datasets that can be cherrypicked to obfuscate a la mode de Lindzen. I’m wondering if folks have thought about how to deal with this situation.

    • Overlap and calibration?
      I don’t know what the cost tradeoff is per year of (expected) life, but if launching gets cheaper from private sector input, why not just plan on a four year life and send up the replacement after 3 years?

      • Generally in science missions you apply for funding for a mission that will last X years, and use the funding to pay for mission design, operations, spacecraft construction, and launch — in that order.

        Given that you only have funding for X years, you design a spacecraft that will last X years plus a tolerance, because to design a spacecraft that would last a very long time would be a waste if you can’t operate it anyway.

        As a funding manager, you could fund a mission for $Y dollars that will last 10 years. It gets in the news at first light, then fades and is just a quiet workhorse for the rest of its life. Or you could fund two missions at $Y/2 dollars each that will last 6 months each and get you in the news twice as much. The better the PR, the more funding the government allocates you next year. What do you do?

      • That sounds like a very good idea.

      • mgardner,
        Yes, overlap and calibration is the usual technique. However, the low cost of these birds means a lot of them are being put up by universities with little or no coordination. And 4 years now is in the Class B range. Most of these missions are 6 months or less. On the positive side, you don’t have to worry much about wearout or total dose failures. I’d also point out that a launch can slip more than the mission length of these birds. Funding is so tight now, pretty soon we’ll be trying to launch our satellites on Estes rockets.

      • The ground operations and data archiving for satellites drive the short missions. There are satellites that NASA would gladly turn over to others to get out from under those costs.

The Orbiting Carbon Observatory has to be one of the shortest ones, never even reaching orbit. One could even entertain the thought that the mission was sabotaged, as it was definitely going to deliver bad news about the fossil fuel industry. Who knows, perhaps there is some lobbying to keep a replacement from ever being launched, with cuts in budgets everywhere.

      • Sabotage seems unlikely; launch failures are very common even when everyone is trying to succeed. Lobbying is not unlikely at all.

  8. John,
    No sabotage needed. The launch vehicle has a poor success record. The lowest bidder and all. We now say OCO stands for Ocean Carbon Observatory.

  9. Tamino,

I’ve previously blogged on why there was no rebound in 2013: the definition of “rebound” is “to spring back, as from a sudden impact,” and that is not what happened.

Using PIOMAS data, 2013 showed no over-winter volume rebound: comparing May 2013 with May 2012 shows that May 2013 PIOMAS volume was only 0.15k km^3 above 2012, yet by June 2013 it was a substantial 1.53k km^3 above 2012. What happened in 2013 was that the Spring Volume Loss failed to occur as aggressively as in the other post-2010 years, and this then had knock-on effects on volume throughout the summer.

    There are no grounds to reach the conclusion that 2013 was either a rebound or that it signals the start of an increase of sea ice volume.

    Open water formation efficiency is a measure of how effectively open water forms for a given thinning. It can be viewed in terms of the percentage of open water formed as a function of April thickness.

[Plot: percentage of open water formed as a function of April thickness; after Keen et al., 2013, “A Case Study of a Modelled Episode of Low Arctic Sea Ice,” figure 2.]

    The volume increase of 2013 still means that much of the pack is at present under 2m thick, and further thickening will mean the pack is only marginally thicker than April last year (around 2m thick).
[Thickness maps: Jan 2013 and Jan 2014.]

Note that most of the thickening is in the Central Arctic.

Whilst a re-run of 2012’s record loss is unlikely without very favourable weather, typical ice thickness will still be in the region of most rapid change in the above plot of percentage of open water formed as a function of April thickness (first plot in this comment). This means substantial open water production is still likely next year; I suspect not record-setting, but on a par with the post-2007 years.

Anyone spinning 2013 as the start of a rebound is effectively donning a dunce’s cap which bears the legend “Ignore me. I know nothing about Arctic sea ice.”

  10. Michael Whittemore

    Great post :)