Easy Question

A question was asked recently:


Sorry, it’s about the Arctic, not the Antarctic, but the Arctic thread seems to be already closed for commenting. There’s a thing I can’t understand about Arctic sea ice: how come, according to NSIDC, the maximum sea ice extent in 2013 was the 6th lowest in satellite history, while JAXA shows it to be higher than the 2000’s (not even the 2010’s) average?

http://nsidc.org/arcticseaicenews/

http://www.ijis.iarc.uaf.edu/seaice/extent/Sea_Ice_Extent_L.png

The short answer is: in the JAXA plot, the line labelled “2000’s Average” isn’t the 2000’s average.


That’s because JAXA data don’t start until June of 2002. Therefore they don’t record the maximum extent in 2000, 2001, or 2002.

We can compare extent figures from NSIDC and JAXA:

[Figure: NSIDC and JAXA sea ice extent compared]

They reach clearly different annual maxima, with NSIDC reporting more ice at maximum than JAXA, but their annual minima are about the same. If we look at the difference:

[Figure: difference between NSIDC and JAXA extent]

we see there’s a strong seasonal pattern. I don’t know, but I suspect that’s because there are regions which NSIDC includes but JAXA doesn’t (perhaps Hudson Bay and/or the Great Lakes?). But that’s not the reason for the puzzlement.
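For anyone who wants to poke at this themselves, here’s a minimal sketch of the difference-and-seasonal-pattern calculation in Python with pandas. The file names and column names are placeholders, not the actual NSIDC or JAXA data products.

```python
import pandas as pd

# Hypothetical daily-extent files; the file and column names are assumptions,
# not the actual NSIDC/JAXA products.
nsidc = pd.read_csv("nsidc_daily_extent.csv", parse_dates=["date"]).set_index("date")["extent"]
jaxa = pd.read_csv("jaxa_daily_extent.csv", parse_dates=["date"]).set_index("date")["extent"]

# Difference on the dates both records cover.
diff = (nsidc - jaxa).dropna()

# Average the difference by calendar month to expose the seasonal pattern.
seasonal = diff.groupby(diff.index.month).mean()
print(seasonal)
```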

If we compute the annual maximum extent for each record we get this:

[Figure: annual maximum extent from NSIDC and JAXA]

Note that although they’re offset from each other, by and large they show the same pattern of changes. The horizontal lines show the averages during the time span plotted, but that’s not the “2000’s Average” because it doesn’t include 2000, 2001, or 2002.
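A sketch of the annual-maximum calculation, under the same hypothetical data layout as above (for JAXA one would drop 2002, since that record only begins in June and so misses that winter’s maximum):

```python
import pandas as pd

# Hypothetical daily-extent file (same assumed layout as the sketch above).
nsidc = pd.read_csv("nsidc_daily_extent.csv", parse_dates=["date"]).set_index("date")["extent"]

# Maximum extent in each calendar year; the mean of these over the plotted
# span is the horizontal line in the figure.
annual_max = nsidc.groupby(nsidc.index.year).max()
print(annual_max)
print("mean of annual maxima:", annual_max.mean())
```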

We can also plot the ranks of the annual maxima:

[Figure: ranks of the annual maxima]

which shows that this year did indeed reach the 6th-lowest maximum in the NSIDC record, but the 5th-lowest in the JAXA record.
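Ranking the annual maxima is a one-liner once they’re computed; a sketch with the same hypothetical file:

```python
import pandas as pd

# Hypothetical daily-extent file; compute annual maxima, then rank them.
nsidc = pd.read_csv("nsidc_daily_extent.csv", parse_dates=["date"]).set_index("date")["extent"]
annual_max = nsidc.groupby(nsidc.index.year).max()

# Rank 1 = lowest annual maximum in the record.
ranks = annual_max.rank(method="min").astype(int)
print(ranks.loc[2013])             # e.g. 6 would mean the 6th-lowest maximum
print(ranks.sort_values().head(6))
```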

If we compute all maxima (which includes a lot more data from NSIDC) we see the trend in maximum extent:

[Figure: annual maxima over the full records, with trend]
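For the trend itself, a simple least-squares fit to the annual maxima would look something like this (again with hypothetical file names; ordinary linear regression, not necessarily the method behind the plot):

```python
import numpy as np
import pandas as pd

# Hypothetical daily-extent file; annual maxima as before.
nsidc = pd.read_csv("nsidc_daily_extent.csv", parse_dates=["date"]).set_index("date")["extent"]
annual_max = nsidc.groupby(nsidc.index.year).max()

# Ordinary least-squares trend in the annual maxima.
years = annual_max.index.to_numpy(dtype=float)
slope, intercept = np.polyfit(years, annual_max.to_numpy(dtype=float), 1)
print(f"trend: {slope:.4f} (extent units) per year")
```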

14 responses to “Easy Question”

  1. In addition to Hudson Bay and the Great Lakes, the role of the Baltic Sea is unclear, to me at least. NSIDC shows it on the map.

    The ice cover there is quite variable: the maximum can be 410 000 km2, the lowest observed only 40 000 km2.

    This year the max was approx. 130 000 km2, somewhat below the long-term average of 200 000 km2. Reported ice cover there includes anything more than 10 cm thick, and is based on merged satellite, aircraft and ship observations.
    http://www.itameriportaali.fi/en/itamerinyt/en_GB/jaatilanne/

  2. Both measures are based on different algorithms, sensors and ground resolution and concentration threshold. Therefore there is a systematic bias between methods.

  3. Thanks for this post – and to the person who asked the question. Have been wondering the same thing myself.

  4. “Both measures are based on different algorithms, sensors and ground resolution and concentration threshold. Therefore there is a systematic bias between methods.”

    Yes. The IJIS plot also has 1980s and 1990s averages on it. Like the early 2000s years, these don’t come from the AMSR-E satellite sensor and associated algorithm(s). So they’ve stitched together a variety of sources.

    From their website:

    • Jan. 1980 – Jul. 1987 : SMMR
    • Jul. 1987 – Jun. 2002 : SSM/I
    • Jun. 2002 – Oct. 2011 : AMSR-E
    • Oct. 2011 – the present : WindSat

    IIRC AMSR-E failed, which is why they switched most recently?

    NSIDC archived AMSR-E data but for their sea ice extent calculations …

    “Scientists at the Goddard Space Flight Center have combined the SMMR and SSM/I data sets to provide a time series of sea ice data spanning over 30 years”

    So they have more continuity in the sense that they’ve only stitched together data from two sensors, and my understanding is that the two sensors are very similar.

    IJIS sacrificed a certain level of continuity by concentrating on AMSR-E data, which I believe has finer resolution than SSM/I? I think the plots of their 1980s, 1990s and early 2000s average extents should be interpreted as more of a rough guide than as ultra-precise, given that it’s derived from data from three sensors each with different characteristics being stitched together.

  5. “Both measures are based on different algorithms, sensors and ground resolution and concentration threshold.”

    The last isn’t quite right; they both use 15% as the concentration threshold. I think it’s the University of Bremen plots that use 30%? Something like that. I do know that not all teams use the same threshold, but in the case of IJIS and NSIDC they do.

  6. ” given that it’s derived from data from three sensors each with different characteristics being stitched together.”

    Sorry, the same two as NSIDC uses, dominated by SSM/I (15 years). It’s the use of AMSR-E data starting in 2003 that causes a possible discontinuity with the past averages.

  7. Apologies for this off-topic question, but comments are closed on the relevant blog page.

    I have reproduced the 2-box model from the ‘Once is not enough’ post – it looks great, a very good match to the GISS series. That post describes the forcing function as exponentially smoothed with two time constants, 2 and 26 years. However, when I learned how to do exponential smoothing, it had to be done with a smoothing factor between 0 and 1. So, how do my smoothing factors (I used 0.03 and 0.22) translate into years? Are they proportional to the length of the data series used, or something?

    Thanks as always to Tamino for this fascinating blog.

    [Response: One way to parametrize an exponential smooth is to choose a constant $\lambda$ (between 0 and 1) and define the smoothed values by

    $S_n = (1-\lambda) S_{n-1} + \lambda x_n$.

    Another way is to choose a time constant $\tau = -1/\ln(1-\lambda)$, so the smoothed values are

    $S_n = e^{-1/\tau} S_{n-1} + (1 - e^{-1/\tau}) x_n$.

    Just a different parameter definition.]
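    In code, a minimal sketch of that conversion (plain NumPy; the function names are just for illustration, and $\tau$ is in units of the data’s time step, so “years” only if the series is annual):

    ```python
    import numpy as np

    # Convert between the smoothing factor lambda and the time constant tau,
    # using the two parametrizations above. tau is in time steps of the data.
    def lam_to_tau(lam):
        return -1.0 / np.log(1.0 - lam)

    def tau_to_lam(tau):
        return 1.0 - np.exp(-1.0 / tau)

    for lam in (0.03, 0.22):   # the smoothing factors quoted in the question
        print(f"lambda = {lam}: tau = {lam_to_tau(lam):.1f} time steps")
    ```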

  8. Mats Almgren

    Sorry for some more off-topic questions, on the ‘Once is not enough’ post.
    I have returned to this post several times, and remain equally impressed. Still I would like some clarification.
    1. In the formulation of your 2-box model you allow for different forcing functions in the two boxes, but I take it that in the analysis you use the same function for both. Correct?
    2. In your first application of the model, back in 2009, I understand that you prescribed the two time constants, whereas you now state that 2 and 26 years gave the best fit. How well-defined are those values?
    3. In the 2-box model Isaac Held posted, the second box was in the end only used as a heat sink (a 1.5-box model, in a way). Would your results change if you added a similar deep-ocean heat sink to your model? In particular, would there be further heating in the pipeline?
    4. Recently a paper “Time-varying climate sensitivity from regional feedbacks” by Kyle C. Armour et al. was published. Their approach with different regions seems very similar to a multi-box model. In such a model, would not ENSO effects be contained in the heath exchange between different regions? So that, instead of using the SOI-model, a sufficient number of heath-exchanging boxes would do? I do not suggest that you should do it in this way, there are certainly good reasons against it. It is just my way to try to understand what is going on.