Once More

Let’s take a signal which everybody agrees is broadband. It’s called the sinc function, and I’ll choose the form

x(t) = \frac{\sin(\pi t)}{\pi t}.

Its Fourier transform is a rectangle function, equal to 1 for frequency between -0.5 and +0.5, equal to zero otherwise.
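
For reference, writing the Fourier transform in terms of ordinary frequency f (one common convention; the choice of convention is mine), the pair is

X(f) = \int_{-\infty}^{\infty} \frac{\sin(\pi t)}{\pi t}\, e^{-2\pi i f t}\, dt = \begin{cases} 1, & |f| < \tfrac{1}{2}, \\ 0, & |f| > \tfrac{1}{2}, \end{cases}

so the signal occupies the entire band from -0.5 to +0.5.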


Let’s sample this function at equal time intervals (even sampling), starting at t=50, with a sampling rate of only 0.33 (one sample every 3 time units), which is well below the rate traditional theory says we need to reconstruct the signal. Then let’s apply up-to-date Fourier methods such as are used, say, in astronomy. We can easily model the observed data at the times of observation. Here’s the data as black dots and the model function as red circles:

[Figure even1: the evenly sampled data (black dots) and the model values at the observation times (red circles).]

Clearly the model got it right for the observation times. But that’s not much help — we already know what the value is when we observed it!
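
If you want to play along at home, here’s a minimal sketch of the setup in Python. It is not the code behind these figures; in particular the number of samples (and hence the total time span) is an arbitrary choice for illustration.

    import numpy as np

    def signal(t):
        # the true, broadband signal: sin(pi*t) / (pi*t)
        return np.sinc(t)

    # Even sampling: start at t = 50, one sample every 3 time units (rate 0.33).
    dt = 3.0
    n_obs = 100                            # arbitrary choice, not from the post
    t_even = 50.0 + dt * np.arange(n_obs)
    x_even = signal(t_even)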

So let’s take our model function and use it to estimate the signal at times we did not observe. I’ll plot it over a small enough time span (much less than covered by the data) so we can see the details:

[Figure even2: the true signal (solid black line), the reconstructed signal (thin red line), and the observed data (black dots) over a short time span, even sampling.]

The solid black line shows the true function (which we know because we chose it beforehand). The thin red line shows the reconstructed signal. It ain’t right. Not even close. Of course at the times of observation (shown by black dots) it’s right. But that’s not much of an accomplishment, since we know the values at those times by direct observation.
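
The exact code behind these figures isn’t reproduced here; the basic methodology is outlined in the paper linked in the comments below (its section 6 is particularly relevant). As a crude, illustrative stand-in, the following sketch picks frequencies greedily from a periodogram of the residuals and refits a constant plus sinusoids by least squares. The frequency grid, the number of components, and the frequency cutoff are all arbitrary choices.

    import numpy as np

    def design_matrix(t, freqs):
        # constant plus cos/sin pairs at the chosen frequencies
        cols = [np.ones_like(t)]
        for f in freqs:
            cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
        return np.column_stack(cols)

    def fit_sinusoids(t, x, n_freqs=10, f_max=1.0, n_grid=5000):
        # greedy least-squares fit: add one frequency at a time, refit jointly
        f_grid = np.linspace(1e-3, f_max, n_grid)
        E = np.exp(-2j * np.pi * np.outer(t, f_grid))   # for a quick periodogram
        freqs = []
        resid = x - x.mean()
        for _ in range(n_freqs):
            freqs.append(f_grid[np.argmax(np.abs(resid @ E))])
            A = design_matrix(t, freqs)
            coef, *_ = np.linalg.lstsq(A, x, rcond=None)
            resid = x - A @ coef
        return freqs, coef

    def reconstruct(t_new, freqs, coef):
        # evaluate the fitted model at times we did not observe
        return design_matrix(t_new, freqs) @ coef

With t_even and x_even from the earlier sketch, fit_sinusoids(t_even, x_even) gives the model, and reconstruct(np.linspace(140, 160, 2001), freqs, coef) evaluates it between the observation times. Whether this crude version reproduces the figures here is not guaranteed; it only illustrates the kind of procedure involved.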

OK let’s do the same thing, but using uneven time sampling. Same overall time span, same mean sampling rate (much lower than Nyquist), same methodology. Here’s the data and the model at the observation times:

[Figure uneven1: the unevenly sampled data (black dots) and the model values at the observation times (red circles).]

We got it again, but it’s still no big whoop because we already know the signal value at the observed times. Can we reconstruct the signal at times we did not observe? Here’s the true signal in black, the reconstructed signal in red, and observed data as dots:

[Figure uneven2: the true signal (black), the reconstructed signal (red), and the observed data (dots) for the unevenly sampled case.]

If you’re having trouble telling the difference between the true and reconstructed signals, that’s because the reconstruction is so good.
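
The only thing that changes for the uneven run is how the observation times are drawn. One simple scheme (an illustrative choice, not necessarily how the times for these figures were generated) is to scatter the same number of times uniformly over the same span, which keeps the mean sampling rate at about 0.33:

    import numpy as np

    rng = np.random.default_rng(0)
    # same number of samples and the same overall span as the even run above
    t_uneven = np.sort(rng.uniform(50.0, 50.0 + 3.0 * 99, size=100))
    x_uneven = np.sinc(t_uneven)

Feeding t_uneven and x_uneven through the same fit_sinusoids / reconstruct sketch lets you compare the two cases for yourself.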

This reconstruction — which is dead on — did not require any a priori knowledge that the signal was band-limited. Or what the bandwidth was, or where the band was located. Or sampling the data at a high enough frequency to satisfy customary limits.

If all we knew about this signal was the observed data, then we most certainly could not make blanket statements about its character or its spectrum. We couldn’t say for sure that it’s a sinc function, even though modelling it with a sinc function by nonlinear least squares gives a perfect fit.

Therefore some might object that I haven’t “really” fit a broadband signal. After all, the discrete Fourier transform of the data looks like this:

[Figure dft: the discrete Fourier transform of the observed data.]

Which doesn’t much resemble a rectangle function and doesn’t seem to be very “broadband.” So, the argument might go, it’s still not possible to model a broadband signal sampled less often than traditional ideas require.

But the “signal” — the underlying signal from which we sampled data — is broadband. We know because we chose it ahead of time. And the sampling rate was way too low to reconstruct the signal by traditional theory.

If you want to say that the observed data are not a broadband signal, I’ll say fine. After all, I can reconstruct the data exactly — exactly — with a finite number of discrete Fourier components. I can do so in an infinite number of ways. Each of those would give different values at the times we did not observe, but the same values at times we did observe.
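
To make the “infinite number of ways” concrete: given observations x_n at times t_n (n = 1, \dots, N), pick any K \ge N distinct trial frequencies f_1, \dots, f_K and solve

\sum_{k=1}^{K} c_k\, e^{2\pi i f_k t_n} = x_n, \qquad n = 1, \dots, N.

For K = N this linear system is (for generic choices of times and frequencies) uniquely solvable, so the data are matched exactly; pick a different set of frequencies, or take K > N, and you get a different exact interpolant. All of them agree at the observation times and disagree in between.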

That’s just a reflection of the fact that we don’t know what the signal does when we don’t observe it. And we certainly don’t know what it did before we started looking, or what it will do in the future.

Given any finite set of data, I can express it as a finite Fourier series. If you then want to claim that all the nonzero values of the Fourier transform have bandwidth zero, then I’ll retort that you’re just hand-waving, because it’s not possible to compute the continuous Fourier transform from a finite amount of data.

What we can do is choose from among the infinite number of possibilities that which is most plausible. If, for instance, I gave you data which followed a sine curve perfectly, then you’d be justified in suggesting that the signal was a sinusoid. I could easily construct different signals, with broadband spectra or multiple discrete Fourier components or some combination, which gave exactly the same data. There’s no way to prove which is the actual signal just using the available data. But if we had to bet on what future observed values would be, I’d go with the sinusoid.

And the truth of the matter is, that uneven time sampling really does enable us to discriminate among aliases without a priori knowledge of where the “band” is. And it really does enable us to reconstruct broadband signals when the mean sampling rate doesn’t measure up to traditional requirements. It doesn’t work all the time, in fact it’s easy to construct examples which will confound even the most state-of-the-art methods. But in the real world? It works.

38 responses to “Once More”

  1. This recent series seems to be pedagogical in intent. To refute someone’s tendentious analysis? To prove some positive position? Or just so we understand something of what it is you do?

    [Response: It started with a question. But mainly it’s to push my uneven-time-sampling agenda. I recognize there are situations for which even time sampling is better (it can have much better noise properties). But there are many circumstances for which uneven time sampling enables us to do what even sampling does not.

    But you’re right, it’s time to get back to climate science.]

  2. Horatio Algeranon

    “Sample redundancy” (or lack thereof with the uneven time sampling) seems to be the key issue involved, which Rick White actually talks about in that presentation on “Golden Sampling” (albeit with regard to non-random non-uniform “Golden” sampling)

    • Horatio Algeranon

      should probably have said “lower redundancy with uneven time sampling”, since there will still be some.

  3. It was an interesting and instructive diversion, but indeed back to climate science.

    Consider the spikes of the duration that Tamino used. For everyone who thinks big spikes escaped notice, I offer the following:
    If a 1degC spike escaped notice, why not a 5degC or 10degC?

    I’d suggest that there is a monotonically decreasing curve,
    X axis: height of spike, from 0C to 5C.
    Y axis: probability that one or more spikes of that size escaped Marcott et al. and the other proxies.

    I’d say Pr(.1C) = 1.0.
    I’d claim Pr(5C) = 0.0.

    Rather than hand waving to claim Marcott wrong, I’d like to see that curve with some analysis to back it that fits physics and proxies. Right now, I think Pr(1C) ~0, because as noted elsewhere, I think massive Maxwell demon hordes are needed in both directions, but I have no idea where Pr(X) ~0.5, for example.

    • Horatio Algeranon

      Robert Frost was one smart fellow. He was not a scientist (at least not a practicing one) and had most likely never even heard of Marcott (maybe “Et” or “Al”, but prolly not Marcott) but he got it basically right, when he said in one of his poems

      “We dance round ‘spikes’ and suppose, but the Secret sits in the data (and physics) and knows”

      …or something to that effect

  4. I’ve been using these Fourier techniques for the last 40 years. First in astronomy (pretty much as indicated) and now medicine (think sampling glucose levels with a minimum of discrete measurements using a (costly) chemical strip and meter, and trying to understand quasi-periodic functions for both history and prediction – and someone’s life depends on it). Given what I have read in the climate blogs, Tamino’s excruciatingly detailed step-by-step description IS necessary: most of the readers & commentators I have read either stopped after a course in undergraduate engineering signal theory, thus limiting the scope and breadth of application, or have no understanding at all of signal analysis and rely upon those commentators and bloggers whom they want to believe but have no basis for doing so (one way or another). At first I reacted like Jeff, but then realized that many of the scientists who work for me (and have worked for me), even with PhDs in image processing, did not understand intuitively what Tamino describes, and it was only after solving a few real-life problems upon which a product’s success depended that they came to appreciate the limitations of their education. By the way, everything Tamino describes is “in the book”; it’s just that most (almost all) students do not do those problems at the end of the chapter that give broader practice. Did you do all the problems at the end of every chapter in all your physics and engineering books? How many of your colleagues did? I know of only one person who did every problem in Jackson’s E&M as a graduate student. He was good!!

    • If he did all the end-of-chapter problems in Jackson without recourse to one of the infamous “study guides” he should be a tenured prof now….

      Without a doubt the nastiest textbook I ever encountered….

      • they made us do about half the Jackson problems in homework…and asked us some of the rest in exams

        nice text

        sidd

  5. This brief series has been interesting to me; I’ve never used uneven time sampling. I might in the future. Here’s a handwaving explanation for the paradox of measuring frequencies much higher than the mean sampling rate. Uneven sampling with random intervals contains a wide range of sampling frequencies. Since even sampling has a single sampling frequency, its sampling frequency has an upper limit. Since uneven sampling has higher sampling frequencies, it can measure signals containing higher frequencies.

  6. I guess I am curious enough about this to try it on my own.

    Could you post the code you used to generate the plots you showed?
    If not, could you describe your methodology so I am sure that I am not missing something?

    [Response: The basic methodology is outlined here. Particularly interesting for this case is section 6.]

    • Gak, the print on that link was hard to read. http://adsabs.harvard.edu/full/1995AJ….109.1889F is a less resampled link.

    • I am doing the same. The (Fortran-77) software is at http://www.aavso.org/software-directory which compiles and runs without any trouble using g77 on a MacBook Pro in X11. More detail and general background is also at http://www.aavso.org/time-series-tutorial

      The sinc function is also at the core of the Sampling Theorem, if written as S(f) = T sin(pi*f*T)/(pi*f*T) where f is frequency and T is record length. For an infinitely long record, this sinc function becomes a Dirac-delta function.

      Sampling a single datum is the multiplication of a delta-function and the underlying (continuous) process at time t. For even sampling this set of sampling delta functions is periodic (in the time step), but for random sampling it is not. I still have to work through this in the frequency domain where this time-domain multiplication becomes a convolution. I suspect this convolution with the true (and unknown) transform of the continuous process is at the core of Tamino’s argument and code.
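
      For reference, the standard identity at work here: if the sampled record is x_s(t) = x(t) \sum_n \delta(t - t_n), then its transform is

      X_s(f) = (X * W)(f), \qquad W(f) = \sum_n e^{-2\pi i f t_n},

      i.e. the true spectrum smeared (convolved) with the “spectral window” W. For even sampling t_n = n\Delta, W is itself a periodic comb with spacing 1/\Delta, which is where aliasing comes from; for irregular times it is not.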

  7. Sampling (with apologies to Horatio Algeranon)

    Some signals can hide
    when sampling is even
    though randomized rates
    give results to believe in.

    • Horatio Algeranon

      Why apologize?

      That’s excellent.

      Besides, IHHO, as a general rule, one should only apologize to the folks whose songs one ruins with parodies (so they don’t sue)

  8. Excellent post! I think Jeff is right: One can view the observation operator, i.e. the sampling, as a filter that acts on the original true signal. In practice it is a finite series of Dirac delta functions. Taking the Fourier transform of this filter is straightforward and one can thus nicely demonstrate in Fourier space what the uniform or non-uniform sampling does to the original spectrum (a simple multiplication).

    Thanks for this! I have now to rethink our sampling strategies…

  9. Paul from VA

    So, this is a pretty interesting article series and has me seriously thinking about applications for uneven sampling. So here’s a question that’s bothering me. Say you have a regularly sampled time series over a very long time (say, a day of data sampled at 10Hz). Is there some way you could randomly resample the data to rule out or find harmonics above the Nyquist frequency? Or does that not work, since the sampling itself is forced to be a harmonic of the sampling frequency? Is there some prerequisite on the randomness of the sampling in order for it to be uneven?

    [Response: Unfortunately, after downsampling all the time spacings will still be integer multiples of the basic interval (0.1 sec at 10 Hz). So you’ll still get perfect aliasing at frequencies separated by 10 Hz.]

    • Paul from VA

      Ahhh… okay, I think I follow. Basically, the highest frequency you can sample is derived from the least common denominator of your sampling rates. So if you alternated between sampling at 0.2 (1/5) s and 0.3333 s (1/3), the highest frequency you could detect would be 15 Hz (well, 7.5 with Nyquist sampling). With something truly irregular you get some pretty whopping denominators. Am I understanding correctly?

      [Response: Basically yes. The strategy of “Golden sampling” (which has been mentioned a couple of times here) is actually designed to optimize your results, because with random sampling you can sometimes get “near-aliasing” by random accident.]
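
      A quick numerical illustration of that point (a toy check, not from the post): on an even 0.1 s grid, a 3 Hz sinusoid and a 13 Hz sinusoid produce identical samples, so nothing computed from the sampled values alone can tell them apart.

          import numpy as np

          t = np.arange(0.0, 2.0, 0.1)             # even 0.1 s grid (10 Hz sampling)
          low = np.sin(2 * np.pi * 3.0 * t)        # 3 Hz
          high = np.sin(2 * np.pi * 13.0 * t)      # 13 Hz = 3 Hz + the sampling rate
          print(np.allclose(low, high))            # True: perfectly aliased on this grid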

  10. Quiet Waters

    If you have daily samples accrued over a number of years but not the time to properly quantify every sample is there a way to determine the best way to subsample that data (even or uneven) and how many samples would be needed to get a good idea of the true trend/signal?

  11. Quiet Waters

    data=dataset of course…

  12. I think the proxy-accuracy deniers will still continue to make the claim that the proxies themselves are incapable of detecting a spike. They think the sudden rise in temperature is too rapid for the proxy mechanisms (ALL of them) to keep up and it exceeds some kind of magical limit inherent to the proxies (because Who is John Galt?!?!?!). There is nothing you can do to convince them.

    • After all, they live in a magical world where such a spike could exist without a physical mechanism to produce it … while at the same time a well understood physical mechanism can’t be responsible for the rise in the modern temperature record.

    • Horatio Algeranon

      “The Conspiracy of Nature”
      — by Horatio Algeranon

      Nature’s in on the Conspiracy
      Trying to hide the proof
      Like golden chests of piracy
      Nature buries the truth

      • But “ex-skeptic” marks the spot?

      • Horatio Algeranon

        Thanks Kevin, most excellent. Horatio will be sure to credit you

        Here’s the second verse:

        Ex-skeptic marks the booty
        With ink that is invisible
        Cuz hiding is his duty
        And truth is inadmissible

      • Horatio Algeranon

        Verse 3

        Captain Jones is at the helm
        Keeping things afloat
        Sailing the stormy IPCC’s
        And patching up the boat

        /////
        Most people (even pirate buffs) are probably not aware of this, but Davy Jones’ middle initial was “P”. Bet you can’t guess what that stands for.

  13. Somehow, taking uneven sampling time intervals does something like “choose the most plausible out of the very many possible functions that give the right values at the measured points”. You mentioned that this is what we can do. “Plausibility” seems to me somehow related to the “most elegant, least costly” solution. It would be very interesting to work out how this choice-of-the-most-elegant follows from the unevenness of the intervals.
    A very important thing is, that what you say on the impossibility to find the “true” interpolation function of any data set hold also for finding the “true” future behaviour – only a plausible, or elegant one. This is relevant for the time series of september arctic sea ice area / volume and the question, when it might hit zero. (look e.g. here: https://sites.google.com/site/arctischepinguin/home/piomas)

    • Rewrite of one sentence: (sigh)
      A very important thing is, that what you say on the impossibility to find the “true” interpolation function of any data set holds also for finding the “true” future behaviour – only a plausible, or elegant one is possible to name.

  14. Gavin's Pussycat

    Hmm, the second picture screams for trying out regularization methods. If what I think is happening really is happening, the problem is ill-defined in that there is an infinity (an entire sub-space of function space) of solutions satisfying those regularly-spaced data points.

    What about hybrid-norm estimation? It’s obviously not enough to minimise the square sum of the residuals wrt the data. What about also minimizing the square integral norm of the solution? How much closer would that get you? (This might be the way to go if regularly spaced data is all that’s on offer.)

  15. Jerry Hopkins

    Tamino, you don’t consider the coherence time (the typical time \tau above which the autocorrelation function \int S(t)\, S(t+\tau)\, dt vanishes). Your sampling is good because the time interval is less than a correlation time, so the Fourier transform is “regular” at these frequencies and you don’t really need a good sampling; of course, with a much more irregular signal, if you add spikes between your observation times, you would be unable to reproduce them since they are simply not detected and don’t change the Fourier transform.

  16. An astronomy example that I think illustrates the point that Tamino is trying to make. A few years ago I was asked if I could come up with a short project for a group of visiting high school students. I thought it would be interesting to show them the radial velocity data from one of the exoplanet hunting projects. The radial velocity is essentially the line-of-sight velocity of the star, which varies in a sinusoidal-like fashion as the star and planet move in their orbit. I thought it would be nice for these students to see some real data and to then use it to make a few basic calculations about the mass of the planet and its distance from the star.

    I’d actually never looked at this data before, so at home the evening before the project was going to take place I downloaded the data and started making a few plots. What I immediately discovered was that what appeared on the plots was simply what looked like a noisy signal. No sign of some kind of nice sinusoidal signal – which was what I rather naively expected. Realising that I had to do something quickly and, admittedly, knowing that this was radial velocity data, I wrote a very basic code that – for a given data set – would run through all the possible periods P (i.e., I assumed the period was unknown, even though it was actually known) and replot the data with the time variable modified to be t’ = t – nP where n was an integer given by n = INT(t/P), in other words folding the data on the trial period (a minimal sketch of this appears below). What typically happened was that a clear sinusoidal signal would pop out when P equaled the known period of the orbit. I gave this to the students the next day and they seemed to quite enjoy downloading the data and playing with the code to determine the orbital periods and other properties of the planetary system.

    In fact, if you look at any journal paper showing plots of the radial velocity of a star with a companion planet, you don’t see the radial velocity plotted against time of observation. What you see is radial velocity plotted against phase of the orbit. They’ve essentially done exactly what I describe here.

    Just as Tamino describes, this was unevenly sampled data with a sampling period that was typically much longer than the period of the signal that the observers were trying to detect – but as long as enough data is collected, the actual signal is easily extracted.
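
    Here is a minimal sketch of that folding step, with made-up variable names and synthetic data purely for illustration:

        import numpy as np

        def fold(t, period):
            # t' = t - n*P with n = INT(t/P): the phase of each observation
            return t - period * np.floor(t / period)

        # Toy radial-velocity data: sparse, uneven sampling of a sinusoid of period 3.7
        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0.0, 500.0, size=60))
        v = 30.0 * np.sin(2 * np.pi * t / 3.7) + rng.normal(0.0, 2.0, size=t.size)

        # Plotting v against fold(t, P) for trial periods P shows a clean sinusoid
        # only when P is close to the true period (3.7 here).
        phase = fold(t, 3.7)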

  17. Poul-Henning Kamp

    I too love uneven sampling, but I break out in rashes when people use the word “random” as casually as they do above.

    Please notice that if your uneven sampling happens to have a fundamental alignment in time, for instance if sampling always happens at the top of the hour, rather than at any time during the hour, you introduce a “comb-filter” on the sensitivity.

  18. Astronomers here might remember the debate concerning the time delay in the first gravitational-lens system discovered, 0957+561. One camp favoured a long delay, the other a short delay. (This is an important question since the delay is inversely proportional to the Hubble constant.) Bill Press got it wrong. Jaan Pelt (an astronomer from Estonia with no background in cosmology nor in gravitational lensing but with vast experience in irregularly sampled data analysis (he even wrote a program package—ISDA—to do this)) and collaborators in Hamburg got it right. Such situations occur often in astronomy; observations are irregular because one doesn’t get all the observing time one needs, the Moon is too bright, it is too cloudy etc.

  19. The link to the better quality copy doesn’t work directly but with enough fiddling you can get to it. Not sure how to make it easier. The first link posted above seems to be to the output of “print” which is very rough high contrast. The second one, I dunno how to make it clickable. I got to it finally via Scholar which gave a link to another article in the same journal, then I changed the page number and it opened up a clean readable copy.

    http://adsabs.harvard.edu/full/1995AJ….109.1889F

    http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?bibcode=1995AJ….109.1889F&db_key=AST&page_ind=0&data_type=GIF&type=SCREEN_VIEW&classic=YES

  20. The way I view this is through a lens of information theory. Any series contains a certain amount of information that can be represented as a bit stream. If the series is truly random, the information cannot be compressed–that is, the series can only be represented by a series with an equivalent number of bits (or more) without loss of information. However, if the series can be represented by a function, it can be summarized with a finite number of bits less than the bitstream of the series. For instance, a sine wave can be summarized by its frequency and phase.

    The measurements also represent a given amount of information, and a truly random sampling regime will maximize the amount of information in the measurements. Again, that information may be summarized in more compact form, but only if there is regularity in the underlying signal being measured.

    Basically, this is sort of using the definition of randomness that Kolmogorov came up with and stuck with longer than any others (note, he had many definitions of randomness–it was always the most difficult concept to define for the frequentists).

    I don’t know if this makes things clearer for others, but it helps for my own addled mind.

  21. I like it. Please, do keep attacking the outright liars (Watts, et al), but these series are useful and interesting. I’ve learned things.

  22. Robert Grumbine asks what is normal: http://moregrumbinescience.blogspot.fi/2013/04/when-was-climate-normal.html#more
    This would likely be easily checked by comparing a synthetic white noise set with the temperature record and seeing which period differs the least from it. I also strongly suspect Mr. Grumbine has done so but leaves it as an exercise to readers. Just pure statistics, not having anything to do with aerosols and GHG.

  23. I’d like to point out that a short slice of sinc(t) that lies in a safe distance from t=0 is not broadband. In fact, it is quite narrowband with a distinct spike in its spectrum corresponding to its “carrier frequency”. Add some noise and you’ll get graph #5 in the article.

    [Response: Then why don’t I just point out that no finite time series is broadband, so when it comes to observed data there’s no such thing as a broadband signal? I can express every one — exactly — with a line spectrum.]

    • Compute DFT of sinc(t) for (1) -150<=t<=150 and (2) 50<=t<=350. Compare results. The former result is a slightly distorted but recognizable discrete approximation of the continuous rectangular spectrum ("0|||…1……2.."). But the latter is completely different–a narrow peak at f=1/2 tapering off on both sides ("0.,,|,,.1….. 2 ").
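
      For anyone who wants to check this numerically, here is a minimal sketch; the 0.1 time step is an arbitrary choice, fine enough that f = 0.5 sits far below the grid’s Nyquist frequency.

          import numpy as np

          def spectrum(t0, t1, dt=0.1):
              # magnitude spectrum of sinc(t) restricted to the window [t0, t1)
              t = np.arange(t0, t1, dt)
              x = np.sinc(t)                       # sin(pi*t)/(pi*t)
              f = np.fft.rfftfreq(t.size, dt)
              return f, np.abs(np.fft.rfft(x)) * dt

          f1, X1 = spectrum(-150.0, 150.0)   # window centred on t = 0: roughly rectangular
          f2, X2 = spectrum(50.0, 350.0)     # window far from t = 0: narrow peak near f = 0.5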