Minimum number of cycles for wavelet analysis

Hi Mike,

I am interested in running a wavelet analysis, with my focus on the temporal dynamics of the ITC measure, without much regard for any specific frequency band. That is, it is more important for me to know when ITC is high/low than at what frequency.
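(For concreteness, by ITC I mean inter-trial phase clustering: the magnitude of the trial-averaged unit phase vectors at each time-frequency point. A minimal numpy sketch, with a hypothetical array of per-trial phase angles:)

```python
import numpy as np

# hypothetical phase angles (radians) at one time-frequency point,
# one value per trial
phases = np.random.uniform(-np.pi, np.pi, size=100)

# ITC is the length of the average unit phase vector across trials:
# 1 = perfectly aligned phases, ~0 = uniformly scattered phases
itc = np.abs(np.mean(np.exp(1j * phases)))
```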

It is my understanding that

$\sigma_t = \frac{C}{2\pi f},$

such that a lower number of cycles (C) will yield higher temporal precision.
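For example, plugging a few cycle counts into that formula (10 Hz is just a placeholder frequency):

```python
import numpy as np

def sigma_t(C, f):
    """Temporal standard deviation (s) of a Morlet wavelet:
    sigma_t = C / (2*pi*f)."""
    return C / (2 * np.pi * f)

for C in (1, 3, 7):
    print(f"C={C}: sigma_t at 10 Hz = {1000 * sigma_t(C, 10):.0f} ms")
# C=1 -> ~16 ms, C=3 -> ~48 ms, C=7 -> ~111 ms
```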
My question is… How low can I go?

I’ve read (possibly in your book?) that we shouldn’t go below 3 cycles. Is that always the case?
How about below 1 cycle?

Other than frequency resolution, what other price am I paying for a low number of cycles?

Grateful for any insights you might have.

Sure, you can set it to 0.0000001. But would that give you sensible results that you can interpret? Not really.

Keep in mind that phases are only interpretable if they come from a relatively narrow frequency range (see discussion in filter-Hilbert section). So if the frequency range is too broad, the phase angles are not interpretable, because the phase value at each time point is potentially coming from a different frequency.
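To put rough numbers on that: for a Morlet wavelet, the spectral and temporal widths are reciprocal, $\sigma_f = \frac{1}{2\pi\sigma_t} = \frac{f}{C}$, so halving the number of cycles doubles the width of the frequency band that each phase estimate pools over. A quick sketch:

```python
# spectral standard deviation of a Morlet wavelet: sigma_f = f / C
f = 10  # Hz, example frequency
for C in (0.5, 1, 3, 7):
    print(f"C={C}: sigma_f = {f / C:.1f} Hz")
# C=0.5 -> 20 Hz:  phases pooled over a huge band around 10 Hz
# C=3   -> ~3.3 Hz: reasonably frequency-specific
```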

My recommendation of 3 cycles is just a general rubric; it’s certainly not a rule or mathematical constraint.

Got it - so I need the frequency specificity to be at least reasonable.


The background for all of this is that I’ve found a rise in ITC in two experimental conditions in some time window (see illustration below), but in one of the conditions the rise is smaller.
However, I suspect that this smaller rise is an artifact of the drop in ITC later on, which affects the time window of interest due to the limited temporal precision of the wavelet.

Does this sound reasonable?
If so, other than a verbal explanation, do you have any thoughts on how to validate my intuition?

[image: illustration of the ITC rise in the two conditions]

Thanks!

I don’t see how that difference would be an artifact. Spurious condition differences can happen for a few reasons: different parameters applied to different conditions (all conditions should have the same analysis parameters), large differences in trial counts (trial counts should ideally be roughly matched), or artifacts in the data (which should be removed during cleaning).

If those aren’t the case, then I think the conclusion is that the difference is real: One condition has stronger and longer-lasting ITPC than the other.

Both conditions have the same number of trials and use the same stimuli, and were analyzed with the same parameters. The only difference between them is what happens after the time window of interest: a secondary stimulus is or isn’t presented. So there shouldn’t be any difference between them within the time window itself.

I hope to illustrate my point further with some simulated data (reflecting what I think is happening):

Say this was the raw underlying data in both conditions:

(In reality I am looking at lower frequencies than the ones shown here.)

We have a time period of low ITPC (-0.2 to +0.15 s), then a period of high ITPC (+0.15 to +0.4 s; this is the time window of interest), and then either a reduction in ITPC or not.

Assuming the temporal resolution of my wavelet was low - say $\sigma_t = 0.6$ s (which is wider than the whole window of interest) - wouldn’t the low ITPC in the third time window affect the measured ITPC in the second time window, resulting in something similar to what was seen in my previous illustration?

This is what happens for power, no? (Which is why we take a baseline that doesn’t end at time 0 - to avoid any “bleeding” of post-stimulus activity into the baseline period.) Does this not also apply to ITPC?
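In case it helps, here is roughly how I would simulate the scenario above (a sketch only; the 10 Hz carrier, trial count, and exact timings are placeholders):

```python
import numpy as np
from scipy.signal import fftconvolve

srate = 1000                              # Hz, hypothetical sampling rate
t = np.arange(-1.0, 2.0, 1/srate)         # trial time axis (s)
f = 10                                    # Hz (placeholder; my real data is lower)
ntrials = 100
rng = np.random.default_rng(1)

# cond 0: phase-locked only from +0.15 to +0.4 s (ITPC drops afterwards)
# cond 1: phase-locked from +0.15 s onward (ITPC stays high)
data = np.zeros((2, ntrials, t.size))
for cond in range(2):
    locked = (t >= 0.15) & (t < 0.4) if cond == 0 else (t >= 0.15)
    for tr in range(ntrials):
        sig = np.cos(2*np.pi*f*t + rng.uniform(0, 2*np.pi))  # random phase
        sig[locked] = np.cos(2*np.pi*f*t[locked])   # same phase every trial
        data[cond, tr] = sig

def itpc(trials, C):
    """ITPC time course from Morlet wavelet convolution with C cycles."""
    sigma = C / (2*np.pi*f)                         # sigma_t = C / (2*pi*f)
    wt = np.arange(-1.5, 1.5, 1/srate)
    wavelet = np.exp(2j*np.pi*f*wt - wt**2 / (2*sigma**2))
    phases = np.angle([fftconvolve(x, wavelet, mode='same') for x in trials])
    return np.abs(np.mean(np.exp(1j*phases), axis=0))

toi = (t >= 0.15) & (t < 0.4)                       # window of interest
for C in (3, 38):                                   # C=38 at 10 Hz -> sigma_t ~ 0.6 s
    print(f"C={C:2d}: mean ITPC in window:",
          f"cond0={itpc(data[0], C)[toi].mean():.2f},",
          f"cond1={itpc(data[1], C)[toi].mean():.2f}")
```

If the smearing story is right, the two conditions should give nearly identical ITPC in the window at C=3, but diverge at C=38.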

Thanks Mike!

Ah yes, I see your point. There is temporal smearing imposed by the filtering (wavelet convolution), and narrower wavelets will help alleviate that situation.

If you have multichannel data, you could also consider designing a spatial filter (e.g., here) to maximize activity in that frequency band. That would allow you to use a gentler temporal filter.
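For instance, one way to build such a filter is a generalized eigendecomposition (GED) contrasting narrowband against broadband covariance. A minimal sketch, assuming `narrowband` is your channels-by-time data band-pass filtered around the target frequency, `broadband` is the unfiltered data, and the shrinkage value is a placeholder:

```python
import numpy as np
from scipy.linalg import eigh

def ged_filter(narrowband, broadband, shrink=0.01):
    """Spatial filter maximizing the narrowband-to-broadband power ratio.
    Both inputs are channels x time arrays."""
    S = np.cov(narrowband)                    # narrowband covariance
    R = np.cov(broadband)                     # broadband (reference) covariance
    R = R + shrink * np.mean(np.diag(R)) * np.eye(R.shape[0])  # regularize
    evals, evecs = eigh(S, R)                 # generalized eigendecomposition
    return evecs[:, -1]                       # eigenvector with largest eigenvalue

# component time series (one "virtual channel"):
# y = ged_filter(narrowband, broadband) @ broadband
```

The resulting component time series can then be analyzed with a narrower (fewer-cycle) wavelet, since the spatial filter has already done some of the frequency isolation.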