Single-trial power, dB-scaling, and permutation approach

Dear Mike and list mates,

I have an unusual situation where I would like to work with decibel-scaled single-trial power, but I am coming up against a problem related to very small values in my single-trial time-frequency data. For example, the mean value in my frequencies x samples map for any given trial is on the order of 100,000 mV^2, but each trial also contains some values that are less than 1 mV^2. When I average over trials first, divide by the baseline in each channel, and then convert to dB (the usual approach), the resulting TF maps look fine and are scaled as expected. But when I dB-scale the single-trial data and then average over trials, these tiny values show up as -Inf in MATLAB and clearly stand out in the plots against the “actual” effects.
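
For concreteness, here is roughly what the two pipelines look like (just a sketch with placeholder names; `tf` is my trials x frequencies x samples power matrix and `baseidx` indexes the baseline window):

```matlab
% tf: trials x frequencies x samples single-trial power (placeholder name)
% baseidx: sample indices of the baseline window

% usual approach: average over trials first, then dB -- looks fine
meanpow = squeeze(mean(tf,1));              % frequencies x samples
basepow = mean(meanpow(:,baseidx),2);       % frequencies x 1
db_avg  = 10*log10( meanpow ./ basepow );   % implicit expansion (R2016b+; bsxfun for older)

% single-trial approach: dB each trial, then average -- produces -Inf's
db_st = zeros(size(tf));
for triali = 1:size(tf,1)
    trialpow  = squeeze(tf(triali,:,:));    % frequencies x samples
    trialbase = mean(trialpow(:,baseidx),2);
    db_st(triali,:,:) = 10*log10( trialpow ./ trialbase ); % -Inf wherever trialpow is 0
end
db_st_avg = squeeze(mean(db_st,1));         % the -Inf's propagate into the average
```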

The reason I am even attempting this is that I would ultimately like to run permutation testing at the single-subject level to identify post-stimulus time-frequency clusters that reliably differ from baseline. For that purpose, my understanding is that I would need to first dB-scale the data if I want to treat all frequencies equally (e.g., divide both the pre- and post-stimulus intervals by the average over the whole trial in each frequency, and then take 10*log10 of that ratio), and also randomly permute the pre/post-stimulus intervals at the single-trial level.
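
In pseudocode, the plan is something like the following (again a rough sketch with placeholder names; `preidx`/`postidx` index the pre/post-stimulus windows, and I collapse over time here just to keep the example short):

```matlab
% tf: trials x frequencies x samples single-trial power (placeholder)
% preidx, postidx: sample indices of the pre/post-stimulus windows

% normalize each trial by its own whole-trial mean per frequency, then dB
trialmean = mean(tf,3);              % trials x frequencies
tfdb = 10*log10( tf ./ trialmean );  % note: tf values of 0 give -Inf here -- the problem above

% observed effect: post minus pre, per trial and frequency
d   = mean(tfdb(:,:,postidx),3) - mean(tfdb(:,:,preidx),3); % trials x frequencies
obs = mean(d,1);

% permutation: randomly swap the pre/post labels within each trial
nperm    = 1000;
permdist = zeros(nperm, size(tf,2));
for permi = 1:nperm
    swapsign = 2*(rand(size(tf,1),1) > .5) - 1;   % +1/-1 per trial
    permdist(permi,:) = mean( swapsign .* d, 1 );
end
```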

My questions are:

  1. Do these small values likely reflect an error in how I am running the wavelet transform, or are they to be expected in single-trial TF data?
  2. If they are to be expected, is there another strategy I could use to avoid/eliminate them while still working with single-trials?
  3. (Possibly) is there a better way to achieve my overall goal?

Thanks in advance for any help!
Matt

Hi Matt.

  1. You’ve discovered the difficulty of single-trial dB :wink: The issue is with the division, which is an unstable operation when the denominator (the baseline activity) is really small.
    -Inf’s sound like an error; in MATLAB those come from taking the log of zero (the log of a negative number would give a complex result instead). Are you doing subtractive baselining inside the log?

  2. You could try taking the log without baselining. That will bring your power data closer to a normal distribution.

  3. But do you even need to baseline? If you are running within-subject permutation testing, the null-hypothesis distribution will have the same 1/f shape as the data, which means the parameters you extract from the permutation testing will have the 1/f removed (see the sketch below). Same goes for any background task-unrelated activity.
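
To sketch that idea (placeholder names; raw power, no baseline normalization):

```matlab
% pow: trials x frequencies x samples raw single-trial power (placeholder)
% preidx, postidx: sample indices of the pre/post-stimulus windows

d   = mean(pow(:,:,postidx),3) - mean(pow(:,:,preidx),3);  % trials x frequencies
obs = mean(d,1);                                           % observed post-pre difference

nperm    = 1000;
permdist = zeros(nperm, size(pow,2));
for permi = 1:nperm
    swapsign = 2*(rand(size(pow,1),1) > .5) - 1;   % randomly swap pre/post per trial
    permdist(permi,:) = mean( swapsign .* d, 1 );
end

% z-score the observed effect against its own null distribution;
% the 1/f shape is present in both, so it drops out of the z values
zmap = (obs - mean(permdist,1)) ./ std(permdist,[],1);
```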

Hope that helps!
Mike

Thanks Mike,

These tips are really helpful! Just to follow up: after digging a bit, it looks like the -Inf’s come about when I take the log of 0 values (I am doing the regular division-based baseline and then taking the log), and unfortunately, when I just take the log of the single trials without baselining, these very small values still pop out.

But you’re right: strictly speaking I don’t need to baseline; I’m just trying to deal with the 1/f. It seems like I need to think a bit more about the permutation piece, so I’ll implement that and see what it produces.

Thanks!
Matt

A typical solution for log(0) is to add a small number inside the log, like log(data+eps) or log(data+1e-10). That will prevent -Inf’s, and instead produce large negative but finite values (roughly -36 for eps, -23 for 1e-10).
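
For example:

```matlab
x = [ 0 1e-12 1e5 ];
log(x)        % ans ~ [ -Inf    -27.63   11.51 ]
log(x + eps)  % ans ~ [ -36.04  -27.63   11.51 ], no more -Inf
```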

But if these are power values, I don’t think you should get a value of zero. That might indicate that there’s a bug in the code somewhere.
