Dear Mike and list mates,
I have an unusual situation: I would like to work with decibel-scaled single-trial power, but I am running into a problem caused by very small values in my single-trial time-frequency data. For example, the mean value in the frequencies x samples map for any given trial is on the order of 100,000 mV^2, but each trial also contains some values below 1 mV^2. When I average over trials first, take the baseline in each channel, and then convert to dB (the usual approach), the resulting TF maps look fine and are scaled as expected. But when I dB-scale the single-trial data and then average over trials, these tiny values come out as -Inf in MATLAB and clearly stand out in the plots against the "actual" effects.
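To make the two orders of operations concrete, here is a minimal sketch of what I mean in Python/NumPy (the shapes, the scale, and the injected zero are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake single-trial power: 50 trials x 20 freqs x 100 samples, mostly
# large values, with one exact zero mimicking the problematic tiny
# values in real single-trial TF maps (all numbers are invented).
power = rng.exponential(scale=1e5, size=(50, 20, 100))
power[0, 0, 0] = 0.0

# Baseline: average over the whole trial at each frequency
baseline = power.mean(axis=-1, keepdims=True)

# Usual approach: average over trials first, then convert to dB
avg_first_db = 10 * np.log10(power.mean(axis=0) / baseline.mean(axis=0))

# dB-scaling each trial first: the zero maps to -Inf, and that -Inf
# survives the average over trials
with np.errstate(divide="ignore"):
    db_single_trial = 10 * np.log10(power / baseline)
db_then_avg = db_single_trial.mean(axis=0)

print(np.isfinite(avg_first_db).all())  # True: no -Inf anywhere
print(np.isinf(db_then_avg).any())      # True: -Inf at the zero's location
```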
The reason I am even attempting this is that I would ultimately like to run permutation testing at the single-subject level to identify post-stimulus time-frequency clusters that reliably differ from baseline. For that purpose, my understanding is that I would need to dB-scale the data first if I want to treat all frequencies equally (e.g., divide both the pre- and post-stimulus intervals by the average over the whole trial at each frequency, then take 10*log10 of the result), and also randomly permute the pre-/post-stimulus intervals at the single-trial level.
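Roughly, the permutation step I have in mind looks like this (a Python/NumPy sketch with hypothetical names and toy data; pixel-wise p-values only, with the cluster-level step omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

def perm_test_tf(pre_db, post_db, n_perm=500):
    """Single-trial permutation test of post- vs. pre-stimulus dB power.

    pre_db, post_db: (trials, freqs, samples) arrays, already dB-scaled
    against the whole-trial average at each frequency. Returns the
    observed mean difference map and two-sided permutation p-values
    (pixel-wise only; cluster correction is not shown here).
    """
    obs = post_db.mean(axis=0) - pre_db.mean(axis=0)
    exceed = np.zeros(obs.shape)
    for _ in range(n_perm):
        # Randomly swap the pre/post labels within each trial
        flip = rng.random(pre_db.shape[0]) < 0.5
        shuf_post = np.where(flip[:, None, None], pre_db, post_db)
        shuf_pre = np.where(flip[:, None, None], post_db, pre_db)
        null = shuf_post.mean(axis=0) - shuf_pre.mean(axis=0)
        exceed += np.abs(null) >= np.abs(obs)
    return obs, (exceed + 1) / (n_perm + 1)

# Toy dB-scaled data: 30 trials x 5 freqs x 40 samples
pre_db = rng.normal(0.0, 1.0, size=(30, 5, 40))
post_db = rng.normal(0.5, 1.0, size=(30, 5, 40))
obs, pvals = perm_test_tf(pre_db, post_db)
```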
My questions are:
- Do these small values likely reflect an error in how I am running the wavelet transform, or are they to be expected in single-trial TF data?
- If they are to be expected, is there another strategy I could use to avoid or eliminate them while still working with single trials?
- Or, more generally: is there a better way to achieve my overall goal?
Thanks in advance for any help!