Recovering original units of signal following multitaper

Hey Mike,

I was watching one of your videos on recovering the true units of the signal after computing the FFT and wavelet decomposition, and I was wondering how to go about this when using multitaper.

With respect to the FFT, shouldn’t you divide by the sampling frequency instead of the number of samples for Parseval’s theorem to hold? (The accepted answer in this thread convinced me: https://math.stackexchange.com/questions/636847/understanding-fourier-transform-example-in-matlab)

Finally, I noticed that you multiply the amplitude of the wavelet decomposition by 2, but I don’t remember reading about this in your book or seeing it in your FWHM paper. I’ve been using the FWHM method lately, and I was wondering if I should add this doubling (excluding DC and Nyquist) to your code?

Thanks!

–G

Hi G. As I’ve mentioned, written, and said many times before, normalization is a tricky business, and I generally recommend not worrying about it unless necessary. That said, of course I’ll answer your question :wink:

The “correct” normalization depends on your goal. For Parseval’s theorem, you divide by nfft. That post on math.stackexchange that you linked is actually doing the same thing. If you write out the math from the code, the answer suggests scaling the time-domain signal by 1/fs and the frequency-domain signal by 1/(fs*nfft). You can pull the 1/fs out of both terms, which leaves only the frequency domain divided by nfft. That solution is also correct because it follows from the calculus (dt, df), but the algebra and the discretization on computers allow us to ignore the common factor.

This normalization, however, is “incorrect” for recovering the energy at a given frequency in the signal, because zero-padding spreads energy across the spectrum. So to recover the units of the signal at a particular frequency, the normalization is to divide by the length of the signal, which is not the same thing as dividing by nfft if you are zero-padding.

So the conclusion here is that normalization is tricky :wink: I’d be hesitant to say that one is right or wrong; different normalizations serve different visual/statistical/mathematical/practical goals and have different implications.

The scaling factor of 2 for amplitudes is there to combine the positive- and negative-frequency halves of the spectrum. For a real-valued signal, the two halves mirror each other, so you can double the positive frequencies and ignore the negative frequencies. A complex-valued Morlet wavelet is single-sided (essentially zero energy at the negative frequencies), so that one actually probably shouldn’t be doubled (see the second sketch below). If I did it in a video, it might have been a mistake, unless there was some specific reason why I did it.

Finally, some starter code to help you explore these concepts:

% parameters
signaldur = 2.4; % seconds
signalAmp = .4;
nfft = 29383; % fft zero-padding
srate = 1000;

% create the signal
time = (0:signaldur*srate-1) / srate;
signal = signalAmp*sin(2*pi*10*time);

% Parseval's theorem
energyTime  = sum(signal.^2);
energyFreq1 = sum(abs(fft(signal,nfft)).^2)/nfft;  % divide by nfft: matches the time-domain energy
energyFreq2 = sum(abs(fft(signal,nfft)).^2)/srate; % divide by srate: does not match

[energyTime energyFreq1 energyFreq2]

% amplitude spectrum (peak should match signalAmp)
plot(linspace(0,srate,nfft),2*abs(fft(signal,nfft))/length(signal))
set(gca,'xlim',[8 12])
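
And since you asked about the wavelet doubling: here is a second sketch, continuing from the code above, showing that a complex Morlet wavelet’s spectrum is essentially single-sided, so there are no negative-frequency coefficients to fold in. The wavelet frequency and width here are arbitrary, just for illustration.

% complex Morlet wavelet at 10 Hz (FWHM parameterization; values are arbitrary)
wavetime = -2:1/srate:2;
fwhm     = .3; % seconds
cmw      = exp(1i*2*pi*10*wavetime) .* exp(-4*log(2)*wavetime.^2/fwhm^2);

% amplitude spectrum: energy sits only on the positive-frequency side
cmwX = abs(fft(cmw,nfft));
plot(linspace(0,srate,nfft),cmwX)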

Thanks Mike,

I’m not sure I follow your comment on the post that I shared. To approximate the integrals by sums, it suggests using 1/fs (dt) for the time-domain signal and fs/nfft (df) for the frequency-domain signal scaled by 1/fs.

I agree that it’s usually a moot point, since people typically care about power relative to a baseline. But I’ve been playing around with methods that model background EEG activity as colored noise to detect bouts of oscillatory activity, and I had this (probably foolish) idea of fitting the 1/f curve to power estimates computed via multitaper while detecting the oscillatory bursts in a spectrogram derived via wavelet convolution. Hence my original question about whether there’s a trick for scaling the multitaper estimates so I can compare apples to apples.

Best,

–Gabriel

Sorry, I left out a few intermediate algebra steps in my previous reply. Put the time domain on the left-hand side of the equation (lhs) and the frequency domain on the right-hand side (rhs). The lhs is multiplied by dt = 1/fs. The rhs is multiplied by (1/fs)^2 * (fs/nfft), which simplifies to 1/(nfft*fs) = (1/nfft)*(1/fs). Multiply both sides of the equation by fs to cancel the common factor of 1/fs, and you’re left with no scaling factor in the time domain and 1/nfft in the frequency domain.
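
If it helps to see that in numbers rather than symbols, here is a tiny check (the toy signal and values are arbitrary):

% numerical check of the algebra above (arbitrary toy signal)
fs   = 1000;
nfft = 4096;
x    = randn(1,1000);
X    = fft(x,nfft);
dt   = 1/fs;
df   = fs/nfft;

lhs = sum(x.^2) * dt;            % time-domain sum scaled by dt
rhs = sum(abs(X*dt).^2) * df;    % frequency-domain sum scaled by df, with X scaled by dt

[lhs rhs]                        % these two match
[sum(x.^2) sum(abs(X).^2)/nfft]  % and so do these, after multiplying both sides by fs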

I’m sure it’s possible to recover the true signal units with multitaper. You would need to figure out how much signal is lost in each time window from each taper. Curiously enough, someone recently asked a similar question in one of my Udemy courses and then mentioned that a solution has been worked out in a paper, with corresponding code, here: http://prerau.bwh.harvard.edu/multitaper/. I’m happy to pass along that link and I hope you find it useful, but please note that I haven’t gone through the paper or the code myself, so I cannot comment on it.

Mike

I was asking a similar question on Udemy, and one day later I see this one here :smiley:

I have not yet read the code in detail, but they scale the tapers by sqrt(Fs) and divide the Fourier coefficients by Fs.
Maybe this will help you, but I’m not sure why they did it this way or whether it’s right.
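
If I read it right, it’s roughly something like this (I’m using MATLAB’s dpss and guessing at the details, so don’t take this as their actual implementation):

% rough sketch of that scaling (my guess, not their actual code)
Fs     = 1000;
x      = randn(2000,1);                       % one data window, as a column
ntaper = 5;
tapers = dpss(length(x),3,ntaper) * sqrt(Fs); % tapers scaled by sqrt(Fs)

Sk = zeros(length(x),ntaper);
for k = 1:ntaper
    Xk      = fft(x .* tapers(:,k)) / Fs;     % Fourier coefficients divided by Fs
    Sk(:,k) = abs(Xk).^2;
end
mtspec = mean(Sk,2);                          % average over tapers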

Niko

Thank you both! I actually worked with the first and last authors of the paper for a couple of years, so I could ask about the intuition behind it in case it’s not clear from the manuscript. It’s a small world!