Source reconstruction

Dear Mike,
I am a researcher at the Indian Institute of Technology Kanpur, India. I have been benefiting from your books for the past several years and have received your valuable suggestions on issues I faced in our earlier Google forum.

I have been using the LORETA software for my PhD work for the last six months, and I have been facing an issue. I tried resolving it but could not do so. It would be of great help if you could help me resolve it.

While applying non-parametric permutation t-statistics to compare two conditions, I found a significant difference between them, which was in accordance with my hypothesis. However, when I wrote my own MATLAB code, guided by your book, to replicate the statistics done by the LORETA software, I got different results (LORETA performed better). My impression is that the t-test statistic used by the LORETA software is somewhat more refined than the one I implemented in MATLAB. I even got a different result when I ran the parametric t-statistic (without any randomized permutations) to compare the two conditions. Your comments on this issue would be highly appreciated. Waiting eagerly for your early response.

The paper that the LORETA software references for non-parametric permutation-based statistics is:

Nichols, T. E., & Holmes, A. P. (2002). Nonparametric permutation tests for functional neuroimaging: a primer with examples. Human Brain Mapping, 15(1), 1-25.

where the section on statistics discusses a random effects model incorporating subject-to-subject variability in activation.

It reads as follows:
Statistic

Note that the permutations arrived at above permute across subjects, such that subject-to-subject differences in activation (expressed through the as yet unspecified statistic) will be represented in the permutation distribution. Because subject-to-subject differences in activation will be present in the permutation distribution, we must consider a voxel statistic that accounts for such inter-subject variability, as well as the usual intra-subject (residual) error variance. Thus we must use a random effects model incorporating a random subject by condition interaction term (many published analyses of multi-subject and group comparison experiments have not accounted for variability in activation from subject-to-subject, and used fixed effects analyses).

Voxel-level statistic

Fortunately, a random effects analysis can be easily effected by collapsing the data within subject and computing the statistic across subjects (Worsley et al., 1991; Holmes and Friston, 1999). In this case the result is a repeated measures t-statistic after proportional scaling global flow normalization: Each scan is proportionally scaled to a common global mean of 50; each subject's data is collapsed into two average images, one for each condition; a paired t-statistic is computed across the subjects' "rest"-"active" pairs of average images.
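
For reference, my current understanding of that collapsing step in MATLAB is roughly the sketch below (rest and active here are hypothetical subjects-by-voxels-by-scans arrays, already proportionally scaled to a common global mean); please correct me if this is not what the paper means:

```matlab
% Sketch of the "collapse within subject, then paired t across subjects" idea.
% rest and active are hypothetical subjects x voxels x scans arrays.
restAvg   = mean(rest,   3);                 % one average image per subject, "rest" condition
activeAvg = mean(active, 3);                 % one average image per subject, "active" condition
[~, p, ~, stats] = ttest(restAvg, activeAvg);% paired t-statistic across subjects at every voxel
tVals = stats.tstat;                         % 1 x voxels repeated-measures t-values
```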

Question 1) Is this incorporation of a random effects analysis what is responsible for the different results (between LORETA and my MATLAB code)?

Question 2) How can I include this random effects correction in my MATLAB code? How is it done?

Your comments on the above questions would be of great help. Waiting eagerly for your response.

Yours sincerely,

Ashish Gupta
Research Scholar,
Indian Institute of Technology

Hi Ashish. It sounds like your question is about the algorithms implemented in LORETA. I am not familiar with the workings of LORETA, so I can’t help you. I guess there is a LORETA forum that you could post to?

Thanks Mike for your reply. I tried contacting them but could not do so (the LORETA discussion forum has been closed). Apart from the algorithm-detail issue above, I still have one question and need your suggestions on it:

Experiment >>> I have recorded 32-channel resting-state EEG from 16 subjects under three conditions. After doing source reconstruction and finding the current density at 6239 voxels, I want to compare these three conditions. I have averaged my recording across time, so I want to apply a nonparametric permutation-based approach across frequency * voxels (6239) * conditions.

Since it is a repeated-measures design, if there were only two conditions we would just exchange the condition labels within pairs (since there are 16 pairs, all possible relabellings give 2^16 permutations), calculate Tmax (for the voxel-based statistics) in each permutation, build a Tmax distribution from all possible permutations (or from 1000 random permutations), and finally obtain the threshold t-value for alpha = 0.05.
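
For the two-condition case, what I have in mind is roughly the MATLAB sketch below (data1 and data2 here are hypothetical 16-by-6239 subjects-by-voxels matrices of current density); please tell me if I have misunderstood the procedure:

```matlab
% Minimal sketch of a paired-samples Tmax permutation test (two conditions).
% data1 and data2 are hypothetical subjects-by-voxels matrices (16 x 6239).
nPerm = 1000;                           % number of random permutations
diffs = data1 - data2;                  % paired difference per subject and voxel
[nSubj, nVox] = size(diffs);

% observed paired t-statistic at every voxel
tObs = mean(diffs) ./ ( std(diffs) ./ sqrt(nSubj) );

% null distribution of the maximum |t| over voxels
tMaxNull = zeros(nPerm,1);
for permi = 1:nPerm
    flips  = sign(randn(nSubj,1));                 % exchange condition labels within each pair
    pdiffs = bsxfun(@times, diffs, flips);         % sign-flipped differences
    tPerm  = mean(pdiffs) ./ ( std(pdiffs) ./ sqrt(nSubj) );
    tMaxNull(permi) = max(abs(tPerm));
end

% voxel-wise threshold controlling the family-wise error rate at alpha = 0.05
tThresh   = prctile(tMaxNull, 95);
sigVoxels = find(abs(tObs) > tThresh);
```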

Ques 1 >>>
Now that I have three conditions (repeated design), can I calculate a global t-threshold for alpha = 0.05?

Ques 2 >>> Is the procedure below valid?

I can get Tmax distributions from the non-parametric permutation comparisons among all three conditions (taking two at a time), giving three Tmax distributions for the three pairwise comparisons.

Can I then just pool all three Tmax distributions into one to get a global Tmax distribution, and from there finally calculate my global t-threshold at alpha = 0.05?

Please share your thoughts on this procedure. Waiting eagerly for your reply.

ashish

For one factor with three levels, you could use t-tests for all three pairwise comparisons, but then the p-value should be Bonferroni corrected to .05/3=.0167. Or you could use the nonparametric one-way ANOVA, which is called the Kruskal-Wallis test. A quick google search reveals that MATLAB has an implementation of this in the stats toolbox.
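
At a single voxel, a minimal sketch of those two options might look something like this (cond1, cond2, and cond3 here are hypothetical 16-by-1 vectors, one value per subject):

```matlab
% option 1: all three pairwise paired t-tests, Bonferroni-corrected alpha
alphaBonf = 0.05/3;
[~, p12] = ttest(cond1, cond2);
[~, p13] = ttest(cond1, cond3);
[~, p23] = ttest(cond2, cond3);
sigPairs = [p12 p13 p23] < alphaBonf;

% option 2: Kruskal-Wallis test (Statistics Toolbox); columns are conditions.
% Note that kruskalwallis treats the three columns as independent groups.
pKW = kruskalwallis([cond1 cond2 cond3], [], 'off');   % 'off' suppresses the figure
```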

Hi Mike, sorry to bother you once again. My experimental design consists of comparing 6239 voxels across 8 frequency bands under three conditions, so I guess I need to go for a non-parametric permutation-based test, which would take care of the multiple comparisons issue (6239 * 8 * 3).

One way is to compare two conditions at a time with the non-parametric permutation-based test (which takes care of the multiple comparisons issue across voxels and frequencies) and then divide alpha by 3 as a Bonferroni correction for the three pairwise comparisons.

I guess this (the above) is what you meant, Mike, when you said: "For one factor with three levels, you could use t-tests for all three pairwise comparisons, but then the p-value should be Bonferroni corrected to .05/3=.0167."

Is creating a global T distribution, which would take care of the multiple comparisons issue across 6239 voxels * 8 frequency bands * three conditions, not a good idea? (I described the procedure in my previous mail; does it have some loopholes?)

As for your second suggestion of using Kruskal-Wallis: can it be used to take care of the multiple comparisons issue at the level of 6239 * 8 * 3?

ashish

It’s often difficult to correct for everything. I think in this case I would use cluster correction over the 6k voxels using a Bonferroni p-value of .0167, and then run the analysis separately per frequency. You could also try correcting for frequencies, but I would be afraid that would get so conservative that even true effects might not survive, unless the effect sizes are really large.


Hi,

It sounds like the classic headache of an fMRI experiment (due to the number of voxels). Of course, you are trying to control your Type I error rate just like in all brain-imaging experiments. I think a reasonable statistical design (i.e. with a good trade-off between Type I and Type II error rates) would be something like:

  1. For each frequency band, find clusters of contiguous voxels in which each voxel is significant at some uncorrected p (= 0.05/3). Note that this p corresponds to an F-value because you have more than two conditions, and you have an F-value for each time-point within your time-window of interest.
  2. Then, permutation testing can be used to determine the number of contiguous voxels that would be expected by chance (i.e. shuffle the data e.g. 5000 times, as in the LORETA default setting, to find the null distribution of Fmax).
  3. Calculate the Fmax on your original data and then you can find the significant clusters. A cluster is considered significant if its number of voxels is greater than 95% of the clusters found when the data have been permuted many times (i.e. from the null distribution). A rough MATLAB sketch of these steps follows this list.
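
Something like the sketch below, for one frequency band, assuming a hypothetical data array (subjects x voxels x conditions) and a hypothetical voxNeighbors cell array in which voxNeighbors{v} lists the voxels adjacent to voxel v. This is only a sketch of the idea (cluster extent with an F statistic), not the LORETA implementation:

```matlab
% Sketch of a cluster-extent permutation test for one frequency band.
% data is a hypothetical subjects x voxels x conditions array (16 x 6239 x 3);
% voxNeighbors is a hypothetical cell array of adjacent-voxel indices.
[nSubj, nVox, nCond] = size(data);
nPerm   = 5000;                                  % LORETA's default number of permutations
pUncorr = 0.05/3;                                % uncorrected (Bonferroni-adjusted) voxel threshold
fCrit   = finv(1-pUncorr, nCond-1, (nCond-1)*(nSubj-1));   % F cutoff for suprathreshold voxels

maxClustNull = zeros(nPerm,1);
for permi = 0:nPerm                              % permutation 0 is the observed data
    pdata = data;
    if permi > 0
        for si = 1:nSubj                         % shuffle condition labels within each subject
            pdata(si,:,:) = data(si,:,randperm(nCond));
        end
    end

    % one-way repeated-measures F-value at every voxel
    condMeans = squeeze(mean(pdata,1));          % voxels x conditions
    subjMeans = squeeze(mean(pdata,3));          % subjects x voxels
    grandMean = mean(condMeans,2)';              % 1 x voxels
    ssCond = nSubj * sum( bsxfun(@minus, condMeans, grandMean').^2, 2)';
    ssSubj = nCond * sum( bsxfun(@minus, subjMeans, grandMean ).^2, 1);
    ssTot  = sum(sum( bsxfun(@minus, pdata, reshape(grandMean,1,nVox,1)).^2, 1), 3);
    ssErr  = ssTot - ssCond - ssSubj;
    Fvals  = (ssCond/(nCond-1)) ./ (ssErr/((nCond-1)*(nSubj-1)));

    % find connected clusters of suprathreshold voxels (breadth-first search)
    supra      = Fvals > fCrit;
    visited    = false(1,nVox);
    clustSizes = [];
    for v = find(supra)
        if visited(v), continue; end
        queue = v; visited(v) = true; csize = 0;
        while ~isempty(queue)
            cur = queue(1); queue(1) = []; csize = csize + 1;
            nb  = voxNeighbors{cur}(:)';                 % adjacent voxels as a row vector
            nb  = nb( supra(nb) & ~visited(nb) );        % unvisited suprathreshold neighbours
            visited(nb) = true;
            queue = [queue nb];
        end
        clustSizes(end+1) = csize; %#ok<AGROW>
    end

    if permi == 0
        obsClustSizes = clustSizes;              % cluster sizes in the observed data
    else
        maxClustNull(permi) = max([clustSizes 0]);
    end
end

% a cluster is significant if it is larger than 95% of the permutation maxima
clustThresh = prctile(maxClustNull, 95);
sigClusters = find(obsClustSizes > clustThresh);
```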

This approach inevitably assumes that real effects are present across contiguous voxels. Also, voxels that are temporal neighbours could end up in the same cluster.

Such an approach has weak control of Type I errors, meaning that you are not confident about each significant time-frequency point. However, it seems good to me for detecting broad effects (over time, frequency, and space). Of course, there is no absolutely correct statistic; it's all about choices!

Waiting for relevant opinions…

That’s also a good approach. I was additionally thinking about an fMRI-style approach, with a t-test of the average across all conditions against baseline to identify “task-relevant” regions. This would only require p<.05. Then the activity of all voxels within each significant region can be averaged and tested for condition differences without worrying about correcting for 6k voxels. It’s a principled approach, but it has the disadvantage that if two conditions show inverted effects, or if two conditions have no task effect but the third has a strong effect, then these regions might not survive the omnibus test.
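
A rough sketch of what I mean, assuming a hypothetical data array (subjects x voxels x conditions) and a hypothetical baseline array (subjects x voxels) of baseline current density; for simplicity it pools all suprathreshold voxels into one region, rather than averaging within each significant region separately:

```matlab
% step 1: average across conditions and test each voxel against baseline
taskAvg = mean(data,3);                        % subjects x voxels
[~, pTask] = ttest(taskAvg, baseline);         % paired t-test at every voxel
taskVox = find(pTask < 0.05);                  % "task-relevant" voxels (only p < .05 required)

% step 2: average activity over the task-relevant voxels and test for condition differences
% (here simply the three pairwise paired t-tests with Bonferroni correction)
roiData = squeeze(mean(data(:,taskVox,:), 2)); % subjects x conditions
[~, p12] = ttest(roiData(:,1), roiData(:,2));
[~, p13] = ttest(roiData(:,1), roiData(:,3));
[~, p23] = ttest(roiData(:,2), roiData(:,3));
sigPairs = [p12 p13 p23] < 0.05/3;
```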

Anyway, yeah, lots of choices and no single correct answer :wink:

Thanks for the idea!
Intuitively, if @ashish1986 finally tried to correct for these 8(!) frequency bands in the post-hoc pairwise comparisons of every region-specific test, probably no region would survive the thresholds. So I think it is too much to examine 8 bands in the same study and still hope for robust statistical conclusions (maybe some prior knowledge can narrow down the bands under investigation :thinking: )