Referring to the ANTSD book:
To get the z-score (to be used later for group analysis) as an output of mutualinformationx.m, I use this line:
[m, e, b] = mutualinformationx(log10(abs(datax).^2), log10(abs(datay).^2), bins_pow, true);
and by default it runs 500 iterations of permutation testing on the mutual information. I did that for my data and got spurious tiny clusters that do not look much like what I expected.
So I tried it on the book's sampleEEGdata.mat: the code for simple MI gave the same result as chapter 29, figure 11, and then I ran it with 500, 1000, and 2000 permutation iterations.
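For anyone following along, the z-score from mutualinformationx.m comes from permutation testing: the observed MI is standardized against a null distribution built from shuffled data. A minimal sketch of that idea (not the book's MATLAB code; hist_mi and mi_zscore below are simplified stand-ins written in Python for illustration):

```python
import numpy as np

def hist_mi(x, y, nbins=20):
    """Histogram-based mutual information estimate (bits) between two 1-D signals."""
    cxy, _, _ = np.histogram2d(x, y, bins=nbins)
    pxy = cxy / cxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def mi_zscore(x, y, nbins=20, n_perm=500, rng=None):
    """Standardize observed MI against a null built by shuffling one signal."""
    rng = np.random.default_rng(rng)
    observed = hist_mi(x, y, nbins)
    null = np.array([hist_mi(rng.permutation(x), y, nbins) for _ in range(n_perm)])
    return (observed - null.mean()) / null.std()

# Demo: strongly dependent signals give a large z; independent signals do not.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
z_dep = mi_zscore(x, x + 0.1 * rng.standard_normal(2000), n_perm=200, rng=1)
z_ind = mi_zscore(x, rng.standard_normal(2000), n_perm=200, rng=1)
```

Note that the null mean and standard deviation are themselves Monte Carlo estimates, so with a few hundred iterations, pixels whose true z sits near the threshold can drift in and out of significance between runs.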
(In the graphs, left is power/amplitude MI, right is phase MI, same as in the book; black lines outline p<0.01.)
Significant clusters of power/amplitude MI seem to have some consistency, but the phase-based MI keeps changing; not arbitrarily, it seems, but I still cannot wrap my head around what is going on.
Hi Yllcare. The maps themselves look the same, so I guess the difference has to do with the clustering and plotting.
As for your data, are you also applying a cluster correction method? Small clusters that are difficult to interpret are sometimes filtered out in cluster correction.
Yes, color-wise the graphs look the same, but their z-scores differ enough to make the significant areas of phase-based MI alternate as shown. By the way, those clusters were not determined by a cluster-correction method; I simply marked every significant z-score output by mutualinformationx.m, which produced those clusters.
In the book (29.12), it says: "After permutation testing, the resulting standardized measure of mutual information, MIz, can be statistically evaluated on its own or used in parametric statistical tests at the group level." So I assumed I could directly mark those pixels; however, I might be misinterpreting something here. Here are two results in the same graph style (MI of EEG and EMG, 1000 iterations, p<0.01) from my data (upper left is power/amplitude MI, upper right is phase MI; below are power correlation and phase synchronization).
As you suggested, I will modify the .m file to also output the permuted MI values, so that I can build many null clusters and base the correction on the distribution of those clusters' sizes, instead of (mi-mean)/std, which gave me pixel-corrected-looking results. (To be honest, that reads as pixel-based multiple-comparisons correction to me, in both method and graph.)
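That cluster-size correction can be sketched like this (a generic Python illustration, not the book's code; it assumes you have the observed z-map plus one z-map per permutation, and uses scipy.ndimage.label for the clustering):

```python
import numpy as np
from scipy import ndimage

def max_cluster_size(mask):
    """Size (in pixels) of the largest connected suprathreshold cluster."""
    labels, n = ndimage.label(mask)
    return int(np.bincount(labels.ravel())[1:].max()) if n else 0

def cluster_correct(zmap, perm_zmaps, zthresh=2.33, pcluster=0.05):
    """Keep only observed clusters larger than the (1 - pcluster) quantile
    of the null distribution of maximum cluster sizes."""
    null_sizes = [max_cluster_size(np.abs(pz) > zthresh) for pz in perm_zmaps]
    cutoff = np.quantile(null_sizes, 1 - pcluster)
    labels, n = ndimage.label(np.abs(zmap) > zthresh)
    keep = np.zeros(zmap.shape, dtype=bool)
    for i in range(1, n + 1):
        cluster = labels == i
        if cluster.sum() > cutoff:
            keep |= cluster
    return keep

# Demo: a 5x5 block of strong z-values survives; an isolated pixel does not,
# because each null map contains one spurious suprathreshold pixel (cutoff = 1).
rng = np.random.default_rng(0)
zmap = np.zeros((20, 20))
zmap[5:10, 5:10] = 5.0   # a real-looking cluster
zmap[0, 0] = 3.0         # an isolated "blip"
perm = []
for _ in range(100):
    pz = np.zeros((20, 20))
    pz[rng.integers(20), rng.integers(20)] = 3.0
    perm.append(pz)
mask = cluster_correct(zmap, perm)
```

The point of taking the *maximum* cluster size per permutation is that it controls the family-wise error rate over the whole map, which is exactly what the per-pixel (mi-mean)/std threshold does not do.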
Apologies if I wasn’t clear in the book. The point is that you can interpret the z-values more easily than the MI values; they are indeed in statistical standard-deviation units. The issue with the statistical thresholding is multiple comparisons: each TF plot you show contains thousands of tests, so it’s possible that some of the little blips here and there are alpha errors that would not exceed a cluster-level threshold. Thresholding the map at p<.01 is indeed a pixel-level threshold, uncorrected for multiple comparisons.
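To make the scale of that multiple-comparisons problem concrete (the map dimensions below are hypothetical, just for illustration):

```python
from scipy import stats

alpha = 0.01
# Two-tailed z cutoff corresponding to p < .01 at a single pixel
z_cutoff = stats.norm.ppf(1 - alpha / 2)       # about 2.58
# A hypothetical TF map: 40 frequencies x 640 time points
n_tests = 40 * 640
# Under the null, this many pixels are expected to cross threshold by chance
expected_false_positives = alpha * n_tests     # 256 pixels
```

So even on pure noise, an uncorrected p<.01 threshold would light up a couple of hundred pixels in a map of that size.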
Do you have multiple subjects? Usually a good strategy here is to compute these z-maps for each individual, and then at the group level test the consistency of the z-values over subjects (e.g., by doing a t-test at each time-frequency point). That should improve the results, because the small single-subject alpha errors are less likely to be consistent over subjects. Of course, if they are consistent over subjects, then they’re most likely not alpha errors but real effects.
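A sketch of that group-level step (in Python; the subject count and map size are made up for the demo):

```python
import numpy as np
from scipy import stats

def group_ttest(subject_zmaps):
    """One-sample t-test at each time-frequency pixel across subjects
    (H0: mean z-value over subjects is 0)."""
    stacked = np.stack(subject_zmaps)   # shape: (n_subjects, n_freqs, n_times)
    t, p = stats.ttest_1samp(stacked, 0.0, axis=0)
    return t, p

# Demo: 12 simulated subjects, each with independent noise plus one region
# carrying a consistent effect. Single-subject blips average out; the
# consistent region survives the group test.
rng = np.random.default_rng(0)
maps = [rng.standard_normal((10, 10)) for _ in range(12)]
for m in maps:
    m[2:5, 2:5] += 3.0                  # consistent effect across subjects
t, p = group_ttest(maps)
```

The resulting p-map can then itself be thresholded (and, ideally, cluster-corrected) at the group level.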
Hope that helps,