Controlling for multiple comparisons in multiple robust single-trial regression analyses

Dear Community, dear Mike,

I hope this finds you well. I am reaching out for advice on how best to control for multiple comparisons in regression analyses. In our lab we use multiple robust single-trial regression, in both the time and time-frequency domains, to parse out neural representations of multiple task factors in EEG activity. These analyses yield regression coefficients/t-values for every time point and electrode (and frequency), revealing the time course and scalp topography of the relationship between each predictor and neural activity. To control for multiple comparisons, we currently correct the resulting p-values using the false discovery rate (Benjamini & Yekutieli, 2001) [1].
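For reference, the Benjamini–Yekutieli step-up procedure mentioned above can be sketched in a few lines of Python. This is a minimal illustrative implementation, not the poster's actual pipeline; the function name and shapes are my own.

```python
import numpy as np

def fdr_by(pvals, q=0.05):
    """Benjamini-Yekutieli FDR correction, valid under arbitrary
    dependency between tests. Returns a boolean mask of rejected
    (significant) tests. `pvals` can be any shape, e.g. time x electrodes."""
    p = np.asarray(pvals, dtype=float)
    flat = p.ravel()
    m = flat.size
    order = np.argsort(flat)
    sorted_p = flat[order]
    # BY penalty: harmonic number c(m) = sum_{i=1}^{m} 1/i
    c_m = np.sum(1.0 / np.arange(1, m + 1))
    thresh = q * np.arange(1, m + 1) / (m * c_m)
    below = sorted_p <= thresh
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index still below threshold
        mask[order[: k + 1]] = True        # reject all tests up to and incl. k
    return mask.reshape(p.shape)
```

Note the "the more significant tests, the higher the cutoff" behaviour the poster describes: the threshold for the i-th smallest p-value grows with i, so a window containing one huge effect raises the bar that weaker effects are compared against.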

However, FDR has a few downsides: the more significant time points there are, the higher the p-value cutoff; if you include a time window with a very large effect (e.g., the P3), weaker effects may also come out significant (FDR provides only weak FWER control); and localisation of effects is difficult.

Intuitively, dealing with the multiple comparisons problem using non-parametric approaches such as cluster-based permutation tests seems to fit best with the biophysical constraints we encounter in neuroscience. Yet it is difficult to come up with a “correct” shuffling of our regression results (a time series of b/t-values per regressor). I searched the literature and encountered a way to run one-sample permutation tests in which the FWER is controlled using the Tmax correction method (Blair et al., 1994) [2]. Essentially, the signs of the b/t-values are randomly flipped to create the empirical H0 distribution and critical t-values. This works well in the time domain (see attached). I was wondering whether you can apply the same logic to our time-frequency regression outputs? Would I run the permutation test for each time series at each frequency and then do the cluster-based correction? Any advice and thoughts on this are much appreciated.
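To make the sign-flip Tmax idea concrete, here is a minimal sketch, assuming the null hypothesis makes the subject-level coefficients symmetric around zero (so flipping each subject's sign is a valid exchange). The function name, array shapes (subjects x time), and permutation count are illustrative, not the poster's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def tmax_sign_flip(betas, n_perm=2000, alpha=0.05):
    """One-sample sign-flip permutation test with Tmax FWER correction.
    `betas`: subjects x time array of regression coefficients.
    Returns the observed t-values and the critical |t| taken from the
    distribution of the maximum |t| across time under permutation."""
    n_sub, n_time = betas.shape

    def tvals(x):
        # one-sample t-statistic at every time point
        return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(n_sub))

    t_obs = tvals(betas)
    t_max = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n_sub, 1))  # one flip per subject
        t_max[i] = np.abs(tvals(betas * flips)).max()     # max over time points
    t_crit = np.quantile(t_max, 1 - alpha)
    return t_obs, t_crit
```

Because the null distribution is built from the maximum statistic over the whole map, any time point with |t_obs| > t_crit is significant at FWER alpha; extending to time-frequency just means taking the max over the full 2D map instead of over time.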


[1] Benjamini, Y., & Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, 29(4), 1165-1188.

[2] Blair, R. C., Higgins, J. J., Karniski, W., & Kromrey, J. D. (1994). A study of multivariate permutation tests which may replace Hotelling's T2 test in prescribed circumstances. Multivariate Behavioral Research, 29(2), 141-163.

Hi Hans. In general, anything that works for 1D signals (i.e., an ERP) will work for 2D signals (i.e., a time-frequency map), as long as you’re still applying the correction to the map and not to each frequency separately.
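As an aside on "correcting the map and not each frequency separately": when forming clusters on a 2D time-frequency map, the suprathreshold points should be labelled on the whole map so clusters can span neighbouring frequencies. A minimal sketch of that labelling step (using `scipy.ndimage`; the function name and threshold are illustrative):

```python
import numpy as np
from scipy import ndimage

def cluster_masses(tmap, t_thresh):
    """Find suprathreshold clusters in a 2D time-frequency t-map and
    return the summed |t| (cluster mass) of each cluster. Labelling is
    done on the full 2D map, so clusters may extend across both time
    and frequency."""
    supra = np.abs(tmap) > t_thresh
    labels, n = ndimage.label(supra)  # 2D connected components
    return np.array(ndimage.sum(np.abs(tmap), labels, index=np.arange(1, n + 1)))
```

In a full cluster-based permutation test, the same function would be applied to each permuted map, and the maximum cluster mass per permutation would form the null distribution against which the observed cluster masses are compared.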

In these kinds of situations, you basically have two options for permuting. You can permute the sign of the coefficient, which means you’re randomly swapping (A-B) for (B-A); or you can permute the time series through the cut-and-swap method, which means you’re randomly changing the time points at which an effect could be observed.
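The second option, cut-and-swap, amounts to a circular shift of each time series: cut at a random time point and swap the two halves, so an effect could land at any latency while the autocorrelation structure of the signal is preserved. A minimal sketch (the function name is my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def cut_and_swap(x):
    """Cut one time series at a random point along its last axis and
    swap the two halves (a random circular shift). The cut is drawn
    from 1..len-1, so the output always differs from the input."""
    cut = int(rng.integers(1, x.shape[-1]))
    return np.concatenate([x[..., cut:], x[..., :cut]], axis=-1)
```

Applied independently to each subject's coefficient time series on every permutation, this builds a null distribution of effects that could have occurred at any time point.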

From your description, I’d recommend swapping the sign, to be consistent with how you approached the ERP statistics.

Thank you very much, Mike!