Rejecting EEGs based on number of ICs removed versus proportion of variance removed?

Hi all,

I am wondering if anyone who cleans EEG with ICA has an opinion about this. Some researchers set a threshold where EEG datasets with more than a certain number of artifact ICs should be rejected outright. First, I’m wondering what people here think about setting such a threshold, and how strict it ought to be? Second, while QC’ing data, is it better to set a threshold on 1) the number of artifact ICs removed, or 2) the proportion of variance explained by those same artifact ICs?

In other words, I might have one dataset where I removed 10 ICs, but those ICs only explain 20% of the variance in the original data, whereas in another dataset I removed 5 ICs, but those ICs explain 60% of the variance in the original data. My intuition would be that it makes more sense to throw out the second dataset than the first. However, I’m not sure I’ve ever heard of anyone doing it that way, so maybe the motivation for setting such a threshold is more about ensuring that the cleaned data are not low rank/low dimensional? Does anyone have an opinion on this?
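For concreteness, here is roughly what I mean by “proportion of variance explained by the removed ICs” — a minimal numpy sketch on simulated data. The mixing matrix `A` and sources `S` are stand-ins for whatever your ICA implementation returns, and the `pvaf` definition follows the usual “percent variance accounted for” idea (variance of the data minus the back-projection, relative to the variance of the data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for ICA output: 8 channels, 8 sources, 1000 samples
n_ch, n_src, n_samp = 8, 8, 1000
A = rng.normal(size=(n_ch, n_src))    # mixing matrix (channels x components)
S = rng.normal(size=(n_src, n_samp))  # component time courses
X = A @ S                             # the "recorded" data

def pvaf(X, proj):
    """Percent variance accounted for by the back-projection `proj`:
    100 * (1 - mean channel variance of (X - proj) / mean channel variance of X)."""
    return 100 * (1 - np.mean(np.var(X - proj, axis=1)) / np.mean(np.var(X, axis=1)))

removed = [0, 1]                      # indices of the ICs you would reject
proj = A[:, removed] @ S[removed, :]  # back-project only the removed ICs
removed_pvaf = pvaf(X, proj)          # % variance those ICs account for

# Sanity checks on the definition:
full = pvaf(X, A @ S)             # all components -> 100% by construction
none = pvaf(X, np.zeros_like(X))  # no components  -> 0%
```

If you happen to be in MNE-Python, recent versions expose `ica.get_explained_variance_ratio(...)`, which gives a comparable number without hand-rolling it.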

To add a bit of a twist, suppose my analysis is within-subject. Does it make sense to check whether the number of artifact ICs removed differs between treatments/conditions? On the one hand, removing more ICs in one condition than another may bias the results, but on the other hand, one condition might just be inherently noisier than another … Would this question change at all if we swapped “number of ICs removed” with “variance explained by the removed ICs”?

Finally, would it make sense to add either of these two variables (number of ICs removed or variance explained by removed ICs) as a covariate in the analysis? My intuition says probably not, but I’m wondering what opinions folks here have.

Thanks,
Joel

Hi Joel. Interesting thoughts. Of course, all thresholds are arbitrary. I would be reluctant to endorse either of those strategies. On the other hand, my preferred strategy (careful inspection and an informed decision about whether to reject data) is also open to criticism of potential bias or reduced reproducibility. There are no universally good or accepted methods of removing parts of the data.

My advice is to be concerned with the quality of the data, not with metrics like IC count or percent variance. Having reduced-rank data usually doesn’t negatively impact your analyses (depending on what kinds of analyses you are doing and how reduced the rank is).

After cleaning, I do think it’s a good idea to compare the number of components removed for the different conditions. That will indicate whether the data were systematically noisier in one condition or group.
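To make that comparison concrete, a paired test on per-subject counts would do it. A sketch with made-up numbers (with small counts a nonparametric test such as `scipy.stats.wilcoxon` is arguably more appropriate, but a plain paired t statistic shows the idea and keeps this dependency-free):

```python
import numpy as np

# Hypothetical per-subject counts of removed ICs in two conditions
n_removed_a = np.array([5, 7, 4, 6, 8, 5, 6, 7])
n_removed_b = np.array([6, 9, 5, 8, 9, 6, 8, 9])

# Paired t statistic: mean within-subject difference over its standard error
diff = n_removed_b - n_removed_a
t = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))
```

A large |t| (compared against a t distribution with n-1 degrees of freedom) would suggest the cleaning removed systematically more ICs in one condition.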

Hi Mike, thanks so much for your thoughts. My own inclination is also to be less concerned with the number of components rejected than with taking care that the data are properly cleaned. However, I wanted to consider some alternative perspectives. Re: comparing components across conditions, do you have a strong feeling about adding the number of components as a covariate in an analysis of, say, power or coherence before and after brain stimulation?

Best wishes,
Joel

Yes, that’s also an interesting idea. What people sometimes do is test whether the number of components removed differed between groups. I don’t believe I’ve seen your suggestion implemented, but it could be good. I would be surprised if it were a significant predictor, though, because you selected those components to be non-brain-related, so the IV and DV are pushed towards orthogonality by design. But anyway, yeah, go for it.
