Struggling with noisy data, extracting alpha and beta

I’ve been working on this for longer than I would like to admit.
I have data from 20 subjects performing a self-paced reach-and-grasp task, and the EEG data are quite noisy. After much processing and trying various strategies from ANTS, Makoto's pipeline, etc. (currently: band-pass filter 2-50 Hz, re-reference to the ears, ASR, ICA, reject components), I can see alpha and beta desynchronization in time-frequency plots averaged over trials (using Ch. 34-style wavelets). There are usually 6 movements in a trial, and I baseline each one to the 1 second before the entire trial begins.

|--rest 19s--|-- baseline 1s--|--move 1--|--move 2--|--move 3--|--move 4--|--move 5--|--move 6--|--rest...
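For reference, here is a minimal numpy sketch of the single-trial wavelet TF + pre-trial-baseline step I'm doing (synthetic single-channel data; the 250 Hz rate, 7-cycle wavelets, and the 10 Hz toy source are placeholders, not my actual recording parameters):

```python
import numpy as np

def morlet_tf(signal, srate, freqs, n_cycles=7):
    """Single-channel time-frequency power via complex Morlet wavelets."""
    wt = np.arange(-2, 2, 1 / srate)                # wavelet support (s)
    power = np.zeros((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)          # Gaussian width for this freq
        wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt**2 / (2 * sigma**2))
        wavelet /= np.abs(wavelet).sum()            # crude amplitude normalization
        power[i] = np.abs(np.convolve(signal, wavelet, mode='same')) ** 2
    return power

srate = 250
times = np.arange(0, 8, 1 / srate)                  # 1 s baseline, then movements
rng = np.random.default_rng(0)

# toy trial: 10 Hz alpha whose amplitude drops (desynchronizes) after 1 s
alpha_amp = np.where(times < 1.0, 2.0, 0.5)
trial = alpha_amp * np.sin(2 * np.pi * 10 * times) + rng.normal(0, 1, times.size)

freqs = np.arange(5, 31)
power = morlet_tf(trial, srate, freqs)

# dB change relative to the 1 s pre-movement baseline
base = power[:, times < 1.0].mean(axis=1, keepdims=True)
power_db = 10 * np.log10(power / base)
alpha_row = power_db[freqs == 10][0]   # goes negative after baseline = desync
```

On a single simulated trial like this the averaged alpha row comes out cleanly negative, but with realistic noise levels the per-trial picture is much messier, which is basically my problem.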

Individual movements are a mess. I would like to show synchrony between phases of the reach and grasp movement, and changes in alpha and beta power. My efforts so far have been for naught.

Is it normal for each trial/movement to be unique/strange, but average to a reasonable time-freq plot?
Or would you expect each trial/movement to be similar?

Any guidance for better denoising? Wavelet packet decomposition looks interesting, but so far I've been unable to locate code.

I’m open to any suggestions, and would be most appreciative.

It’s not unusual to have nice-looking trial average plots and crappy single-trial data, particularly for EEG.

Are you doing this at the channel level? Perhaps a multivariate approach would work better. You could try narrowband filtering the data in the alpha band, running an ICA just on the narrowband data, and then analyzing an alpha component (e.g., like this paper).
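Something like this, as a rough Python sketch (synthetic data standing in for real EEG; the 8-12 Hz band edges, filter order, and component count are arbitrary choices for illustration):

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
srate, n_ch, n_samp = 250, 8, 250 * 20
t = np.arange(n_samp) / srate

# toy data: one shared, amplitude-modulated 10 Hz "alpha source"
# mixed into all channels, plus sensor noise
source = np.sin(2 * np.pi * 10 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.2 * t))
mixing = rng.normal(0, 1, n_ch)
eeg = np.outer(mixing, source) + rng.normal(0, 1, (n_ch, n_samp))

# 1) narrowband filter around alpha
b, a = butter(4, [8, 12], btype='bandpass', fs=srate)
narrow = filtfilt(b, a, eeg, axis=1)

# 2) ICA on the narrowband data only
ica = FastICA(n_components=4, random_state=0)
comps = ica.fit_transform(narrow.T).T              # components x samples

# 3) pick the component with the largest back-projected variance
best = comps[np.argmax((ica.mixing_ ** 2).sum(axis=0))]
corr = abs(np.corrcoef(best, filtfilt(b, a, source))[0, 1])
```

In this toy case the selected component recovers the simulated alpha source almost perfectly; on real data you'd inspect topographies and spectra to choose the alpha component rather than trusting variance alone.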

Or you could try a frequency-specific spatial filter based on generalized eigendecomposition, which I talk about in this paper; see also this paper.
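A bare-bones sketch of the GED idea, contrasting narrowband-filtered covariance (S) against broadband covariance (R); the simulated 18 Hz source, band edges, and shrinkage amount are illustrative only:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh

rng = np.random.default_rng(2)
srate, n_ch, n_samp = 250, 16, 250 * 30
t = np.arange(n_samp) / srate

# toy data: a beta (18 Hz) source projected into the channels, plus noise
source = np.sin(2 * np.pi * 18 * t)
pattern = rng.normal(0, 1, n_ch)
eeg = np.outer(pattern, source) + rng.normal(0, 2, (n_ch, n_samp))

# S: covariance of narrowband-filtered data; R: covariance of broadband data
b, a = butter(4, [15, 21], btype='bandpass', fs=srate)
S = np.cov(filtfilt(b, a, eeg, axis=1))
R = np.cov(eeg)
R += 1e-6 * np.trace(R) / n_ch * np.eye(n_ch)   # tiny shrinkage for stability

# generalized eigendecomposition: maximize narrowband-to-broadband variance
evals, evecs = eigh(S, R)                # ascending eigenvalues
w = evecs[:, -1]                         # spatial filter, largest eigenvalue
component = w @ eeg                      # component time series
topo = R @ w                             # forward model / topography to plot

corr = abs(np.corrcoef(filtfilt(b, a, component), source)[0, 1])
```

Note that you interpret the topography (R @ w), not the filter weights themselves, and the sign of the component is arbitrary.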

Thanks!
I’ll look up those references.
I went through your DRSS course, which was very good. I learned a lot about linear algebra, PCA, eigen* and their application.
I’ve found I haven’t been able to apply the principles (pun intended) to my TF data. When looking at the components, each of my subjects seems to be a special snowflake. E.g., when I run PCA using an 18 Hz, 3 Hz FWHM filter, one subject has a strong component around C3. The rest are a mixed bag: frontal and/or parietal, some veering left or right toward temporal regions.
When I built covariance matrices from the rest period mentioned above and ran eig() against covariance matrices of the individual trials (offset by a kinematic parameter), the result was a component at C2.
The task being self-paced, rather than time-locked like, say, a VEP, may be affecting my results.
It’s also quite possible that I’m not correctly applying the techniques.

Trying to get my poster finished and printed by Tuesday. :expressionless:

Do you have EMG data or other measures of muscle movement? Perhaps it’s possible to select data according to movement times or strengths, or to compute cortico-muscular coherence.
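Cortico-muscular coherence can be computed with off-the-shelf tools, e.g. scipy.signal.coherence. A toy sketch with a simulated shared beta drive (all amplitudes and the 20 Hz frequency are made up for illustration):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(3)
srate, n_samp = 250, 250 * 60
t = np.arange(n_samp) / srate

# toy signals: EEG and EMG share a 20 Hz (beta) drive, plus independent noise
drive = np.sin(2 * np.pi * 20 * t)
eeg = drive + rng.normal(0, 2, n_samp)
emg = 0.5 * drive + rng.normal(0, 2, n_samp)

# Welch-based magnitude-squared coherence
f, coh = coherence(eeg, emg, fs=srate, nperseg=512)

beta_coh = coh[np.argmin(np.abs(f - 20))]   # high: shared drive
off_coh = coh[np.argmin(np.abs(f - 40))]    # near chance level
```

The chance level of the coherence estimate scales roughly with 1/(number of Welch segments), so you want plenty of data per condition.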

In general, noise in EEG data comes from (1) sensor noise, (2) brain noise/variability, (3) non-brain sources (eyes, muscles, heart), and (4) cognitive/task variability. Different strategies help with the different sources of noise. It sounds like you’ve done all you can about #3; perhaps try to address #4 by selecting data. As long as the data selection method is blind to the hypotheses, it can be a good strategy.
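For example, one hypothesis-blind selection rule is a robust outlier cut on whole-trial RMS amplitude, which never looks at the alpha/beta effects you care about (toy single-channel sketch; the z > 3 cutoff is an arbitrary convention):

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_samp = 60, 500

# toy single-channel trials; a handful are contaminated by large artifacts
trials = rng.normal(0, 1, (n_trials, n_samp))
bad_idx = rng.choice(n_trials, 6, replace=False)
trials[bad_idx] += rng.normal(0, 8, (6, n_samp))

# blind selection: reject trials whose whole-trial RMS is a robust outlier,
# using median/MAD so the artifacts don't inflate the threshold itself
rms = np.sqrt((trials ** 2).mean(axis=1))
med = np.median(rms)
mad = np.median(np.abs(rms - med)) * 1.4826   # MAD scaled to ~std
keep = np.abs(rms - med) / mad < 3
```

Because the criterion uses broadband amplitude over the whole trial, not the band/time window of the hypothesis, it can be applied before looking at the results.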
