Question on study design for mu/beta ERD analysis

Hi Mike,

Hope you are doing well during this time.
I have some questions on study design for mu/beta ERD analysis.
To make progress during this time, I have been looking for publicly available datasets and found one that I could link to my primary research interest.
The problem is that, because the study was originally designed to answer other research questions, I am not sure whether the design is good enough to answer mine. The original goal of the study was to investigate neuronal oscillations during emotion imagery. The researchers used one-minute-long guided narrations (connected speech) to help participants fully engage with each emotion. After each narration was presented aurally, participants were given 1-2 minutes and asked to draw on any personal experience that might help them embody the emotion. The design was unusual in that, unlike other EEG studies, there were no time-locked stimuli.

My plan is to use the guided narrations as my stimuli. There were 15 emotions, and I extracted sentences from the narrations that contain either action or abstract verbs and grouped them into two categories based on verb type: action vs. abstract. My goal is to look at mu/beta rhythm modulation during action-verb vs. abstract-verb processing.

However, my concern is that in the narrations I found only 15-19 sentences relevant to my research question for each condition. In addition, the timing and token types (e.g., walk, run, jump) differ across sentences (e.g., "you are walking through a beautiful countryside", "You feel practically like walking on air"). I know that at least 60-70 time-locked trials per condition are usually required for this type of study when the sample size is around 13 at minimum. The sample size of the original study is 32, and I was wondering whether that could compensate for the limited number of trials per condition. Considering that the alpha rhythm is sensitive to inter-subject variability, along with other shortcomings, I don't think this is a workable plan. But I wanted to make sure whether there is even the slightest chance that I could turn this into something meaningful… Thank you so much.


Hi eeom. Great idea to use this time for reanalyzing existing/published data.

Topically, what you describe is quite far from my area of expertise, so I can't give you any specific advice. But the amount of data necessary for an analysis depends on the effect size, so 15 sentences could be enough. Also keep in mind that if each sentence takes 10 seconds, then you actually have 150 seconds of data per condition, which is quite a lot! Compare that with a typical perceptual task where each trial is 0.5 seconds, so 300 trials also gives 150 seconds. I'm just tossing a bunch of numbers around, but I hope you see the point. You can also try segmenting each sentence into, e.g., 2-second epochs, which is basically applying Welch's method to increase SNR.
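
Just to make that segmentation idea concrete, here's a minimal sketch (the sampling rate, channel, and data are placeholders; adapt it to your actual recordings):

```python
import numpy as np
from scipy.signal import welch

# Hypothetical parameters (assumptions, not taken from the original dataset)
fs = 500                                  # sampling rate in Hz
sentence_eeg = np.random.randn(10 * fs)   # 10 s of one channel during one sentence

# Welch's method: split the sentence into 2-second segments with 50% overlap,
# compute the power spectrum of each segment, and average the spectra.
freqs, psd = welch(sentence_eeg, fs=fs, nperseg=2 * fs, noverlap=fs)

# Average power in the mu band (roughly 8-13 Hz) for this sentence
mu_mask = (freqs >= 8) & (freqs <= 13)
mu_power = psd[mu_mask].mean()
print(f"Mu-band power for this sentence: {mu_power:.3f}")
```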

Furthermore, having a larger sample size can compensate for less data per subject. Again, I don't know about this area, but N=32 is larger than in a typical EEG study.

Good luck, I hope it works out well!

Hi Mike,

Thank you so much for your prompt reply. I am glad to hear that it may not be a bad idea, and the way you explained it made sense. Thank you so much!

Hi Mike,

I have a follow-up question about baseline selection.
For mu ERD analysis, the baseline is usually a time period between -2 and -1 s before stimulus onset, to avoid edge effects. However, because the original research design was so unusual and I am using their guided instructions as my stimuli, some of the clauses of interest occur in the middle of sentences (not at the beginning), so I am not sure whether I should use a pre-stimulus baseline or whether another baseline would give better results. To help clarify, here is an example from the guided instructions, which I categorized into two conditions based on the verb type (action vs. non-action) in each sentence.

“Perhaps you imagine a perfect day. The sun is shining and you are surrounded by your favorite friends, enjoying your favorite activity. Perhaps you are walking through a beautiful countryside, or are on your way to a special party.”

“Perhaps you imagine a perfect day” was categorized into the non-action condition, and “Perhaps you are walking through a beautiful countryside, or are on your way to a special party” into the action condition. Each sentence in the instructions is at least 1 second apart from the next, which should be enough for alpha-band analysis. Participants were asked to close their eyes while the instructions were presented aurally.

There are two other candidate baselines at the beginning of the experiment, and each has limitations. Since I am running a secondary data analysis, the design itself is not tailored to my research question, so I am trying to find the best way to work around this. Here are the baseline options and the limitations I can think of (a rough sketch of how I would use whichever baseline I pick follows the list). Could you please let me know which one would be the best option, or whether I should just discard my plan to run a secondary mu ERD analysis on these data because it has too many limitations?

The baseline options are:

  1. A 2-minute period at the very beginning of each experiment in which participants were asked to relax with their eyes open. If I use this baseline, I am not sure how to deal with the occipital alpha, since eyes were closed during the experimental condition but open during the baseline.

  2. A 10-second period at the beginning of each experiment in which participants were asked to relax with their eyes closed. This came after baseline 1) but before the guided instructions (option 3 below) were presented. Ten seconds seems long enough and the eyes were closed, so maybe this is the best option?

  3. A pre-stimulus baseline, i.e., the period right before each sentence, with eyes closed. Each sentence would then have its own baseline, but I am not sure whether I can use the same baseline for clauses in the middle of sentences. And since the pauses between sentences are not actually a baseline, I don't think I can use this option.
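
To make the comparison concrete, here is roughly how I would compute mu ERD relative to whichever baseline window I end up choosing (a minimal sketch with made-up numbers and placeholder arrays, not my actual pipeline):

```python
import numpy as np
from scipy.signal import welch

fs = 500  # assumed sampling rate in Hz (placeholder)

def mu_power(segment, fs, band=(8, 13)):
    """Average mu-band power of a 1-D EEG segment via Welch's method."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Placeholder arrays; in practice these would come from the preprocessed data
baseline_eeg = np.random.randn(10 * fs)   # e.g., option 2: 10 s eyes-closed rest
action_eeg   = np.random.randn(8 * fs)    # concatenated action-verb clauses
abstract_eeg = np.random.randn(8 * fs)    # concatenated non-action clauses

p_base = mu_power(baseline_eeg, fs)

# ERD expressed as percent change relative to the chosen baseline
erd_action   = 100 * (mu_power(action_eeg, fs)   - p_base) / p_base
erd_abstract = 100 * (mu_power(abstract_eeg, fs) - p_base) / p_base
print(f"Mu ERD (action): {erd_action:.1f}%  |  Mu ERD (non-action): {erd_abstract:.1f}%")
```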

Any advice would help me a lot. Thank you for taking the time to look at this.

Best,
Minsoo

Hmm, I see. My advice here is to take a kind of cross-validation approach. Basically, pick 1-2 subjects at random from your dataset, try all three baseline normalization options (and any others you might think of), and see which method gives you the results you most want to see.

But wait, isn’t that overfitting? Yes, but you’re overfitting on only a small sample of the data. The idea is then to apply that analysis pipeline to all the other datasets (the remaining subjects). That’s like your out-of-sample test set, and you’re not overfitting to that group, so the procedure is not circular or biased.
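
Here's a rough sketch of the logic (subject IDs, option names, and the analysis function are all placeholders, not your actual pipeline):

```python
import random

# Hypothetical setup: 32 subject IDs and three candidate baseline options
subjects = list(range(1, 33))
baseline_options = ["eyes_open_rest", "eyes_closed_rest", "pre_sentence"]

random.seed(1)
pilot = random.sample(subjects, 2)                 # small exploration set
holdout = [s for s in subjects if s not in pilot]  # kept untouched until the end

def run_pipeline(subject_ids, baseline):
    """Placeholder for the full ERD analysis; returns a made-up effect size here."""
    return random.random()

# Step 1: explore all baseline options, but only on the pilot subjects.
pilot_results = {b: run_pipeline(pilot, b) for b in baseline_options}
chosen_baseline = max(pilot_results, key=pilot_results.get)

# Step 2: freeze the pipeline and apply it once to the held-out subjects.
# These are the results you report; the baseline choice was never fit to them.
final_effect = run_pipeline(holdout, chosen_baseline)
print(f"Chosen baseline: {chosen_baseline}, held-out effect: {final_effect:.2f}")
```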

Hope that helps!

Hi Mike,

Thank you so much for your advice and for explaining how I can justify the approach. I'll try it out and come back to you if I have any other questions. Thank you!