What I mean by “several rounds”: you place reference electrodes above one of the eyes and on the temples on both sides. This gives you a reference signal from which to extract the blink artefact from the cognitive data when you run the ICA. This is the standard procedure; I completely understand it.
1: As the subject does not sit still but performs a specific series of movements with the hand and torso, the EEG signal will contain artefacts caused by these movements. The hand and torso movements will be measured with EMG, as I would like time-course data on how the task (the series of movements) makes the subject physically tired. This can also act (I assume) as reference data for the ICA to clean the EEG.
2: I intend to use eye tracking combined with a stereo camera, so I can build a 3D gaze heat map to use in the digital twin of the real production environment (with a 1:1 relation) to analyse the concentration/mental workload of the subject. Since I use eye tracking, the frequency and duration of blinks can indicate how mentally tired the subject is; this is also visible in the raw EEG data.
My assumption: if the blinks are clearly identified thanks to the reference electrodes, I can extract blink frequency and duration as useful data, so I don't have to use computer vision to obtain it.
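To make the assumption concrete: once a blink-dominated reference trace is available, frequency and duration fall out of simple run detection. A minimal numpy sketch, assuming blinks show up as supra-threshold deflections in an EOG-like channel (the threshold value here is a hypothetical tuning parameter, not a standard):

```python
import numpy as np

def detect_blinks(eog, fs, threshold):
    """Detect blinks as supra-threshold runs in an EOG-like trace.

    Returns (blink rate per minute, array of blink durations in seconds).
    `threshold` is a hypothetical amplitude cutoff you would tune to
    your own recording; real pipelines use more robust detectors.
    """
    above = eog > threshold
    # Rising/falling edges of the supra-threshold runs.
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    # Guard against a blink cut off at either end of the recording.
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, len(eog)]
    durations = (ends - starts) / fs
    rate = len(starts) / (len(eog) / fs) * 60.0
    return rate, durations

# Synthetic 10 s trace at 250 Hz with two "blinks" of 0.2 s each.
fs = 250
eog = np.zeros(10 * fs)
eog[500:550] = 1.0    # blink 1: 50 samples = 0.2 s
eog[2000:2050] = 1.0  # blink 2
rate, durations = detect_blinks(eog, fs, threshold=0.5)
print(rate, durations)  # 12.0 blinks/min, [0.2, 0.2]
```

On real data the deflections are noisy and bipolar, so you would band-pass and rectify first, but the frequency/duration bookkeeping stays the same.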
Round 1: I first run ICA to clean the raw EEG data, so I have usable data for ERP analysis, focusing on the brain signal. Classic cleaning by the book; I save the data.
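In practice this round would use an ICA implementation (e.g. MNE-Python's `mne.preprocessing.ICA`). As a self-contained stand-in, the sketch below shows the simpler regression alternative: estimate, per EEG channel, how strongly the EOG reference leaks in, and subtract that contribution. This is not ICA, just the same "remove what correlates with the reference" idea in closed form:

```python
import numpy as np

def regress_out_eog(eeg, eog):
    """Remove the EOG-correlated part of each EEG channel by least squares.

    eeg: (n_channels, n_samples); eog: (n_samples,).
    A simpler stand-in for ICA-based cleaning, usable when a clean
    EOG reference channel is available.
    """
    eog = eog - eog.mean()
    eeg = eeg - eeg.mean(axis=1, keepdims=True)
    # Per-channel leakage coefficient b = <eeg, eog> / <eog, eog>
    b = eeg @ eog / (eog @ eog)
    return eeg - np.outer(b, eog)

rng = np.random.default_rng(0)
n = 5000
eog = rng.standard_normal(n)           # stand-in blink reference
brain = rng.standard_normal((3, n))    # stand-in brain activity
leak = np.array([0.8, 0.5, 0.2])       # hypothetical per-channel leakage
eeg = brain + np.outer(leak, eog)
clean = regress_out_eog(eeg, eog)
# After regression, the residual EOG correlation is numerically zero.
print(np.max(np.abs(clean @ eog)))
```

The regression removes everything time-locked to the reference, including genuine frontal brain activity that happens to correlate with it, which is exactly why ICA is preferred for the real pipeline.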
Round 2: I run a second, differently weighted ICA, with weights optimised to extract blink frequency and duration while suppressing everything else, giving me a “metronome” to sync the gaze data recorded separately by the eye tracker and the 3D stereo camera. I save the data.
Round 3: I run a third, differently weighted ICA, with weights optimised to extract the hand and torso movement, so I have a “metronome” to sync the separately recorded EMG time-course data to the cleaned brain data and the blink data.
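The “metronome” idea in rounds 2 and 3 boils down to one operation: find the time offset that best aligns two event trains (blinks seen in the EEG vs. blinks seen by the eye tracker, or movement bursts in the EEG vs. in the EMG). A minimal cross-correlation sketch, assuming both streams are resampled to a common rate and events are encoded as binary spikes:

```python
import numpy as np

def estimate_offset(train_a, train_b):
    """Lag (in samples) that best aligns two binary event trains.

    Positive result: train_a's events occur later than train_b's.
    Assumes both trains share one sampling rate; in a real pipeline
    you would resample both streams to a common rate first.
    """
    corr = np.correlate(train_a, train_b, mode="full")
    return int(np.argmax(corr)) - (len(train_b) - 1)

# Stream a: blink events detected in the EEG (100 Hz event grid).
fs = 100
a = np.zeros(1000)
a[[100, 400, 900]] = 1.0
# Stream b: the same blinks as seen by the eye tracker, whose clock
# started 0.3 s (30 samples) later.
b = np.zeros(1000)
b[[130, 430, 930]] = 1.0
lag = estimate_offset(b, a)
print(lag / fs)  # 0.3 -> shift the eye-tracker stream back by 0.3 s
```

The same call syncs the EMG stream in round 3; only the event trains change.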
This way I'll have optimally cleaned and synced data that I can use to build a fatigue profile of the subject prior to the task.
This data can then be used to optimise workplaces where collaborative industrial robots and humans work together.
But maybe I'm just overthinking the entire thing, as I'm new to cognitive neuroscience.
Assumption 2: eye tracking, blink data and alpha waves each capture a different aspect of the same fatigue, but combined they may provide deeper insight into what to optimise in the assembly process, so the subject achieves better performance.
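The alpha-wave leg of assumption 2 is usually operationalised as band power in the 8–12 Hz range, tracked over time. A minimal FFT sketch (a stand-in for a proper Welch estimate, on a synthetic channel rather than real EEG):

```python
import numpy as np

def band_power(x, fs, fmin, fmax):
    """Mean spectral power of x in [fmin, fmax] Hz via the FFT.

    A coarse stand-in for a Welch estimate; enough to track relative
    alpha (8-12 Hz) power over time as one fatigue indicator.
    """
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].mean()

fs = 250
t = np.arange(0, 4, 1 / fs)
# Synthetic channel: a strong 10 Hz (alpha) tone plus a weak 25 Hz tone.
x = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)
alpha = band_power(x, fs, 8, 12)
beta = band_power(x, fs, 13, 30)
print(alpha > beta)  # True: the alpha band dominates this trace
```

Computing this per sliding window, alongside the blink rate and gaze metrics from the other streams, would give the three time-aligned fatigue indicators the assumption describes.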