Low Accuracy in EEG Emotion Recognition

Hi all, I’m working on emotion recognition from EEG signals (246 channels), but each subject has a different electrode setup. Only 24 electrodes are common to all subjects, and using them alone gives very poor results. So I tried grouping electrodes by brain regions and applying PCA on the 24 channels within each region to get representative signals. Then I extracted features like mean, variance, energy, kurtosis, skewness, and IQR for classification. Still, my accuracy is quite low. Could this be due to channel variability, or am I missing an important preprocessing or feature extraction step between subjects?
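For concreteness, here is a minimal sketch of the pipeline described above, using NumPy/SciPy. The region grouping of the 24 shared channels is hypothetical (the real channel-to-region mapping depends on the montage), and the PCA is taken as the first principal component per region:

```python
import numpy as np
from scipy.stats import kurtosis, skew, iqr

# Hypothetical grouping of the 24 shared channels into 4 regions;
# the actual channel-to-region assignment depends on the montage.
regions = {
    "frontal":   [0, 1, 2, 3, 4, 5],
    "central":   [6, 7, 8, 9, 10, 11],
    "parietal":  [12, 13, 14, 15, 16, 17],
    "occipital": [18, 19, 20, 21, 22, 23],
}

def region_pca_signal(eeg, idx):
    """Project one region's channels onto their first principal component.

    eeg: (n_channels, n_samples) array; idx: channel indices of the region.
    """
    x = eeg[idx] - eeg[idx].mean(axis=1, keepdims=True)  # center each channel
    # SVD of the (channels x samples) block: the top right-singular
    # vector is the first principal-component time course.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[0]

def features(sig):
    """Mean, variance, energy, kurtosis, skewness, IQR of one signal."""
    return np.array([sig.mean(), sig.var(), np.sum(sig**2),
                     kurtosis(sig), skew(sig), iqr(sig)])

rng = np.random.default_rng(0)
eeg = rng.standard_normal((24, 1000))        # fake 24-channel epoch
fv = np.concatenate([features(region_pca_signal(eeg, idx))
                     for idx in regions.values()])
print(fv.shape)  # 4 regions x 6 features -> (24,)
```

So each epoch is reduced to a 24-dimensional feature vector (4 regions × 6 statistics) before classification.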

I wonder how each subject can have a different “electrode setup” with 246 channels :thinking:

That is a very large number of channels. If it corresponds to a similar number of actual sensors on the head, then you should have a significant number of sensors that overlap closely enough in space to count as the “same position”.

Could you share more detail on why you believe only 24 electrodes are common to all subjects?
