EEG Motor Imagery Classification

Hi all,

I have a big question regarding training deep-learning models for Motor Imagery classification.

  1. For training and validation, should we concatenate the data from all individuals? For example, the Competition IV 2a dataset has 9 subjects. Should we concatenate all of their data and then train our network, or pick a single participant and train only on that participant's data?
  2. Is the accuracy reported in most of the literature the training accuracy or the validation accuracy? I assume it is the validation accuracy, but how is it defined? It varies from epoch to epoch and sometimes never settles, so is it the average over all epochs or over the last 100 epochs of training?

Thanks in advance,
Best

Does anyone have any ideas about this point?

Hi,

  1. It depends on what you want to achieve. The task for the competition was originally to train a model on Subject X, Session 1 and then test it on Subject X, Session 2. This means you train 9 models; each model is trained and tested on the same subject. You can also choose to make it harder for yourself and train on subjects 1-8 and test on subject 9, to see if you can build an “out of the box” model. For that you can use leave-one-subject-out cross-validation. I recommend following the design from the competition, though (see the sketch after this list).

  2. In general, the literature should (and probably does) report validation accuracies. In this case that would be the accuracy on session 2 for each subject. You can look at the average over the 9 subjects, but it’s also good to look at the individual accuracies; you’ll find it works much better for some subjects than for others.
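To make this concrete, here is a rough sketch of both designs. `load_subject_session(subject, session)` is just a placeholder for however you read the Competition IV 2a files (e.g. via MNE or MOABB), and the scikit-learn pipeline is only a stand-in for your deep model. The within-subject loop also prints the per-subject accuracies and their mean, which is what point 2 is about.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def load_subject_session(subject: int, session: str):
    """Placeholder loader: return (X, y) for one subject and one session,
    X with shape (n_trials, n_channels, n_samples), y with shape (n_trials,).
    Replace this with your own Competition IV 2a reader."""
    raise NotImplementedError


def within_subject_evaluation(subjects=range(1, 10)):
    """Competition design: train on session 1 ('T'), test on session 2 ('E'),
    separately for each of the 9 subjects."""
    accuracies = {}
    for subject in subjects:
        X_train, y_train = load_subject_session(subject, "T")
        X_test, y_test = load_subject_session(subject, "E")

        # Simple classifier on flattened trials; a deep model would
        # consume the (trials, channels, samples) array directly.
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        clf.fit(X_train.reshape(len(X_train), -1), y_train)
        accuracies[subject] = clf.score(X_test.reshape(len(X_test), -1), y_test)
        print(f"Subject {subject}: {accuracies[subject]:.3f}")

    print(f"Mean over subjects: {np.mean(list(accuracies.values())):.3f}")
    return accuracies


def leave_one_subject_out_splits(subjects=range(1, 10)):
    """Cross-subject design: concatenate 8 subjects for training,
    hold the remaining subject out for testing."""
    for held_out in subjects:
        train = [load_subject_session(s, "T") for s in subjects if s != held_out]
        X_train = np.concatenate([X for X, _ in train])
        y_train = np.concatenate([y for _, y in train])
        X_test, y_test = load_subject_session(held_out, "T")
        yield held_out, (X_train, y_train), (X_test, y_test)
```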


@ivopascal
Thank you so much for your comprehensive response. I have checked the evaluation sets for all 9 subjects, but the labels for every participant are all 7, which corresponds to the left action. Am I right?