My colleague and I are running into a problem when saving the epochs of a participant. Although all the epochs are initially good, at the moment of saving, the epochs.save function removes 2 epochs.
We checked that:
epochs.drop_log contains only empty lists
epochs.drop_bad() doesn't drop any epoch
We set annotations to None by running: epochs.set_annotations(None).
When saving, no message indicates that bad epochs were dropped, and yet, when we load the epochs back, the last two epochs are gone.
Something that seems to be at the origin of this problem is that we use "STI008" as the stim channel and that there is a trigger in the channel "STI101" that happens before the last 2 triggers of STI008. These last 2 triggers are the ones that are later removed automatically by the saving function.
We think this is a bug. Here is a snippet of code that should allow you to reproduce the problem (MNE version 0.19.0). How can we send you the problematic data? It is on the NeuroSpin server.
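The original snippet is not reproduced here, but a minimal sketch of the setup would look something like this (the file path, tmin, and baseline are placeholders; only the STI008 stim channel and the tmax of 0.6 s are taken from this thread):

```python
import mne

# Placeholder path -- the actual data are on the NeuroSpin server
raw = mne.io.read_raw_fif("participant_raw.fif", preload=False)

# Events are taken from STI008 rather than the default STI101
events = mne.find_events(raw, stim_channel="STI008")

# tmin and baseline are placeholders; tmax=0.6 s is the value from the thread
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.6,
                    baseline=(None, 0), preload=False)

print(len(events), len(epochs))      # counts still match: data not loaded yet
epochs.save("participant-epo.fif")   # data are loaded here, 2 epochs get dropped
```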
Since your tmax value is 0.6 seconds, the last two events are too close to the end of the raw object to be successfully extracted into epochs. If you did epochs.load_data() (or passed preload=True when you created the epochs), then you would have seen this:
Loading data for 736 events and 651 original time points ...
2 bad epochs dropped
and then if you did epochs.plot_drop_log() or viewed epochs.drop_log in the console, you would see that two epochs (the last two, in fact) were dropped for reason TOO_SHORT. The crucial step you probably missed is that you didn't load the epochs into memory before checking the drop log. Until the data are loaded into memory, epoch-by-epoch rejection does not happen (whether based on peak-to-peak signal amplitude, duration, annotations, or anything else).
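In code, continuing from the reproduction sketch above, the diagnosis would look roughly like this:

```python
# Loading the data triggers the epoch-by-epoch rejection
epochs.load_data()            # prints "2 bad epochs dropped"

# The drop log now records why each epoch was rejected;
# the last two entries give 'TOO_SHORT' as the reason
print(epochs.drop_log[-2:])

# Graphical summary of the drop reasons
epochs.plot_drop_log()
```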
So we stopped the recording too early and cut part of the data we wanted to analyze.
We indeed checked that, when loading the epochs data, drop_log now shows TOO_SHORT for the last 2 epochs.
Thanks a lot for your very helpful answer and for taking the time to explain everything in detail.
I have a related question on the topic of epochs being dropped because the recording started too late or was stopped too early. If I add metadata to mne.Epochs and epochs then get dropped because the full epoch length can't be extracted, does the number of rows in the metadata automatically get updated? I assume it does, but I just want to make sure I am not missing something here.
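As far as I understand, the metadata rows are dropped together with their epochs, so the two stay aligned. A quick sketch to verify this (reusing the raw and events objects from the snippet above, with a made-up metadata column):

```python
import numpy as np
import pandas as pd
import mne

# One metadata row per event (the column here is made up)
metadata = pd.DataFrame({"trial_number": np.arange(len(events))})

epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.6,
                    metadata=metadata, preload=True)  # too-short epochs dropped here

# The metadata rows of the dropped epochs are removed as well,
# so the row count always matches the number of surviving epochs
assert len(epochs.metadata) == len(epochs)
print(epochs.metadata.tail())
```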