Epochs.save drops some epochs

  • MNE-Python version: 0.19.0
  • operating system: Linux

Dear MEG developers,

My colleague and I have run into a problem while saving one participant's epochs. All the epochs are initially good, but at the moment of saving, epochs.save removes 2 of them.

We checked that:

  • epochs.drop_log contained only empty lists

  • epochs.drop_bads() doesn’t drop any epoch

We set annotations to None by running: epochs.set_annotations(None).

When saving, no message indicates that bad epochs were dropped, and yet when we reload the epochs, the last two epochs are missing.

Something that seems to be at the origin of this problem is that we use "STI008" as the stim channel, and that there is a trigger on channel "STI101" that occurs before the last 2 triggers on STI008. Those last 2 triggers are the ones that are removed automatically by the saving function.

We think this is a bug. Here is a snippet of code that should allow you to reproduce the problem (MNE version 0.19.0). How can we send you the problematic data? It is on the NeuroSpin server.

Thank you very much.


Fosca & Samuel

import mne

data_folder = "/neurospin/meg/meg_tmp/ABSeq_Samuel_Fosca2019/data/MEG/"
data_path = data_folder + "/data_bug_mne/bug_data_raw.fif"
tmin = -0.050
tmax = 0.600

raw = mne.io.read_raw_fif(data_path, preload=True)

events = mne.find_events(raw, stim_channel="STI008")

epochs = mne.Epochs(raw, events, None, tmin, tmax,
                    proj=True, baseline=None,
                    preload=False, decim=1)

data_path_save = data_folder + "/data_bug_mne/test-epo.fif"
epochs.save(data_path_save)

Hello @Fosca and welcome to the forum!

This version of MNE-Python is ancient. Please update to 0.23 and see if the problem persists.


Dear @richard,

Samuel and I just checked that the problem persists with MNE 0.23.

Thanks in advance for your help.


Thanks for including a code sample that yields the error. Can you provide a download link for the problematic file?

Thanks a lot @drammock and sorry for the late reply. Here is the link to the data. Let me know if you have any problem loading it.

This is unrelated to saving and re-loading the epochs.

Let’s look at your last 3 events, and how far away they are from the end of the file:

(raw.last_samp - events[-3:, 0]) / raw.info['sfreq']
array([0.796, 0.544, 0.293])
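This check is just arithmetic; here is a self-contained sketch of it with hypothetical numbers (the sampling rate and sample indices below are made up to reproduce the same distances, since I'm not pasting your actual file here):

```python
import numpy as np

# Hypothetical numbers mirroring the situation in this thread:
sfreq = 1000.0                                       # assumed sampling rate
last_samp = 100_000                                  # assumed last sample of the raw file
event_samples = np.array([99_204, 99_456, 99_707])   # last three event onsets
tmax = 0.6                                           # epoch end, as in your script

# Seconds of data remaining after each event onset:
remaining = (last_samp - event_samples) / sfreq
print(remaining)            # → [0.796 0.544 0.293]

# An epoch can only be extracted if at least tmax seconds remain:
keep = remaining >= tmax
print(keep)                 # → [ True False False]
```

The last two entries are False, which is exactly why those two epochs cannot be extracted.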

Since your tmax value is 0.6 seconds, the last two events are too close to the end of the raw object to be successfully extracted into epochs. If you did epochs.load_data() (or passed preload=True when you created the epochs) then you would have seen this:

Loading data for 736 events and 651 original time points ...
2 bad epochs dropped

and then if you did epochs.plot_drop_log() or viewed epochs.drop_log in the console, you would see that two epochs (the last two, in fact) were dropped for reason TOO_SHORT. The crucial step that you missed is probably that you didn’t load the epochs into memory before checking the drop log. Until loaded into memory, the epoch-by-epoch rejection does not happen (whether based on peak-to-peak signal amplitude, duration, annotations, whatever).

Dear @drammock,

So we stopped the recording too early and cut part of the data we wanted to analyze.
We indeed checked that, when loading the epochs data, the drop_log now shows “Too short” for the last 2 epochs.

Thanks a lot for your very helpful answer and for taking the time to explain everything in detail.

All the best,

