When computing a TFR using `mne.time_frequency.tfr_multitaper()` (and possibly other related functions), I was surprised that (1) the function reported some epochs as bad and (2) these bad epochs were dropped in-place from the `mne.Epochs` object I passed to that function.
Here’s an example which demonstrates the behavior using the EEGLAB sample data set:
```python
import numpy as np
import mne
from mne.time_frequency import tfr_multitaper

raw = mne.io.read_raw("Eeglab_data.set", preload=True)
events, _ = mne.events_from_annotations(raw, dict(square=1, rt=2))
epochs = mne.Epochs(raw, events, dict(square=1), -1.5, 2.5, baseline=None)
tfr = tfr_multitaper(epochs, freqs=np.arange(1, 51), n_cycles=np.arange(1, 51),
                     average=False, return_itc=False)
```
This drops two “bad” epochs, but I have no idea why these two epochs are considered bad. The function also doesn’t say which of the 80 epochs were dropped. Most striking, though, is the fact that after the call to `tfr_multitaper()`, the two bad epochs have been dropped from the original `epochs` object.
I did notice that setting `tmin` and `tmax` to -1 and 2 (instead of -1.5 and 2.5), respectively, does not create any bad epochs. Again, I have no idea why.
In summary, I’d be glad if someone could explain (1) how bad epochs are defined in this context and (2) why they are also dropped from the original `mne.Epochs` object.