Then I used to enter the bad channels manually, like so:
badsCHs = input("enter bad channels...").split(sep=',')
if len(badsCHs[0]) > 0:
    reref.info['bads'].extend(badsCHs)
This is slow, though, and I already know my criterion: a channel is bad if it caused at least 5% of my epochs to be dropped. I'd like to apply that automatically, but I can't figure out where this information is stored. The plot_drop_log() method obviously uses exactly this, but I don't understand where it lives or how to access it. I looked at .drop_log, but then I guess I'd have to compute the statistics myself? Surely, since they get plotted, they must already be available somewhere, and I shouldn't need to compute them again?
I found this question while searching for a way to automate this process. @sappelhoff gave a great pointer to the relevant code to adapt. Here's a routine that takes a continuous EEG recording named rawCorrected (which has already undergone filtering and ocular artifact rejection), marks channels responsible for an excessive proportion of rejected epochs as 'bads', and then re-runs the epoching.
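For context, `epochs.drop_log` has one entry per epoch in the original event list: an empty tuple for a kept epoch, otherwise the names of the channels (or bookkeeping reasons such as 'IGNORED') that triggered the drop. A minimal sketch of the overall drop statistic on a made-up drop log (the channel names here are invented):

```python
# Made-up drop log in the same shape MNE uses: one tuple per epoch
drop_log = ((), ('EEG 001',), (), ('EEG 001', 'EEG 007'), ('IGNORED',))

# Epochs whose only drop reason is 'IGNORED' don't count toward the total
counted = [entry for entry in drop_log if entry != ('IGNORED',)]
n_dropped = sum(1 for entry in counted if entry)
print(f'{n_dropped / len(counted):.0%} dropped')  # → 50% dropped
```

This mirrors what `epochs.drop_log_stats()` reports (as a percentage), which is why the routine below can use that method directly for the overall check.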
import numpy as np  # Needed below for array-based channel filtering
import mne
from collections import Counter  # To count how often each channel caused a drop
# Bad channel detection settings
# Drop reasons to skip when scoring channels (bookkeeping entries, not channels)
ignore = ('IGNORED',)
# Minimum proportion of rejected epochs needed to identify a channel as bad
threshold = 0.20
# events and event_dict were defined above; tmin and tmax are the times, relative to
# each event, at which each epoch starts and ends. By default, Raw and Epochs data
# aren't loaded into memory (they're accessed from disk only when needed), but here
# we force loading with `preload=True` so that the rejection criteria are applied
# immediately and we can inspect the results
epochs = mne.Epochs(rawCorrected, events, event_dict, tmin, tmax,
                    baseline=baseline, picks=('eeg', 'eog', 'stim'), preload=True,
                    reject=reject, flat=None, proj=True, decim=1,
                    on_missing='ignore', verbose=None)
# Some channels can have a *lot* of bad data. Rather than letting them tag all epochs as bad,
# perform an accounting of these channels and mark them as "bad" before re-epoching.
# Only correct if there are more epochs dropped than our threshold permits
if epochs.drop_log_stats() / 100 > threshold:
    # Report the percentage of rejected epochs before reprocessing
    print(f'{epochs.drop_log_stats() / 100:.0%} of epochs were dropped; '
          'scanning for bad channels and re-epoching...')
    drop_log = epochs.drop_log
    # Number of epochs not dropped for an ignored reason (the denominator)
    n_epochs_before_drop = len([x for x in drop_log
                                if not any(y in ignore for y in x)])
    # How many drops each channel is responsible for
    scores = Counter([ch for d in drop_log for ch in d if ch not in ignore])
    ch_names = np.array(list(scores.keys()))
    props = np.array(list(scores.values())) / n_epochs_before_drop
    # Identify channels responsible for a larger-than-acceptable proportion of drops
    badChannels = ch_names[props > threshold]
    # Mark them as bad in rawCorrected
    rawCorrected.info['bads'].extend(badChannels)
    # Re-epoch using the same parameters as above, now with the bads marked
    epochs = mne.Epochs(rawCorrected, events, event_dict, tmin, tmax,
                        baseline=baseline, picks=('eeg', 'eog', 'stim'), preload=True,
                        reject=reject, flat=None, proj=True, decim=1,
                        on_missing='ignore', verbose=None)
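The channel-scoring part of the routine can be tested in isolation on a synthetic drop log. The `find_bad_channels` helper below is hypothetical (not part of MNE); it just packages the counting logic above:

```python
from collections import Counter
import numpy as np

def find_bad_channels(drop_log, threshold=0.20, ignore=('IGNORED',)):
    """Return channels that caused more than `threshold` of epochs to be dropped."""
    # Epochs dropped only for reasons in `ignore` don't count toward the total
    n_epochs = len([entry for entry in drop_log
                    if not any(reason in ignore for reason in entry)])
    # How many drops each channel is responsible for
    scores = Counter(ch for entry in drop_log for ch in entry if ch not in ignore)
    ch_names = np.array(list(scores.keys()))
    props = np.array(list(scores.values())) / n_epochs
    return list(ch_names[props > threshold])

# Synthetic drop log: of 9 counted epochs, 'Fp1' caused 3 drops, 'Cz' caused 1
drop_log = ((), (), ('Fp1',), (), ('Fp1', 'Cz'), (), (),
            ('Fp1',), (), ('IGNORED',))
print(find_bad_channels(drop_log))  # → ['Fp1']
```

With a threshold of 0.20, 'Fp1' (3/9 ≈ 0.33) is flagged while 'Cz' (1/9 ≈ 0.11) is not, which matches the behavior of the inline version above.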