Splitting epoch data

  • MNE version: 1.9
  • Operating system: macOS 15

I have managed to read my EEG data with mne.io.read_raw_egi and to set up the montage with raw_data_t1.set_montage. From talking to the PI, my understanding is that the session starts with an initial setup, then the recording proper begins (denoted by DIN1), then a transition of some sort happens during which the recording keeps running, and then the second part of the experiment begins (denoted by DIN2). When I create the epochs I use 1-second fixed-length events, per the PI's instructions, which I have done with mne.make_fixed_length_events. The PI now wants the epochs restricted to the times the experiment was actually running: say DIN1 to about 3 minutes (180 seconds) later, and DIN2 to the end, basically skipping the gap while the transition was going on. What is the best way to achieve this? I was thinking I could either create separate raw objects based on the two events and then create the epochs (which seems cumbersome; I sketch this idea after my code below), or create the epochs first and then split the data somehow. Any ideas?

    input_fname_t1 = data_path + t1_file

    # Load the raw data file
    raw_data_t1 = mne.io.read_raw_egi(input_fname_t1, eog=None, misc=None, include=None, exclude=None, preload=True, channel_naming='E%d', events_as_annotations=True, verbose=None)

    # Onsets of the two experiment blocks
    t1_DIN1 = mne.find_events(raw_data_t1, stim_channel='DIN1', output='onset')
    t1_DIN2 = mne.find_events(raw_data_t1, stim_channel='DIN2', output='onset')

    raw_data_t1.set_montage(montage, match_alias=True, match_case=False, on_missing='warn')

    # 1-second fixed-length events spanning the whole recording
    epoch_events = mne.make_fixed_length_events(raw_data_t1, id=1, start=0, stop=None, duration=1.0, first_samp=True)

    epoch_t1 = mne.Epochs(raw_data_t1, events=epoch_events, event_id=None, tmin=0, tmax=1, baseline=(None, None), picks=None, preload=True, reject=None, flat=None, proj=True, decim=1, reject_tmin=None, reject_tmax=None, detrend=None, on_missing='raise', reject_by_annotation=False, metadata=None, event_repeated='error', verbose=None)
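
The "create raw objects based on the two events" idea I mentioned would look something like this (rough, untested sketch building on the variables above; the onset conversion assumes t1_DIN1 and t1_DIN2 each contain at least one event):

    # Rough, untested sketch of the "two raw objects" idea.
    # Convert the DIN onsets from samples to seconds relative to the data start.
    sfreq = raw_data_t1.info['sfreq']
    din1_t = (t1_DIN1[0, 0] - raw_data_t1.first_samp) / sfreq
    din2_t = (t1_DIN2[0, 0] - raw_data_t1.first_samp) / sfreq

    # First block: DIN1 to 180 seconds later; second block: DIN2 to the end.
    raw_block1 = raw_data_t1.copy().crop(tmin=din1_t, tmax=din1_t + 180.0)
    raw_block2 = raw_data_t1.copy().crop(tmin=din2_t, tmax=None)

    # 1-second epochs would then be made separately for each block.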

Thanks

btw I tried doing something like …

    # Onset of DIN1 in seconds (the /1000 assumes a 1000 Hz sampling rate;
    # raw_data_t1.info['sfreq'] would be the general way to convert)
    t1_onset_DIN1 = t1_DIN1[0][0] / 1000
    print(f"Onset for t1_DIN1: {t1_onset_DIN1}")

    # Select epochs by index: with 1-second epochs, index ~ seconds into the recording
    test = epoch_t1[round(t1_onset_DIN1):180]

My thinking is this would go from the onset of DIN1 (approx. 16 seconds into the recording) to the 3-minute mark, but I am unsure if this is indeed what I need to do, i.e. whether it keeps all epochs between DIN1 and the 180-second mark.
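
In other words, the full version of what I have in mind would be something like this (untested; it assumes the 1-second epochs above, so that epoch index roughly lines up with seconds, and that no epochs were dropped):

    # Untested sketch of the index-based idea: with 1-second epochs,
    # the epoch index roughly corresponds to seconds into the recording.
    sfreq = raw_data_t1.info['sfreq']
    din1_idx = round(t1_DIN1[0][0] / sfreq)   # first epoch after DIN1
    din2_idx = round(t1_DIN2[0][0] / sfreq)   # first epoch after DIN2

    block1 = epoch_t1[din1_idx:180]   # DIN1 to the 3-minute mark
    block2 = epoch_t1[din2_idx:]      # DIN2 to the end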

I think you got it right. You can always use raw.plot(events=events) to visualize the continuous data with the event markers annotated (pay attention to the DIN1 and DIN2 channels), and compare that with epochs.plot() to see which epochs you have extracted and make sure you have the right segments.
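
For example, something along these lines (untested; adjust the variable names to your script, and I'm assuming the event arrays come from your find_events calls):

    import numpy as np

    # Stack the DIN1/DIN2 events so both sets of markers show on the raw plot
    din_events = np.concatenate([t1_DIN1, t1_DIN2])
    raw_data_t1.plot(events=din_events)

    # Then check which 1-second segments actually ended up in the Epochs object
    epoch_t1.plot()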

Does this return a figure I can view? My apologies for the question; I am running Python from the command line and trying to figure out all the details.

If you’re running from the command line (instead of an interactive console like IPython), then add the following parameter: epochs.plot(block=True). The script will show the plot and only continue after you have closed it.

Thanks. That seems to have almost done the job, but I got this warning:

/mne_qt_browser/_pg_figure.py:4903: RuntimeWarning: Precision loss occurred in moment calculation due to catastrophic cancellation. This occurs when the data are nearly identical. Results may be unreliable.
  z = zscore(data, axis=1)

I would like to see the data so I can understand what is happening as it is processed; this is a bit of a mystery box so far. Thanks for the help though.