- MNE version: 1.9.0
- operating system: macOS Sonoma 14.5
Hi, I've been working on an experiment with two conditions (Speech and Listen) recorded in the same session. During the recording, we used different event codes to mark the relevant moments in each block of each condition; for example, we have codes that mark the start and end of every block.
The goal is to remove the interblock periods: crop the segments of the signal corresponding to each condition and concatenate them into two separate Raw objects (one per condition).
The code I am using for this is as follows:

```python
import os

import numpy as np
import matplotlib.pyplot as plt
import mne
# Recording name
recording_name = 'S08_6Hz_convert'
# Load the data
raw_path = os.path.join('RECORDINGS', recording_name + '.cdt')
raw = mne.io.read_raw_curry(raw_path, preload=True, verbose=False)
print(raw)
# Extract events
events, _ = mne.events_from_annotations(raw)
print('\n\nTotal number of events: {}\n'.format(len(events)))
# Plot events
mne.viz.plot_events(events, sfreq=raw.info['sfreq'])
plt.pause(0.5)
plt.show(block=True)
# Event codes for the start/end of each block
start_speak = events[events[:, 2] == 5, 0]    # START speech
end_speak = events[events[:, 2] == 6, 0]      # END speech
start_listen = events[events[:, 2] == 7, 0]   # START listen
end_listen = events[events[:, 2] == 8, 0]     # END listen
# Lists to hold the cropped segments of the signal
segment_speak = []
segment_listen = []
# Iterate over the start/end markers to crop the segments
# (assumes matching counts of start and end markers per condition)
sfreq = raw.info['sfreq']
for i in range(len(start_speak)):
    # Event samples are absolute, so subtract first_samp before
    # converting to times (crop expects times relative to the data start)
    start_time = (start_speak[i] - raw.first_samp) / sfreq
    end_time = (end_speak[i] - raw.first_samp) / sfreq
    segment_s = raw.copy().crop(tmin=start_time, tmax=end_time)
    segment_speak.append(segment_s)
    # Same conversion for the listen blocks
    start_time_listen = (start_listen[i] - raw.first_samp) / sfreq
    end_time_listen = (end_listen[i] - raw.first_samp) / sfreq
    segment_l = raw.copy().crop(tmin=start_time_listen, tmax=end_time_listen)
    segment_listen.append(segment_l)
# Concatenate the segments into one continuous Raw per condition
raw_comb_speak = mne.concatenate_raws(segment_speak)
raw_comb_listen = mne.concatenate_raws(segment_listen)
# Save if needed (MNE expects raw filenames to end in "_raw.fif")
output_folder = os.path.join('RECORDINGS', 'EVENTS')
os.makedirs(output_folder, exist_ok=True)
raw_comb_speak.save(os.path.join(output_folder, 'S08_6Hz_SPEAK_raw.fif'), overwrite=True)
raw_comb_listen.save(os.path.join(output_folder, 'S08_6Hz_LISTEN_raw.fif'), overwrite=True)
# Plot the reconstructed signals
raw_comb_speak.plot(title='SPEAK SIGNAL')
plt.show(block=True)
raw_comb_listen.plot(title='LISTEN SIGNAL')
plt.show(block=True)
# Extract events from reconstructed signal SPEAK
events_sp, _ = mne.events_from_annotations(raw_comb_speak)
print('\n\nTotal number of events: {}\n'.format(len(events_sp)))
# Plot events
mne.viz.plot_events(events_sp, sfreq=raw_comb_speak.info['sfreq'])
plt.pause(0.5)
plt.show(block=True)
# Extract events from reconstructed signal LISTEN
events_lis, _ = mne.events_from_annotations(raw_comb_listen)
print('\n\nTotal number of events: {}\n'.format(len(events_lis)))
# Plot events
mne.viz.plot_events(events_lis, sfreq=raw_comb_listen.info['sfreq'])
plt.pause(0.5)
plt.show(block=True)
```
This code finds the event markers that define the crop boundaries, crops a copy of the raw data for each block, and then concatenates the segments of each condition into a new signal. Finally, it plots both the raw data and the events for each reconstructed signal.
The problem seems to be that the events in the listen-condition signal do not preserve their timing: there is a gap of about 3 minutes at the beginning of the signal in which no event markers appear. This does not happen with the speech signal, which correctly preserves the event timing and shows the events.
These results are confusing to me because when I plot the raw data of the listen signal, you can see events from the very beginning of the data, so I'm not sure why the extracted events start so late.
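For reference, here is a minimal check I can run after the concatenation step that shows the offset directly; it just prints the first few annotation onsets (in seconds) of each reconstructed signal:

```python
# Minimal check: print the first few annotation onsets (in seconds);
# for the listen signal these only start about 3 minutes in
print('Speak onsets: ', raw_comb_speak.annotations.onset[:5])
print('Listen onsets:', raw_comb_listen.annotations.onset[:5])
# concatenate_raws also inserts 'BAD boundary' / 'EDGE boundary'
# annotations at the seams, which show up in the descriptions
print('Descriptions:', set(raw_comb_listen.annotations.description))
```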
As an example, here are the plots for the listen condition with data from one of our subjects; I observe the same issue in other subjects.
I also tried mne.concatenate_events to see whether that would solve the problem, but I got the same result (a sketch of that attempt is below). I was wondering if there is something I am doing wrong; if anyone knows how to fix it, I would appreciate the help.
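The attempt looked roughly like this (a sketch: it reuses the segment_listen list from the code above and re-extracts each segment's events from its annotations):

```python
# Sketch of the concatenate_events attempt: extract the events of each
# cropped segment, then combine them using the segments' sample ranges
events_list = [mne.events_from_annotations(seg)[0] for seg in segment_listen]
first_samps = [seg.first_samp for seg in segment_listen]
last_samps = [seg.last_samp for seg in segment_listen]
events_listen_cat = mne.concatenate_events(events_list, first_samps, last_samps)
```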
Thanks in advance!