- MNE version: e.g. 1.7.0
- Operating system: e.g. macOS Sonoma 14.5
Hi! I’m currently working on code to extract segments from a recording of an experiment with two conditions. The experiment was run in blocks, each block consisting of a number of trials of one condition followed by the same number of trials of the other condition.
The idea is to extract the segments corresponding to each condition from all the blocks and then concatenate them into two separate raw signals.
I tried several methods, but the number of event markers in the results did not make sense. I use markers that denote the beginning of each trial, with a distinct marker for each condition. In my original raw signal I have 200 of these markers per condition in total. However, after splitting the signal, these 200 markers are not preserved correctly and I get inconsistent counts.
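For context, this is roughly how I count the markers per condition from an MNE-style events array (shape `(n_events, 3)`, last column = event id); the ids `1` and `2` here are placeholders for my two condition markers, and the toy array is just for illustration:

```python
import numpy as np

# Toy events array in MNE convention: [sample, previous value, event id].
# Ids 1 and 2 stand in for the two condition markers (placeholders).
events = np.array([
    [1000, 0, 1],
    [2000, 0, 2],
    [3000, 0, 1],
    [4000, 0, 2],
    [5000, 0, 1],
])

# Count how many events belong to each condition.
counts = {eid: int(np.sum(events[:, 2] == eid)) for eid in (1, 2)}
print(counts)  # {1: 3, 2: 2}
```

In my real data each of the two counts should come out to 200, which is exactly what stops holding after splitting and concatenating.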
I suspected the problem was caused by using the event markers themselves to delimit the time points, which might not be working properly. So I examined the full list of event markers, noted the sample index of each point where I wanted to cut, and created the segments to concatenate from those samples directly. The code I have for that is the following:
```python
import os

import matplotlib
import matplotlib.pyplot as plt
import mne
import numpy as np

# Matplotlib configuration
matplotlib.use('Qt5Agg')
plt.ion()

# Load data and montage
recording_name = 'SAIH5julio_convert'
raw_path = os.path.join('RECORDINGS', recording_name + '.cdt')
raw = mne.io.read_raw_curry(raw_path, preload=True, verbose=False)

# Use this only if you need to crop the recording
raw.crop(tmin=105)

# Verify the maximum time of the recording
sfreq = raw.info['sfreq']
max_samples = len(raw.times)
print(f'Maximum time of the recording: {raw.times[-1]:.2f} s ({max_samples} samples)')

# (start, stop) sample indices taken from the event list for each phase
PAD = 150  # samples of padding added on each side of a segment
speak_samples = [
    (115064, 234674), (352146, 464556), (581264, 693770),
    (810530, 921559), (1043613, 1155949), (1275942, 1387366),
    (1518728, 1631276), (1751178, 1864567), (2006768, 2118953),
    (2248463, 2362601),
]
listen_samples = [
    (240764, 336040), (473640, 570401), (701531, 798405),
    (928744, 1028208), (1164304, 1263180), (1402851, 1510216),
    (1639035, 1736053), (1877680, 1975226), (2126155, 2235319),
    (2370345, None),  # last segment runs to the end; no padding added after
]

def crop_segments(raw, sample_pairs):
    """Return one padded Raw copy per (start, stop) sample pair."""
    segments = []
    for start, stop in sample_pairs:
        tmax = None if stop is None else (stop + PAD) / sfreq
        segments.append(raw.copy().crop(tmin=(start - PAD) / sfreq, tmax=tmax))
    return segments

# Segments for the speaking and listening phases
segments_speak = crop_segments(raw, speak_samples)
segments_listen = crop_segments(raw, listen_samples)

# Concatenate the segments of each phase, ensuring no overlap
raw_speak = mne.concatenate_raws(segments_speak)
raw_listen = mne.concatenate_raws(segments_listen)

# Save the concatenated signal of each phase if necessary
output_folder = 'RECORDINGS'
os.makedirs(output_folder, exist_ok=True)
raw_speak.save(os.path.join(output_folder, 'SAIH5julio_speak_raw.fif'), overwrite=True)
raw_listen.save(os.path.join(output_folder, 'SAIH5julio_listen_raw.fif'), overwrite=True)
print('Segments saved.')
```
But it turns out that the result is the same: the events in the resulting signal for each condition are messed up, and I’m not sure whether I’m missing something or misusing the cropping and concatenation methods. I was hoping someone might know a different way to solve this issue.
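For reference, here is a sanity check I ran on the hard-coded sample windows (the same start/stop samples as in the code above, interleaved in temporal order) to confirm that no two padded segments overlap, so the problem shouldn’t come from overlapping crops:

```python
# (start, stop) sample windows copied from the script above, in temporal
# order (speaking and listening segments interleaved). The last listening
# segment runs to the end of the recording, so only its start matters here.
PAD = 150
windows = [
    (115064, 234674), (240764, 336040), (352146, 464556), (473640, 570401),
    (581264, 693770), (701531, 798405), (810530, 921559), (928744, 1028208),
    (1043613, 1155949), (1164304, 1263180), (1275942, 1387366),
    (1402851, 1510216), (1518728, 1631276), (1639035, 1736053),
    (1751178, 1864567), (1877680, 1975226), (2006768, 2118953),
    (2126155, 2235319), (2248463, 2362601), (2370345, None),
]

# Each padded window must end before the next padded window starts.
no_overlap = all(
    stop + PAD < next_start - PAD
    for (_, stop), (next_start, _) in zip(windows, windows[1:])
)
print(no_overlap)  # True
```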