Epochs have different time instants

:question: If you have a question or issue with MNE-Python, please include the following info:

Platform: Windows-10-10.0.25188-SP0
Python: 3.9.1 (tags/v3.9.1:1e5d33e, Dec 7 2020, 17:08:21) [MSC v.1927 64 bit (AMD64)]
mne: 1.1.1

:page_facing_up: Please also provide relevant code snippets – ideally a minimal working example.

import mne

evokeds_array = []
evokeds_list = []
for i, epochs in enumerate(epochs_arrayAug):
    # Create the evoked response for the 'win_card' condition
    evoked = epochs['win_card'].average()
    evokeds_array.append(evoked)
    evoked.plot(titles=dict(eeg='win_card'), time_unit='s')
    evokeds_list.append(epochs['win_card'].average())
    epochs_arrayAug[i]['win_card'].plot_image(combine='mean')

grand_average = mne.grand_average(evokeds_array)

Hello everyone,

I'm getting the following error:

ValueError: <Evoked | 'win_card' (average, N=347), -0.2 – 0.998 sec, baseline -0.2 – 0 sec, 32 ch, ~196 kB> and <Evoked | 'win_card' (average, N=212), -0.2 – 0.99801 sec, baseline -0.2 – 0 sec, 32 ch, ~196 kB> do not contain the same time instants

when trying to compute the grand average of an array of Evoked objects. Apparently, the problem is that the Evoked objects have different time points. I tried cropping the epochs, but the error persisted.

When I check the tmin of the objects in the Evoked array, they differ slightly (e.g., tmin1 is -0.20000111376663554 and tmin2 is -0.20000120156012527), but they should both be -0.2.

Am I missing a step that would fix this? Is there a way to safely remap the times so that all the Evoked objects have consistent time points?

Thank you.

Hello @steevenvs and welcome to the forum!

This difference should never happen. It looks to me like you created the epochs differently, or they come from different recording devices, or they were recorded with different sampling rates.

The slight deviation from the requested values (e.g., not exactly -0.2) arises because MNE-Python crops data at sample points, not at arbitrary times, so that part is expected. But the values should be exactly identical across both sets of epochs.
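
A toy illustration of that snapping (the two sampling rates here are made-up values near 500 Hz; this is not MNE's exact internal code):

# tmin is snapped to the nearest whole sample, so two slightly different
# sampling rates yield two slightly different time grids:
for sfreq in (499.9972156, 499.9969961):  # hypothetical example rates
    first_sample = round(-0.2 * sfreq)    # -100 samples in both cases
    print(first_sample / sfreq)           # -0.2000011..., -0.2000012...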

How do you create epochs_arrayAug?
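
In the meantime, here's a quick sanity check that might narrow things down (a sketch, assuming the evokeds_array from your snippet):

import numpy as np

# Compare each Evoked's time vector against the first one
ref = evokeds_array[0]
for i, ev in enumerate(evokeds_array[1:], start=1):
    if not np.array_equal(ev.times, ref.times):
        print(f"Recording {i}: tmin={ev.times[0]:.10f}, sfreq={ev.info['sfreq']:.6f}")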

Best wishes,
Richard

(Please note, I edited & updated my response above)

Hi Richard,

Thank you for the fast reply. I recorded the data using an R-net (water-based EEG) via LSL, saved as an XDF file.

This is the snippet where I create epochs_arrayAug:

import gc

from tqdm import tqdm

event_dict = {'win_card': 2, 'loss_card': 3, 'round_ended': 5}
gc.collect()
epochs_arrayAug = []
epochs_arrayNoAug = []
for raw in tqdm(Raw):
    # Convert the annotations to events, then epoch around each event
    events_from_annot, _ = mne.events_from_annotations(raw)
    epochs_arrayAug.append(
        mne.Epochs(raw, events_from_annot, event_id=event_dict,
                   preload=True, tmin=-0.2, tmax=1.0)
    )

Before that snippet, I set the reference to a common average reference and applied a notch filter and a band-pass filter (1–15 Hz).
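
Roughly, that preprocessing would look like this (a sketch; the 50 Hz notch frequency is an assumption, adjust to your local mains frequency):

raw.set_eeg_reference('average')     # common average reference
raw.notch_filter(freqs=50)           # assumed 50 Hz mains noise
raw.filter(l_freq=1.0, h_freq=15.0)  # 1-15 Hz band-pass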

Can you identify any mistakes in this pipeline?

EDIT:
I read out the sampling frequency of each recording, and this is the result (there is still a small variation between them). I hope this is not a sampling issue.

499.99793594621343
499.99720071213505
499.997215598917
499.99699611773354
499.99707288652064
499.9970382441441
499.9971764465858
499.99697005257366
499.99701314032194
499.99681528202376
499.9968765458216
499.9970190684926
499.9969823976701
499.99708531085446
499.9969399886545
499.99713282059855
499.9973322704321
499.9970015524352
499.99708611811474
499.9970159807189
499.99618535576116
499.9967125341354
499.99667109063085
499.99689924016474
499.99682871115084
499.99685877538155
499.9968733847953
499.9969604387679
499.9969945887415
499.99681678339084

Thank you.

The variation in sampling rate is most probably the issue.

Where does this come from?

You can try to resample the raw data to a common sampling rate, e.g., 500 Hz, via Raw.resample() before epoching.
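
A minimal sketch of that approach (variable names follow the snippet above; 500 Hz is the assumed target rate):

# Bring every continuous recording to a common sampling rate before epoching
for raw in Raw:
    raw.resample(500)  # modifies the Raw object in place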

Be absolutely sure you have triple-read and understood the big fat warning box in the method documentation there. If you don’t know what it means or get stuck, please do not hesitate to ask!

Good luck,
Richard

Edit:
Or simply resample the epoched data; then you avoid the potential issues that come with resampling continuous data :slight_smile:
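
For instance (a sketch using the epochs_arrayAug list from earlier):

# Resample the already-epoched data to a common rate instead
for epochs in epochs_arrayAug:
    epochs.resample(500)  # modifies each Epochs object in place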

I need to add: if you go this route, you’ll have edge artifacts at the beginning and end of each epoch.

So maybe resampling the raw data and being super careful regarding event timing is the better choice? I honestly don’t know.