I was wondering about the best way to resample data containing a high-frequency TMS artifact. I first epoched the raw data so that the TMS events would be kept, but when I resample the epochs, a low-pass filter is applied, which makes sense in general but causes a large filter artifact around the TMS pulse. Is there any way to skip this filter?
Thank you for the suggestion. When I load the raw data using the string preload option, there isn't a problem, but when I try giving a string for preload to the epochs, I get:
Traceback (most recent call last):
File "<stdin>", line 4, in <module>
File "<string>", line 2, in __init__
File "/autofs/space/karima_001/users/alex/software/anaconda2.7/lib/python2.7/site-packages/mne-0.16.dev0-py2.7.egg/mne/utils.py", line 728, in verbose
return function(*args, **kwargs)
File "/autofs/space/karima_001/users/alex/software/anaconda2.7/lib/python2.7/site-packages/mne-0.16.dev0-py2.7.egg/mne/epochs.py", line 1960, in __init__
verbose=verbose)
File "/autofs/space/karima_001/users/alex/software/anaconda2.7/lib/python2.7/site-packages/mne-0.16.dev0-py2.7.egg/mne/epochs.py", line 386, in __init__
self.load_data() # this will do the projection
File "/autofs/space/karima_001/users/alex/software/anaconda2.7/lib/python2.7/site-packages/mne-0.16.dev0-py2.7.egg/mne/epochs.py", line 448, in load_data
self._data = self._get_data()
File "<string>", line 2, in _get_data
File "/autofs/space/karima_001/users/alex/software/anaconda2.7/lib/python2.7/site-packages/mne-0.16.dev0-py2.7.egg/mne/utils.py", line 728, in verbose
return function(*args, **kwargs)
File "/autofs/space/karima_001/users/alex/software/anaconda2.7/lib/python2.7/site-packages/mne-0.16.dev0-py2.7.egg/mne/epochs.py", line 1212, in _get_data
dtype=epoch_out.dtype, order='C')
MemoryError
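The MemoryError is easy to hit here because epoched data are held in memory as float64 (8 bytes per sample), so the in-memory array can be far larger than the on-disk file. A back-of-the-envelope sketch, with every number assumed purely for illustration:

```python
import numpy as np

# Rough memory estimate for an epochs array.
# All numbers here are assumed for illustration, not taken from the real data.
n_epochs = 500
n_channels = 300
sfreq = 5000.0        # original sampling rate
epoch_len_s = 2.0     # tmin to tmax

n_times = int(epoch_len_s * sfreq) + 1
bytes_needed = n_epochs * n_channels * n_times * 8  # 8 bytes per float64 sample
print('%.1f GB' % (bytes_needed / 1e9))
```

With these made-up numbers the array alone needs about 12 GB, before any copies made during processing.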
In order to interpolate the artifact and downsample the data, I am following recommendations that the data be in epochs. Are there any suggestions for how to avoid this error?
The raw .fif file is 30 GB and I have 60 GB of RAM. When I load the epochs, they take up more than the 60 GB of RAM I have. I have tried iterating over the epochs, loading each section and then putting them back into an EpochsArray, but then when I go to read them in I get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/autofs/space/karima_001/users/alex/software/anaconda2.7/lib/python2.7/site-packages/mne-0.16.dev0-py2.7.egg/mne/epochs.py", line 448, in load_data
self._data = self._get_data()
File "<string>", line 2, in _get_data
File "/autofs/space/karima_001/users/alex/software/anaconda2.7/lib/python2.7/site-packages/mne-0.16.dev0-py2.7.egg/mne/utils.py", line 728, in verbose
return function(*args, **kwargs)
File "/autofs/space/karima_001/users/alex/software/anaconda2.7/lib/python2.7/site-packages/mne-0.16.dev0-py2.7.egg/mne/epochs.py", line 1170, in _get_data
epoch_noproj = self._get_epoch_from_raw(idx)
File "<string>", line 2, in _get_epoch_from_raw
File "/autofs/space/karima_001/users/alex/software/anaconda2.7/lib/python2.7/site-packages/mne-0.16.dev0-py2.7.egg/mne/utils.py", line 728, in verbose
return function(*args, **kwargs)
File "/autofs/space/karima_001/users/alex/software/anaconda2.7/lib/python2.7/site-packages/mne-0.16.dev0-py2.7.egg/mne/epochs.py", line 2606, in _get_epoch_from_raw
raise RuntimeError('Correct epoch could not be found, please '
RuntimeError: Correct epoch could not be found, please contact mne-python developers
I think it has to do with resampling the epochs to a different rate than the original raw, but I was using the MNE examples section about creating an Epochs object from scratch as a template.
This was my attempt at resampling without a low-pass filter so that I could remove the artifact afterwards.
import os
import numpy as np
from mne import EpochsArray, create_info, read_epochs

epochs = read_epochs(os.path.join(os.getcwd(), subject, 'TMS-epo.fif'),
                     preload=False)
new_sfreq = 1000.0
if int(epochs.info['sfreq']) % int(new_sfreq) != 0:
    print('Sampling frequencies are not integer multiples, will not work.')
else:
    # keep every Nth sample; no low-pass filter is applied
    factor = int(epochs.info['sfreq']) // int(new_sfreq)
    times = epochs.times[::factor]
    tind = epochs.time_as_index(times)
    epochs_resampled_data = np.zeros((len(epochs.events),
                                      epochs.info['nchan'], len(times)))
    for i, epoch in enumerate(epochs):
        epochs_resampled_data[i] = epoch[:, tind]
    ch_names = [ch['ch_name'] for ch in epochs.info['chs']]
    ch_types = []
    for ch in ch_names:
        if 'EOG' in ch:
            ch_types.append('eog')
        elif 'ECG' in ch:
            ch_types.append('ecg')
        elif 'STI' in ch:
            ch_types.append('stim')
        elif 'EMG' in ch:
            ch_types.append('emg')
        else:
            ch_types.append('eeg')
    # use an info with the new sampling rate; epochs.info still carries
    # the old sfreq and would be inconsistent with the decimated data
    info = create_info(ch_names=ch_names,
                       ch_types=ch_types,
                       sfreq=new_sfreq)
    epochs = EpochsArray(epochs_resampled_data, info,
                         events=epochs.events, tmin=epochs.tmin)
    epochs.save(os.path.join(os.getcwd(), subject, 'TMS_resampled-epo.fif'))
but again, when I go to read the epochs in with preload=True, I get the RuntimeError.
I would suggest posting code in a gist (https://gist.github.com/) so that
the bug report is easy to follow. Did you try using the decim parameter
of the Epochs constructor? Did it not help?
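For what it's worth, decim keeps every Nth sample without running a low-pass filter first, which is why it avoids the ringing around the pulse. A minimal numpy sketch of the effect (plain slicing standing in for MNE itself):

```python
import numpy as np

# decim keeps every Nth sample with no low-pass filter applied,
# so a sharp TMS-like impulse stays sharp instead of ringing
sfreq = 5000.0
new_sfreq = 1000.0
decim = int(sfreq // new_sfreq)  # 5

signal = np.zeros(5000)   # 1 s of data at 5000 Hz
signal[2500] = 1.0        # unit impulse standing in for the TMS pulse

decimated = signal[::decim]  # now 1000 samples at 1000 Hz
# the impulse survives untouched at index 2500 // 5 = 500
```

The flip side is that, without the filter, frequency content above the new Nyquist rate is aliased, which is acceptable here because the artifact is interpolated away afterwards.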
The decim parameter in Epochs was what I was looking for.
I think I managed to sort out the RuntimeError as well: it looks like the last epoch didn't have enough data (the recording was stopped early), and when it was dropped there were some loading and saving errors, but I avoided these by preemptively dropping the event.
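In case it helps anyone else, preemptively dropping an event whose epoch window would run past the end of the recording can be done with a simple mask over the events array. A minimal sketch, with all sample counts hypothetical:

```python
import numpy as np

# hypothetical numbers: the recording stopped early, so the last event's
# epoch window would extend past the end of the available data
sfreq = 1000.0
n_samples = 60000         # total samples actually recorded
tmax = 0.5                # each epoch extends 0.5 s after its event

events = np.array([[10000, 0, 1],
                   [30000, 0, 1],
                   [59800, 0, 1]])  # last event too close to the end

# keep only events whose full epoch fits inside the recording
keep = events[:, 0] + int(tmax * sfreq) <= n_samples
events = events[keep]
```

The filtered events array can then be passed to the epoching step so no epoch ever needs to be dropped mid-pipeline.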