Memory issue when computing coherence in source space using an MNE inverse solution

When replicating the tutorial (mne_inverse_coherence_epoch) with custom data, which is accessible via the link below, Python throws a MemoryError.

numpy.core._exceptions.MemoryError: Unable to allocate 206. GiB for an array with shape (209786886, 66) and data type complex128

I suspect this is due to the wide frequency range considered:

fmin = (8., 13.)
fmax = (13., 30.)

When I limit the frequency range to a single band,

fmin = (8.)
fmax = (13.)

Python still raises the following:

MemoryError

What is the standard practice for addressing this issue?

The data and code to reproduce the above issue are given below.

Link to epochs-fif: https://drive.google.com/file/d/1pkFNJDInIL2ipvaBv7ftqTPFQ9p10pkA/view?usp=sharing

import mne
import os.path as op
from mne.datasets import fetch_fsaverage
from mne.connectivity import spectral_connectivity
from mne.minimum_norm import (apply_inverse_epochs,)

# Download fsaverage files
fs_dir = fetch_fsaverage(verbose=True)
subjects_dir = op.dirname(fs_dir)

# The files live in:
subject = 'fsaverage'
trans = 'fsaverage'  # MNE has a built-in fsaverage transformation
src = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif')
bem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif')


# Load the preprocessed epochs and compute the noise covariance
epochs = mne.read_epochs('test_data-epo.fif', preload=True)
cov = mne.compute_covariance(epochs, tmax=0.)
evoked = epochs['251'].average()  
# evoked.plot_joint()

fwd = mne.make_forward_solution(epochs.info, trans=trans, src=src,
                                bem=bem, eeg=True, mindist=5.0, n_jobs=-1)


inv = mne.minimum_norm.make_inverse_operator(evoked.info, fwd, cov, verbose=True)

method = "dSPM"  # use dSPM method (could also be MNE or sLORETA)


# Compute the inverse solution for each epoch.


snr = 1.0  # use lower SNR for single epochs
lambda2 = 1.0 / snr ** 2
stcs = apply_inverse_epochs(epochs, inv, lambda2, method,
                            pick_ori="normal", return_generator=True)

# Compute the coherence between sources

# fmin = (8., 13.)  # two bands: alpha and beta
# fmax = (13., 30.)
fmin = 8.  # single band: 8-13 Hz
fmax = 13.
sfreq = epochs.info['sfreq']  # the sampling frequency

coh, freqs, times, n_epochs, n_tapers = spectral_connectivity(
    stcs, method='coh', mode='fourier',
    sfreq=sfreq, fmin=fmin, fmax=fmax, faverage=True, n_jobs=-1)

Software:
mne: 0.24.dev0
OS: Ubuntu 16.04

Hardware:
RAM: 32 GB
CPU: Intel Core i7-8700K

I appreciate any suggestions.

I noticed you are using n_jobs=-1, which means that as many jobs are computed in parallel as there are CPUs. However, this also means that each job gets a copy of the data, so if you have 8 cores your memory consumption will also be 8 times the original data. Try setting n_jobs=1 to see if this works. If not, your data is simply too large to fit into memory. In that case, you need to think about how to split up the computation into several smaller chunks that each fit into memory.
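To make the suggested change concrete, it only touches the n_jobs argument of the existing spectral_connectivity call, everything else stays as in your script, for example:

coh, freqs, times, n_epochs, n_tapers = spectral_connectivity(
    stcs, method='coh', mode='fourier',
    sfreq=sfreq, fmin=fmin, fmax=fmax, faverage=True, n_jobs=1)  # single worker, no per-job copies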

Thanks for showing interest in this issue @cbrnr .

Based on your comment, I made the following changes to the code:

  1. Use a single core (n_jobs=1)
  2. Process only 8 epochs of length 4 s.
  3. Resample to 100 Hz
    3a) Further resample down to 80 Hz, the lowest feasible rate

Despite these three changes, Python still raises a MemoryError.

numpy.core._exceptions.MemoryError: Unable to allocate 206. GiB for an array with shape (209786886, 66) and data type complex128

The changes and error can be reproduced using the code below.


import mne
import os.path as op
from mne.datasets import fetch_fsaverage
from mne.minimum_norm import apply_inverse_epochs

# Download fsaverage files
fs_dir = fetch_fsaverage(verbose=True)
subjects_dir = op.dirname(fs_dir)

# The files live in:
subject = 'fsaverage'
trans = 'fsaverage'  # MNE has a built-in fsaverage transformation
src = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif')
bem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif')

# Load the epochs, keep only condition '251', and resample to reduce data size
epochs_all = mne.read_epochs('test_data-epo.fif', preload=True)
epochs = epochs_all['251']
epochs.resample(80)
print(epochs.info['sfreq'])
cov = mne.compute_covariance(epochs, tmax=0.)
fwd = mne.make_forward_solution(epochs.info, trans=trans, src=src,
                                bem=bem, eeg=True, mindist=5.0, n_jobs=1)

inv = mne.minimum_norm.make_inverse_operator(
    epochs.info, fwd, cov, verbose=True)

method = "dSPM"  # use dSPM method (could also be MNE or sLORETA)

# Compute the inverse solution for each epoch.
snr = 1.0  # use lower SNR for single epochs
lambda2 = 1.0 / snr ** 2
stcs = apply_inverse_epochs(epochs, inv, lambda2, method,
                            pick_ori="normal", return_generator=True)

# Compute the coherence between sources
from mne.connectivity import spectral_connectivity
fmin = 8.
fmax = 13.
sfreq = epochs.info['sfreq']  # the sampling frequency

coh, freqs, times, n_epochs, n_tapers = spectral_connectivity(
    stcs, method='coh', mode='multitaper',
    sfreq=sfreq, fmin=fmin, fmax=fmax, faverage=True, n_jobs=1)

To confirm, does this suggestion mean processing the smallest possible number of epochs and the narrowest frequency band at a time?

Which statement causes this error?

I cannot reproduce this because I don’t have the data file test_data-epo.fif.

Thanks for the quick reply @cbrnr .

The file test_data-epo.fif is accessible via the Google Drive link shared above.

I tried constraining the frequency range to between 8 and 13 Hz when calculating spectral_connectivity, yet Python still throws a MemoryError.

numpy.core._exceptions.MemoryError: Unable to allocate 206. GiB for an array with shape (209786886, 66) and data type complex128

OK, I ran the script and watched the memory consumption go up. I stopped it at 20 GB because I only have 16 GB of RAM, but I assume this error only occurs after a while?

I have never used this function before so I’m afraid I cannot help. Hopefully someone with more experience chimes in – @larsoner maybe?


The huge allocation is due to the upper-right triangle of the square (20484, 20484) matrix of complex128 (16-byte) data that is (effectively) created for each frequency where coherence will be computed. And here there are 16 frequencies between 8 and 13 Hz given your sample rate and number of time points: (8., 8.33333333, 8.66666667, 9., 9.33333333, 9.66666667, 10., 10.33333333, 10.66666667, 11., 11.33333333, 11.66666667, 12., 12.33333333, 12.66666667, 13.). Coherence is computed for each of these across all epochs, and then averaged. So we have about this many GB of memory required:

20484 * 20484 / 2 * 16 * 16 / 1024 / 1024 / 1024 ≈ 50
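The same back-of-the-envelope estimate as a few lines of Python, using only the numbers quoted above:

n_vertices = 20484        # ico-5 source space (both hemispheres)
n_freqs = 16              # FFT bins between 8 and 13 Hz for this recording
bytes_per_value = 16      # complex128

n_connections = n_vertices ** 2 / 2               # ~ upper triangle of the square matrix
gib = n_connections * n_freqs * bytes_per_value / 1024 ** 3
print(round(gib))  # ~50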

but I assume this error only occurs after a while?

The reason there is a steady climb on Linux rather than one huge jump at np.zeros-time is that calloc on Linux is a lazy operation, and you won’t see the memory usage go up until you actually start assigning non-zero values to later parts of the array. IIRC Windows does not do this lazily, so there will be one big jump at the beginning.
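A small way to see this behavior yourself (assuming psutil is installed and you have several GiB of free RAM; the array size here is just the ico-5 vertex count, not anything spectral_connectivity actually allocates):

import os
import numpy as np
import psutil

proc = psutil.Process(os.getpid())
print(proc.memory_info().rss // 1024 ** 2, "MiB resident before allocation")

a = np.zeros((20484, 20484), dtype=np.complex128)  # ~6.25 GiB of virtual memory
print(proc.memory_info().rss // 1024 ** 2, "MiB resident right after np.zeros (still small on Linux)")

a[:2048] = 1.0  # writing to the pages makes them resident
print(proc.memory_info().rss // 1024 ** 2, "MiB resident after touching part of the array")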

So you have a few options:

  1. Loop over single frequencies, then average the result yourself. This is what happens under the hood anyway – connectivity scores are computed for each FFT frequency independently, then averaged within the band(s) of interest. This will take quite a bit longer (in the worst case, 16x) but will require about 1/16th the memory. A sketch is given after this list.

  2. Create an OCT-6 source space rather than ICO-5. ICO5 has 20484 vertices, OCT-6 has 8192, yielding a memory reduction of 20484 ** 2 / 8192 ** 2 ≈ 6.25. Something like this would get you going with that modification:

    src = mne.setup_source_space('fsaverage', 'oct6', add_dist='patch', verbose=True)
    
  3. Compute “label activation” in source space, and compute connectivity that way. In reality with 30 EEG channels you’re not anywhere near the resolution of 20484 or 8192, so you could reduce this substantially for example to 362 labels with something like:

    mne.datasets.fetch_hcp_mmp_parcellation(verbose=True)
    labels = mne.read_labels_from_annot('fsaverage', 'HCPMMP1', verbose=True)
    ltcs = mne.extract_label_time_course(
        stcs, labels, fwd['src'], return_generator=True)
    coh, freqs, times, n_epochs, n_tapers = spectral_connectivity(
        ltcs, method='coh', mode='multitaper',
        sfreq=sfreq, fmin=fmin, fmax=fmax, faverage=True, n_jobs=1,
        verbose=True)
    

    this gave me a coh.shape of (362, 362, 1) in about 10 seconds with < 500 MB memory usage.
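If you go with option (1), here is a minimal sketch rather than the exact internal implementation. It assumes the epochs, inv, lambda2, method, and sfreq from the script above, splits 8-13 Hz into example sub-bands, and recreates the stc generator for each pass, since a generator is exhausted after one call:

import numpy as np

# example sub-bands covering 8-13 Hz; adjust the edges so FFT bins are not
# double-counted at the boundaries
sub_bands = [(8., 9.), (9., 10.), (10., 11.), (11., 12.), (12., 13.)]
band_cohs = []
for f_lo, f_hi in sub_bands:
    # the generator is consumed by each call, so recreate it every time
    stcs = apply_inverse_epochs(epochs, inv, lambda2, method,
                                pick_ori="normal", return_generator=True)
    coh_band, freqs, times, n_epochs, n_tapers = spectral_connectivity(
        stcs, method='coh', mode='multitaper',
        sfreq=sfreq, fmin=f_lo, fmax=f_hi, faverage=True, n_jobs=1)
    band_cohs.append(coh_band)

# simple mean over sub-bands; for an exact band average, weight each sub-band
# by its number of FFT bins
coh = np.mean(band_cohs, axis=0)

Each call now only holds the bins of one sub-band, so the peak allocation drops roughly by the number of sub-bands, at the cost of passing over the epochs several times.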

I would go for some variant of (3) personally.


Thanks for the time and effort explaining this issue @larsoner, I really appreciate it. The whole process now completes much faster and without any MemoryError.

Some follow-up questions from your post:

IIUC, the first three lines are responsible for the label activation calculation in source space. But I am still not clear about the purpose of computing “label activation” in source space before the connectivity analysis. To be honest, this is the first time I have encountered this term. Would you mind explaining a little more, or pointing me to a relevant article for further reading? I tried searching the net, but I am not sure whether I am using the right keywords.

To confirm, do the 362 labels come from the ‘HCPMMP1’ parcellation?

Also, is it recommended to use 362 labels for data with 64 or 129 channels? Or should one refer to the ‘source spacing / mm’ column shown in the table attached at the end of this thread?

Maybe a typo, but to my limited knowledge it should be 8196 vertices in total (4098 per hemisphere), instead of 8192.

For future readers (and for my own reference): the spacing codes (ICO-5, OCT-6) refer to the source spacing table in the linked MNE documentation.

[Table extract from the linked documentation]

A handy way to understand how any function in MNE is used is to look at its linked examples. In this case, here is the list for extract_label_time_course:

https://mne.tools/stable/generated/mne.extract_label_time_course.html#examples-using-mne-extract-label-time-course

Can you look at those examples and see if any of them clear this up?


Thanks for the link @larsoner, it helped.

From a technical perspective, do you think only 30 EEG channels are sufficient to compute connectivity in source space? I have seen multiple papers using 128 EEG channels, and the lowest was 64 channels. I appreciate any advice @larsoner.

From a technical perspective, do you think only 30 EEG channels are sufficient to compute connectivity in source space?

30 EEG channels do not provide a lot of spatial specificity, and doing source localization with a template MRI further reduces the effective spatial resolution. I would say a source space analysis is arguably better than doing connectivity in sensor space, because you are at least attempting to separate the neural sources (whose signals are intermixed and detected at the sensor level), but I’m skeptical about how much information you can meaningfully extract from either a source- or sensor-space connectivity result. I’d guess the spatial resolution would be in the several-centimeter / lobe range at that point, which makes a connectivity inference seem pretty difficult, but I could be wrong!


Thanks for the insight, I really appreciate it.

I agree with @larsoner that any attempt to remove volume conduction is better than estimating connectivity in sensor space. 30 channels is definitely on the low end; I usually use at least 64 channels.

2 Likes