Hi, I’m using CUDA to accelerate the filtering. However, I found that the GPU memory wouldn’t be released even after running:
```python
import gc

import mne

fpath = ***
raw = mne.io.read_raw_edf(fpath, preload=True)
mne.utils.set_config('MNE_USE_CUDA', 'true')
raw.notch_filter(50, n_jobs='cuda')
del raw
gc.collect()
```
If I don’t set `n_jobs='cuda'`, everything looks fine. Is this a problem with cupy?
Also, memory usage became extremely large during filtering, and it wasn’t released until I killed the program.
We don’t do any special gymnastics with cupy, so I strongly suspect this is a bug on their end. @BarryLiu97 can you try allocating a bunch of cupy arrays in a loop, like:
```python
import numpy as np
import cupy

for _ in range(1000):
    cupy.array(np.random.RandomState(0).randn(100, 1000))
```
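One thing worth checking before calling it a bug: by default cupy keeps freed allocations in an internal memory pool rather than returning them to the driver, so tools like `nvidia-smi` will keep showing the memory as used even after `del` and `gc.collect()`. A minimal sketch of explicitly draining the pool (the helper name is my own; it assumes cupy is installed and a CUDA device is available):

```python
import gc


def release_gpu_memory():
    """Return cupy's cached GPU memory to the driver.

    cupy recycles freed allocations through its default memory pool,
    so memory can look "leaked" from outside the process. Freeing the
    pool's blocks after dropping Python references releases it.
    """
    import cupy  # assumed available; requires a CUDA-capable GPU

    gc.collect()  # drop any lingering Python references first
    cupy.get_default_memory_pool().free_all_blocks()
    cupy.get_default_pinned_memory_pool().free_all_blocks()
```

If calling something like this after `del raw` makes the memory usage drop, the behavior is pool caching rather than a leak.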