Memory leak when using GPU acceleration for filtering and resampling

- MNE-Python version: 0.23.0
- operating system: Ubuntu

Hi, I’m using CUDA to accelerate filtering. However, I found that the memory is not released even after I run:

import gc
import mne

fpath = ***
raw = mne.io.read_raw_edf(fpath, preload=True)
mne.utils.set_config('MNE_USE_CUDA', 'true')
raw.notch_filter(50, n_jobs='cuda')

del raw
gc.collect()

If I don’t set n_jobs='cuda', things look fine. Is this a problem with CuPy?
Memory usage also becomes extremely large during filtering, and it is not released until I kill the program.

This works fine on Windows 10.
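One thing that may matter here: CuPy caches allocations in a device memory pool and a pinned (host) memory pool, so deleted arrays are not necessarily handed back to the OS or the driver. A minimal sketch of explicitly draining those pools after deleting the data (using CuPy's standard memory-pool API; the allocation below is just a stand-in for what the CUDA filtering does) looks like:

import gc
import numpy as np
import cupy

pool = cupy.get_default_memory_pool()
pinned_pool = cupy.get_default_pinned_memory_pool()

# Stand-in for the allocations done by the CUDA filtering.
arrays = [cupy.array(np.random.randn(100, 1000)) for _ in range(100)]

# Drop the Python references and garbage-collect them.
del arrays
gc.collect()

# The pools may still hold the freed blocks; release them explicitly.
pool.free_all_blocks()
pinned_pool.free_all_blocks()

If the memory still does not come back after free_all_blocks(), that would point to a real leak rather than pool caching.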

I’m tagging @larsoner here, who might have some ideas as to what’s going on.

We don’t do any special gymnastics with CuPy, so I strongly suspect this is a bug on their end. @BarryLiu97, can you try allocating a bunch of CuPy arrays in a loop like:

import numpy as np
import cupy

for _ in range(1000):
    cupy.array(np.random.RandomState(0).randn(100, 1000))

and see if it causes memory usage to go up?
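To make that comparison a bit more objective than watching a system monitor, here is a rough sketch that prints the process RSS and CuPy's device pool size around the loop (psutil is an extra dependency used only for this check, not something MNE or CuPy requires):

import os
import numpy as np
import cupy
import psutil

proc = psutil.Process(os.getpid())
pool = cupy.get_default_memory_pool()

def report(label):
    # RSS is host memory as seen by the OS; total_bytes() is what the
    # CuPy device memory pool is currently holding on to.
    rss_mb = proc.memory_info().rss // 2**20
    pool_mb = pool.total_bytes() // 2**20
    print(f"{label}: rss={rss_mb} MB, pool={pool_mb} MB")

report("before")
for _ in range(1000):
    cupy.array(np.random.RandomState(0).randn(100, 1000))
report("after")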

On Windows it didn’t increase memory usage. On Ubuntu, the initial memory usage is

[screenshot: memory usage before running the loop]

and after running the code in the console it became

[screenshot: memory usage after running the loop]

It won’t go down until I kill the program.

I suppose that’s a CuPy problem. I found a Stack Overflow thread, “Memory leak in python for loop even if I delete all variables at the end of each iteration”, but its suggestions didn’t work for me.
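If the cached memory pool turns out to be the culprit, one workaround described in CuPy's memory-management documentation is to disable the pools entirely before running the CUDA filtering, at the cost of slower allocations; I haven't verified that this fixes this particular case:

import cupy

# Fall back to plain cudaMalloc/cudaFree instead of the device memory pool.
cupy.cuda.set_allocator(None)
# Disable the pinned (host) memory pool as well.
cupy.cuda.set_pinned_memory_allocator(None)

With the pools disabled, allocations go straight to the CUDA driver, so memory should be returned as soon as the arrays are garbage-collected.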