Resampling RuntimeError - please report to developers

Hi all,
some of my data was recorded with a sampling rate of 20,000 Hz, so now I want to bring it down to 1,000 Hz to match the rest of the measurements I was provided with.

When trying to do so with the .resample() method, I get the following error:

RuntimeError: Incorrect number of samples (24998912 != 24999876), please report this error to MNE-Python developers

So here I am. :grimacing:
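For what it’s worth, the two counts in the message differ by only 964 samples, which at 20 kHz is about 48 ms of data – so the file seems to be just slightly shorter than expected. A quick sanity check:

```python
# Sanity check on the numbers from the error message:
# MNE expected 24,999,876 samples but could only read 24,998,912.
expected = 24_999_876
actual = 24_998_912
sfreq = 20_000  # Hz, the recording's sampling rate

missing = expected - actual
print(missing)                 # 964 samples
print(missing / sfreq * 1000)  # ~48.2 ms of data
```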

This is the traceback:

```
  File "C:\Users\Audiometrie\Desktop\MA Leoni\Python\resampling.py", line 42, in <module>
    raw_resampled.resample(samplefrequency)
  File "", line 24, in resample
  File "C:\ProgramData\Anaconda3\lib\site-packages\mne\io\base.py", line 1238, in resample
    data_chunk = self.get_data(
  File "", line 22, in get_data
  File "C:\ProgramData\Anaconda3\lib\site-packages\mne\io\base.py", line 908, in get_data
    getitem = self._getitem(
  File "C:\ProgramData\Anaconda3\lib\site-packages\mne\io\base.py", line 814, in _getitem
    data = self._read_segment(start=start, stop=stop, sel=sel,
  File "", line 24, in _read_segment
  File "C:\ProgramData\Anaconda3\lib\site-packages\mne\io\base.py", line 453, in _read_segment
    _ReadSegmentFileProtector(self)._read_segment_file(
  File "C:\ProgramData\Anaconda3\lib\site-packages\mne\io\base.py", line 2121, in _read_segment_file
    return self.__raw.__class__._read_segment_file(
  File "C:\ProgramData\Anaconda3\lib\site-packages\mne\io\curry\curry.py", line 540, in _read_segment_file
    _read_segments_file(
  File "C:\ProgramData\Anaconda3\lib\site-packages\mne\io\utils.py", line 222, in _read_segments_file
    raise RuntimeError('Incorrect number of samples (%s != %s), please '
                       'report this error to MNE-Python developers'
```

Is there anything I can do to resolve this?
Best wishes!

I’m working with MNE-Python 0.24.1 on Windows 10.

Hello @LeLoewenherz, thanks for reporting! Is there any chance you could share the data that’s causing this problem?

cc @larsoner

Hello @richard and @larsoner ,

I hope it works like this?
https://drive.google.com/drive/folders/1DOPJlGZmDf_jfmze58FFSlJRURZBTkW2?usp=sharing

I use the read_raw_curry() function, and I can add that everything works fine when using only 60 s of the cropped data.

Can you provide the cropped data instead, please? Then we don’t have to download this huge file :slight_smile:

I added it to the folder :slightly_smiling_face:

Hello @LeLoewenherz,

I cannot reproduce the problem on my system (using the original, un-cropped data). I tested:

  • with preloading & multiple jobs: `raw.copy().load_data().resample(1000, n_jobs=8)`
  • without preloading: `raw.copy().resample(1000)`

on MNE-Python 1.0.3 as well as on the latest development version.

The only thing I observed was that without preloading, resampling was taking ages (several minutes), while with the preloaded data, the process finished after about 1 minute (2021 MacBook Pro 13" with an M1 chip).

Did you monitor memory usage during the procedure? I’m wondering whether you simply ran out of memory, causing the read operation to fail… Data with such a high sampling frequency is quite demanding on memory – be sure to use a machine with plenty of RAM.
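As a rough back-of-the-envelope estimate (the channel count below is a made-up example – check `raw.info['nchan']` for the real number): MNE holds preloaded data as float64, so a 20 kHz recording of this length adds up quickly.

```python
# Rough RAM estimate for preloading the recording.
n_samples = 24_999_876   # from the error message (20 kHz recording)
n_channels = 64          # hypothetical channel count; use raw.info['nchan']
bytes_per_sample = 8     # MNE preloads data as float64

ram_gb = n_samples * n_channels * bytes_per_sample / 1e9
print(f"{ram_gb:.1f} GB")  # ~12.8 GB for this example
```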

Best wishes,
Richard

1 Like

Hi @richard ,
thanks a lot for your time!

By my definition, “ages” was about an hour for the cropped data to be resampled, and the original was an overnight job… :smile:

Preloading seems to solve both issues: the time (it still takes about 10 minutes, but that’s fine) and the original error.

Best wishes!

1 Like

Be sure to pass n_jobs so you can make use of all (?) of your CPU cores.
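For the curious, the downsampling step itself can be sketched without MNE at all. This is not MNE’s exact implementation (MNE applies its own anti-aliasing filtering and can parallelize across channels via n_jobs), but `scipy.signal.resample_poly` performs the same kind of anti-aliased decimation:

```python
import numpy as np
from scipy.signal import resample_poly

# One second of a 5 Hz sine sampled at 20 kHz, like the recording.
sfreq_in, sfreq_out = 20_000, 1_000
t = np.arange(sfreq_in) / sfreq_in
x = np.sin(2 * np.pi * 5 * t)

# Polyphase resampling with a built-in anti-aliasing FIR filter:
# 20,000 Hz -> 1,000 Hz is a factor-20 decimation.
y = resample_poly(x, up=1, down=20)
print(y.shape)  # (1000,)
```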

Using all 4 cores does speed up the resampling, but what still takes about 10 minutes here is preloading the data.

Is that RAM-related? (I don’t really know much about the technical side of programming.)

It’s probably your hard drive. My computer has an exceptionally fast SSD, like all MacBook Pros.

I just ran a timing test, out of curiosity:

People often complain about how “expensive” Macs are, but when it comes to performance like disk I/O, they deliver an extremely high value for the money.

Mind you, this is the cheapest, entry-level MacBook Pro from last year that I’m using here.