Errors with loading .snirf files in mne-nirs

  • MNE version: 0.24.0
  • MNE-NIRS version: 0.1.2
  • operating system: Windows 10

I consistently get the following error when attempting to load .snirf files. The same error arises when trying to load NIRx output folders:
TypeError: expected dtype object, got 'numpy.dtype[float64]'

Minimal reproducible example, copied directly from the tutorials:

import mne
import mne_nirs
import os

fnirs_data_folder = mne.datasets.fnirs_motor.data_path()
fnirs_raw_dir = os.path.join(fnirs_data_folder, 'Participant-1')
raw_intensity = mne.io.read_raw_nirx(fnirs_raw_dir).load_data()

The error seems to happen at different points, e.g. when reading the annotations from the .snirf file. Here is the complete error output:

c:\Users\xxx\Desktop\BCI\Bachelor\_ml\fnirs_ml.py in preprocess(path, verbose)
      44     if verbose:
      45         ic("Loading ", path)
---> 46     raw_intensity = mne.io.read_raw_snirf(path, preload=True)
     47     raw_od = mne.preprocessing.nirs.optical_density(raw_intensity)
     48 

C:\ProgramData\Anaconda3\lib\site-packages\mne\io\snirf\_snirf.py in read_raw_snirf(fname, optode_frame, preload, verbose)
     52     mne.io.Raw : Documentation of attribute and methods.
     53     """
---> 54     return RawSNIRF(fname, optode_frame, preload, verbose)
     55 
     56 

<decorator-gen-271> in __init__(self, fname, optode_frame, preload, verbose)

C:\ProgramData\Anaconda3\lib\site-packages\mne\io\snirf\_snirf.py in __init__(self, fname, optode_frame, preload, verbose)
    366                         info["subject_info"]['birthday'] = birthday
    367 
--> 368             super(RawSNIRF, self).__init__(info, preload, filenames=[fname],
    369                                            last_samps=[last_samps],
    370                                            verbose=verbose)

<decorator-gen-197> in __init__(self, info, preload, first_samps, last_samps, filenames, raw_extras, orig_format, dtype, buffer_size_sec, orig_units, verbose)

C:\ProgramData\Anaconda3\lib\site-packages\mne\io\base.py in __init__(self, info, preload, first_samps, last_samps, filenames, raw_extras, orig_format, dtype, buffer_size_sec, orig_units, verbose)
    286         # If we have True or a string, actually do the preloading
    287         if load_from_disk:
--> 288             self._preload_data(preload)
    289         self._init_kwargs = _get_argvalues()
    290 

C:\ProgramData\Anaconda3\lib\site-packages\mne\io\base.py in _preload_data(self, preload)
    565             data_buffer = None
    566         logger.info('Reading %d ... %d  =  %9.3f ... %9.3f secs...' %
--> 567                     (0, len(self.times) - 1, 0., self.times[-1]))
    568         self._data = self._read_segment(
    569             data_buffer=data_buffer, projector=self._projector)

C:\ProgramData\Anaconda3\lib\site-packages\mne\io\base.py in times(self)
   1577     def times(self):
   1578         """Time points."""
-> 1579         out = _arange_div(self.n_times, float(self.info['sfreq']))
   1580         out.flags['WRITEABLE'] = False
   1581         return out

TypeError: expected dtype object, got 'numpy.dtype[float64]'

Hello @esbenkc and welcome to the forum!

Another user has seen a similar error message and fixed the problem by updating NumPy.

Maybe you can try this too.
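For example, here is one quick way to check the installed NumPy version before deciding whether to upgrade. The version_tuple helper is just an illustrative parser written for this post, not part of NumPy, and the 1.20 threshold is only a rough suggestion:

```python
import numpy as np

def version_tuple(v):
    """Parse a version string into a tuple of ints for comparison,
    e.g. "1.19.5" -> (1, 19, 5). Stops at the first non-numeric part,
    so pre-release suffixes like "1.20.0rc1" are handled as (1, 20, 0)."""
    parts = []
    for p in v.split("."):
        digits = ""
        for ch in p:
            if ch.isdigit():
                digits += ch
            else:
                break
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

print("NumPy version:", np.__version__)
if version_tuple(np.__version__) < (1, 20):
    print("Consider upgrading, e.g.: pip install --upgrade numpy")
```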

If it doesn’t help, can you please share the output of

import mne
mne.sys_info()

Thanks!

Richard

cc @rob-luke

Edit: As a general remark, we strongly recommend following our official installation instructions, so you’ll end up with a fresh, dedicated MNE-Python virtual environment containing the correct versions of all dependencies.


Thank you!

To fix it, I ran conda create --name=mne --channel=conda-forge mne as you suggested. It ran for a long time fetching some packages, so I cancelled the process and activated the environment, which seemed to have all the necessary packages anyway. I don’t know why it took so long.

NumPy had been pinned to a version below 1.19 because that is a TensorFlow requirement, which limits the ability to work with both at once. I hope TensorFlow updates that dependency :sweat_smile:

Edit: Seems like Keras works fine with the updated NumPy.

It may run for a considerable amount of time on Windows; 20 or 30 minutes is not unheard of :slight_smile: It’ll be much faster if you use mamba instead of conda, but we currently don’t mention this in our documentation.
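For the record, here is one possible way to go the mamba route — a sketch only, since the exact commands depend on your conda setup:

```shell
# Install mamba into the base environment (one common approach):
conda install --name=base --channel=conda-forge mamba

# Then create the MNE environment with the same arguments as the conda command:
mamba create --name=mne --channel=conda-forge mne
```

mamba is a drop-in replacement for conda's solver, so the create invocation takes the same flags.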


Okay, interesting! I have looked at mamba before, and it seems like I’ll have to use it now :wink:
