Continuation (from the FieldTrip list) of the thread on writing fif files

Hi Alexandre, Peter,

While it's not nearly as problematic as going from 2 to 18 GB, I also
experience file size increases (usually an exact doubling, minus the last
small buffer) when rewriting fif files, not only in MATLAB but also with
the Python tools. In my case, I've always wondered whether it's just a
difference in the default writing precisions between the modern MNE
package and our system (Neuromag 122), in which case I'm content with
storing the data in the higher precision. In any case, would it be
possible to include a precision argument in the Python writing tool?
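[As an aside, the doubling Andy describes is what you would expect if raw buffers stored as 16-bit integers are rewritten as 32-bit floats. A minimal back-of-the-envelope sketch, assuming the data buffers dominate the file size; the channel count comes from the Neuromag 122 mentioned above, while the recording length and sampling rate are hypothetical:]

```python
# Why rewriting a 'short'-format raw file as 'single' roughly doubles it:
# each sample goes from 2 bytes (16-bit integer) to 4 bytes (32-bit float).
n_channels = 122                 # Neuromag 122 system (from the thread)
n_samples = 600 * 1000           # hypothetical: 10 minutes at 1 kHz

bytes_short = n_channels * n_samples * 2   # 16-bit integers
bytes_single = n_channels * n_samples * 4  # 32-bit floats

print(bytes_single / bytes_short)  # → 2.0
```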

Cheers,
Andy

Hey Andy,

Yes, it should be possible to implement a precision argument in mne-python.
I should be able to get to it this week.

Cheers,
Eric

Hey!

The reason why, e.g., mne_browse_raw (and the MATLAB writing routines) write raw files as 32-bit floats is that there may have been intervening operations, and precision would be lost if the data were written back as 16-bit integers.
MNE-MATLAB does not write 64-bit (double-precision) floats, and I do not think MNE-Python does either. This data type is not supported by the fif raw data reading routines in the MNE C code.

- Matti

Hey MNE-ers,

We have now implemented raw saving for double and short formats in
mne-python. However, take note:

1) 'single' (32-bit floating point) is the standard format for raw files in
MNE C code and mne-python. This is the safest choice.
2) 'double' (64-bit floating point) format, while higher precision, will
not be read by mne command-line tools (MNE C code).
3) 'short' (16-bit) format, while it requires half the space of 'single',
results in lost precision. This option is added mostly for users who read
in a raw file already in short format, modify it in some minor way (e.g.,
correcting a digitization position), and then write the file out. It is not
recommended to use 'short' format to save data that has been processed
(e.g., filtered).
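[To make the precision trade-off in points 1 and 3 concrete, here is a small sketch using only the standard library. It round-trips a value through a 32-bit float and through a 16-bit scaled integer; the sample value and the calibration factor are hypothetical stand-ins, not values from any real fif file:]

```python
import struct

# Hypothetical sample value (data are held internally as 64-bit floats).
x = 1.2345678901234567e-12

# 'single': round-trip through a 32-bit float. Small rounding error.
x32 = struct.unpack('<f', struct.pack('<f', x))[0]

# 'short': raw fif shorts hold scaled integers; mimic the quantization
# with a hypothetical calibration factor. Much coarser error.
cal = 1e-13
x16 = int(round(x / cal)) * cal

print(abs(x - x32))  # tiny float32 rounding error
print(abs(x - x16))  # much larger 16-bit quantization error
```

The float32 error is bounded by the value's magnitude times 2^-24, while the short-format error is bounded by half the calibration step, which is why 'short' is only safe when the data were already quantized to that grid (point 3 above).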

Cheers,
Eric