"Value too large to be stored in data type" when running mne_mark_bad_channels in tcsh

To whom it may concern:

When I try to run mne_mark_bad_channels in tcsh on a raw .fif file (about
3.25 GB), I get an error like the following:

"/Volumes/data/....../A0007.fif ... [failed] fseek : Value too large to be
stored in data type"

As far as I can tell, this seems to be some kind of Unix error rather than
something specific to MNE. However, it happens regardless of which drive the
data are on, and other functions that need to open the same file (such as
mne_process_raw) work fine, so I don't think it's a permission or filesystem
issue. Does anyone have any idea what might be causing this?

Thanks for your input,
Steve

Stephen Politzer-Ahles
New York University, Abu Dhabi
Neuroscience of Language Lab
http://www.nyu.edu/projects/politzer-ahles/

Hey Steve,

It is possible that this has to do with file positions being stored as
32-bit integers. I thought FIFF had an effective file-size limit of 2 GB for
that reason (the largest offset a 32-bit signed integer can hold), but I'm
not 100% sure. If nobody beats me to a more definitive answer and you can't
find anything about this yourself by tomorrow or Tuesday, I can look into it
more then.
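
Just to illustrate the arithmetic (a quick back-of-the-envelope check in
Python; this is not MNE code, just the numbers):

    max_offset = 2 ** 31 - 1      # largest value a 32-bit signed integer holds
    file_size = 3.25 * 1024 ** 3  # your ~3.25 GB raw file, in bytes

    print(max_offset / 1024.0 ** 3)  # ~2.0 GB -- the effective FIFF limit
    print(file_size > max_offset)    # True -- a 32-bit fseek offset overflows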

Cheers,
Eric

Hi Eric,

Thanks, I will keep trying things out in the meantime. I was also worried
about file-size limits, but it seems strange that other functions like
mne_process_raw still work on my large files.

If I don't find a solution, is it possible instead to just run
mne_mark_bad_channels on the averaged data (and make sure to use the
averaged file, rather than the raw one, as the --meas file in later steps of
the pipeline)?

Best,
Steve

Stephen Politzer-Ahles
New York University, Abu Dhabi
Neuroscience of Language Lab
http://www.nyu.edu/projects/politzer-ahles/

Steve,

How did you write this file?

A standard .fif file should not exceed 2 GB.

Alex

Hi Alexandre,

It started as Yokogawa .con data; I wrote it to .fif using mne.gui.kit2fiff
from the mne-python tools.

Best,
Steve

Stephen Politzer-Ahles
New York University, Abu Dhabi
Neuroscience of Language Lab
http://www.nyu.edu/projects/politzer-ahles/

hi Steve,

Looks like you've hit a problem in our Python fiff writing code: it does not
split .fif files to avoid exceeding the size limit.

We'll look into it asap
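
In the meantime, a possible workaround is to crop the recording into pieces
that each stay under 2 GB and save them separately. A rough, untested sketch
with the Python API (the exact reader function may differ depending on your
mne-python version):

    import mne

    # Read the oversized file and write it back out in two halves,
    # each below the 2 GB FIFF limit. (Newer API shown; older versions
    # used mne.fiff.Raw instead of mne.io.read_raw_fif.)
    raw = mne.io.read_raw_fif('A0007.fif', preload=True)

    half = raw.times[-1] / 2.0  # midpoint of the recording, in seconds

    raw.copy().crop(tmin=0.0, tmax=half).save('A0007_part1_raw.fif')
    raw.copy().crop(tmin=half).save('A0007_part2_raw.fif')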

Thanks for the report.

Best,
Alex

Hi Steve,

Just FYI, MNE-C does not do the splitting in mne_kit2fiff either.

- Matti

Hi Matti,

Thanks for the heads-up. In this case, how can I analyse a large file?
Should I split the raw file into multiple raw files, and then somehow feed
all these raw files into mne_process_raw when I want to create -ave.fif and
-cov.fif files?

Thank you,
Steve

Stephen Politzer-Ahles
New York University, Abu Dhabi
Neuroscience of Language Lab
http://www.nyu.edu/projects/politzer-ahles/

Hi Alexandre (et al),

I uploaded one of my .fif files and the corresponding bad-channel list to
Google Drive, in case it helps you figure out the issue. The command that
caused the error was:

set subject=A0007
mne_mark_bad_channels --bad $SAMPLE/$subject/badch $SAMPLE/$subject/$subject.fif

Best,
Steve
A0007.fif: https://docs.google.com/file/d/0B6-m45Jvl3ZmOFR4WUxadUVZZ00/edit?usp=drive_web
badch: https://docs.google.com/file/d/0B6-m45Jvl3ZmdmxvZTlJcnMtRDA/edit?usp=drive_web

Stephen Politzer-Ahles
New York University, Abu Dhabi
Neuroscience of Language Lab
http://www.nyu.edu/projects/politzer-ahles/