I have an experimental paradigm which lasts approximately 40 minutes.
I usually just let the MEG run straight through, end up with a single
fif file, and then do all my analysis on that. We recently switched
from a 16-bit to a 32-bit architecture (side point: does this mean
more accurate recordings? what DOES that mean for the end user?), and
I learned something new: the machine will only save files smaller than
2 GB. As a result, I now have two files, myFile.fif (2 GB) and
myFile_1.fif (~1.6 GB), and I have no idea where the split falls
relative to the behavioral data.
What's the best way to proceed from here? I can't analyze them as they
sit, since I can't specify multiple files in mne_process_raw if the
files contain different numbers of trials. Is there a program that can
concatenate them? Should I just try mne_read_raw in Matlab and hope
Matlab doesn't choke on almost 4 GB of data? Any ideas? Thanks -
Sincerely,
Eliezer Kanal
Graduate Student, Bioengineering
Laboratory for Computational Neuroscience
University of Pittsburgh
3520 Forbes Avenue, 2nd floor, R218
Pittsburgh, PA 15213
412-802-6482
Fax: 412-802-6785
mne_process_raw can process multiple files even if they have different
numbers of trials. Just be sure to specify --raw for each file and
--ave for each file (and be sure to save those averages separately,
either using different names in the .ave file or on the command line).
That will produce two average files. Using --gave will then produce a
correctly weighted average of the trials from all of the raw files
(and --gcov the corresponding grand covariance).
Dan
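As a sketch, that two-pass workflow might look like this on the command line. The file and .ave names here are placeholders, and the exact option syntax should be checked against the MNE manual:

```shell
mne_process_raw --raw myFile.fif   --ave myStudy.ave \
                --raw myFile_1.fif --ave myStudy.ave \
                --gave grand-ave.fif --gcov grand-cov.fif
```

Each --ave applies to the --raw file preceding it, and --gave/--gcov combine across all the raw files listed.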
I combine *.fif files by reading them into Matlab and then combining them there. Then I use the Matlab routines provided by the MNE suite to write them back out to *.fif files so that I can use mne_browse_raw and mne_process_raw.
I would be happy to show you what I have done if you like since we are both here at CMU...
The MEG system deliberately splits the files into 2-GB chunks, as some
systems and storage media have trouble handling files larger than 2
GB. In addition, the current version of the fif format limits the file
size to 2 GB.
The 32-bit data format does mean somewhat more accurate recordings,
although in most measurements the benefit is marginal. However, the
wider dynamic range is important when dealing with some unavoidable
and strong interference signals, e.g., from patients with magnetic
implants.
As Daniel wrote, just average the two files separately and then average
the "subaverages", with appropriate weighting according to the number of
trials you have in those subaverages. This latter averaging is easy to
do in the Neuromag software (or in MNE).
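Numerically, the weighting is straightforward. Here is a small numpy sketch (channel counts, sample counts, and trial counts are made up for illustration) showing that combining the two subaverages weighted by their trial counts reproduces the average over all trials in one pass:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-trial responses (trials x channels x samples); the
# two split files contribute different numbers of trials.
trials1 = rng.standard_normal((120, 4, 50))   # from myFile.fif
trials2 = rng.standard_normal((95, 4, 50))    # from myFile_1.fif

# Subaverages computed per file (what averaging each raw file gives).
ave1 = trials1.mean(axis=0)
ave2 = trials2.mean(axis=0)

# Weighted combination by trial count -- identical to averaging all
# 215 trials in a single pass.
n1, n2 = len(trials1), len(trials2)
grand = (n1 * ave1 + n2 * ave2) / (n1 + n2)

# Sanity check against the one-pass average.
assert np.allclose(grand, np.concatenate([trials1, trials2]).mean(axis=0))
```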
Lauri's and Dan's emails pretty much answer everything. Regarding
Matlab, 16 vs. 32 bits does not really make a difference there,
because the data are usually converted to doubles anyway and thus take
8 bytes per sample. What this means in practice is that for large data
sets the brute-force method of reading everything in at once and then
processing it is not feasible; you need to process the data in smaller
pieces. In fact, mne_browse_raw/mne_process_raw do exactly this by
using a ring-buffer data structure.
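The chunked approach can be sketched in a few lines; read_segment below is a hypothetical stand-in for whatever reads one slice of the raw file from disk (it is not an actual MNE function), and the array sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a ~4 GB recording (channels x samples); in reality
# this would be far too large to hold in memory all at once.
n_channels, n_samples, chunk = 8, 10_000, 1_000
data = rng.standard_normal((n_channels, n_samples))

def read_segment(start, stop):
    """Hypothetical reader returning one slice of the raw data."""
    return data[:, start:stop]

# Accumulate per-channel sums chunk by chunk instead of loading
# everything at once.
total = np.zeros(n_channels)
count = 0
for start in range(0, n_samples, chunk):
    seg = read_segment(start, min(start + chunk, n_samples))
    total += seg.sum(axis=1)
    count += seg.shape[1]

channel_means = total / count
# The incremental result matches the all-at-once computation.
assert np.allclose(channel_means, data.mean(axis=1))
```

The same pattern works for filtering or averaging, as long as each chunk (plus any needed overlap) fits in memory.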
Since the split raw data files are really continuous and the data are
just located in different files, I have dreamed of changes to
mne_browse_raw/mne_process_raw to read the files automatically as
needed, without the user knowing about the splitting. There are
already tags in the fif files which facilitate this. However, this is
something which can only be dreamed of at the moment. For now, Lauri's
and Dan's suggestions using the --gave and --gcov options are the most
feasible solution if you are happy with the standard processing.
Just an additional comment: fif files cannot be larger than 2 GB. The
MNE Matlab routines might (erroneously) write larger files, but they
are really not valid, because the 2-GB address space achievable with
32-bit addresses is exceeded.
Thanks for all the help with the loading multiple files problem. It
worked like a charm.
However, I'm having a more serious problem now. It seems that the
Windows version of mne_pd isn't able to read 32-bit files. (I came to
this conclusion because it's not able to read my latest dataset but is
able to read ones recorded earlier, when our system was still 16-bit.)
Is there a way to downsample the fif file using MNE? Also, if you have
any other ideas as to why this may not work, please feel free to shout
those out as well. Thanks!
I think you should consider upgrading your Matlab code to use the MNE
Matlab read routines instead of mne_pd.
Meanwhile, mne_process_raw/mne_browse_raw can do downsampling as
described in the manual. The output files will have float samples,
which I guess mne_pd will read.