Difference between mne_make_movie & python apply_inverse?

Hi MNE-ers

I have just switched from using the mne_make_movie command (version 2.7.3)
to compute the inverse solution and morph to the average brain, to using the
Python apply_inverse function and morph_data_precomputed to do the same
thing. I am pleased to find that the results are now similar (as one would
hope), but noticeably better (my experiments involve auditory data, and
results that were about a centimeter away from Heschl's gyrus have now moved
to exactly on top of Heschl's gyrus). Obviously I'm delighted, but I just
wanted to check whether the Python version should be expected to give better
results, as I had assumed the two results would be the same. Should they
be?

As far as I can work out, both pieces of code applied the same parameters
(although smoothing and bmin/bmax don't make an appearance in the Python
code, the Python log says '5 smoothing iterations done', so I assume this is
the default).
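
(For reference, a minimal sketch of how that smoothing could be made explicit
on the Python side, reusing the variable names from the script further down;
treating the smooth keyword of compute_morph_matrix as the counterpart of
--smooth is my assumption:)

# Assumption: smooth=5 here corresponds to '--smooth 5' in mne_make_movie.
morph_mat = mne.compute_morph_matrix(subject_from, subject_to,
                                     stc_from.vertno, vertices_to,
                                     smooth=5, subjects_dir=subjects_dir)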

The command line version (split onto several lines for easier reading):

mne_make_movie \
  --inv /inverse-operators/3L-loose0.2-nodepth-reg-inv.fif \
  --meas Participant_1_EvokedAuditory.fif \
  --morph average \
  --morphgrade \
  --subject Participant_1 \
  --stc Participant_1_EvokedAuditory.stc \
  --smooth 5 \
  --snr 1 \
  --bmin -200 \
  --bmax 0 \
  --picknormalcomp

Python:

# Imports assumed for the MNE-Python version in use here;
# subjects_dir is assumed to be defined elsewhere.
import mne
from mne.fiff import Evoked
from mne.minimum_norm import read_inverse_operator, apply_inverse

snr = 1.0
lambda2 = 1.0 / snr ** 2

# Make inverse solution (signed normal component, MNE estimate)
inverse_operator = read_inverse_operator(
    '/inverse-operators/3L-loose0.2-nodepth-reg-inv.fif')
evoked = Evoked('Participant_1_EvokedAuditory.fif')
stc_from = apply_inverse(evoked, inverse_operator, lambda2, "MNE",
                         pick_normal=True)

# First compute the morph matrix for this participant
subject_to = 'average'
subject_from = 'Participant_1'
vertices_to = mne.grade_to_vertices(subject_to, grade=4,
                                    subjects_dir=subjects_dir)
morph_mat = mne.compute_morph_matrix(subject_from, subject_to,
                                     stc_from.vertno, vertices_to,
                                     subjects_dir=subjects_dir)

# Morph to the average brain and save
stc_morphed = mne.morph_data_precomputed(subject_from, subject_to, stc_from,
                                         vertices_to, morph_mat)
stc_morphed.save('Participant_1_EvokedAuditory.stc')

Thanks for any help,

Andy

That should be: --morphgrade 4

hi Andy,

the only obvious difference is the bmin/bmax.

To match the mne_make_movie code you should do:

evoked = Evoked('Participant_1_EvokedAuditory.fif', baseline=(-0.2, None))

do you have MEG only? or EEG + MEG?

HTH
Alex

Hi Alex,

Thanks for your reply.

The data is EEG+MEG combined.

I will try adding in the baseline now and report back - although if this is
the difference it would mean that the result is worse when one adds it in
(which is possible, but would be a bit strange).

You say I should use 'baseline=(-0.2, None)' to match my mne_make_movie
command, but I thought that 'baseline=(-0.2, 0)' would be closer. Wouldn't
'None' in the second parameter make the baseline run from -0.2 to the end of
my evoked file? (The code says 'end of the interval', but I'm not sure what
that refers to.)

Andy

hi Andy,

my bad, you're right: "baseline=(-0.2, 0)" should do it. If you high-passed
the data, it is very possible that baseline correction is not mandatory; it
may even be better without it.
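
(For completeness, a minimal sketch of the corrected call, with the baseline
semantics as discussed above:)

# baseline=(a, b): a=None means 'from the start of the data',
# b=None means 'to the end of the data'; (-0.2, 0) in seconds matches
# --bmin -200 --bmax 0 in milliseconds in mne_make_movie.
evoked = Evoked('Participant_1_EvokedAuditory.fif', baseline=(-0.2, 0))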

Best,
Alex

Hi,

while we are at it, is there actually a preferred way of morphing, i.e. morph_data_precomputed vs morph_data? Also, is there any theoretical pro / con for choosing fsaverage vs inter-subject morphing?

Best,
Denis

hi denis,

while we are at it, is there actually a preferred way of morphing,
i.e. morph_data_precomputed vs morph_data?

morph_precomputed is faster but requires more memory. It gives the same output
as morph_data.
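
(A minimal sketch of the two equivalent routes, reusing the variable names
from the first message in this thread; the grade=4 / smooth=5 values mirror
the settings above and are my assumption for morph_data's keywords:)

# Direct morph: the mapping is recomputed internally on every call.
stc_avg_a = mne.morph_data(subject_from, subject_to, stc_from, grade=4,
                           smooth=5, subjects_dir=subjects_dir)

# Precomputed morph: same output, mapping computed once up front.
stc_avg_b = mne.morph_data_precomputed(subject_from, subject_to, stc_from,
                                       vertices_to, morph_mat)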

Also, is there any theoretical pro / con for choosing fsaverage vs
inter-subject morphing?

do you mean morphing to the average brain of your study or to one random
subject? I am used to morphing to fsaverage, so I cannot really comment based
on prior experience.

Alex

The morph precomputed should give equivalent results to morph_data, but it
allows the user to only compute the morph matrix once, which can be more
efficient depending on the use case.
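
(A minimal sketch of that reuse pattern; the subject name, the stcs
dictionary of per-condition estimates, and subjects_dir are hypothetical
placeholders:)

import mne

# Compute the morph matrix once per subject; all estimates from the same
# source space share the same vertices, so any one of them can supply them.
vertices_to = mne.grade_to_vertices('fsaverage', grade=5,
                                    subjects_dir=subjects_dir)
any_stc = list(stcs.values())[0]
morph_mat = mne.compute_morph_matrix(subject, 'fsaverage', any_stc.vertno,
                                     vertices_to, subjects_dir=subjects_dir)

# ...then reuse it for every condition of that subject.
stcs_morphed = {}
for cond, stc in stcs.items():
    stcs_morphed[cond] = mne.morph_data_precomputed(subject, 'fsaverage', stc,
                                                    vertices_to, morph_mat)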

I generally morph to fsaverage for three reasons. One, I've stared at that
brain longest, so I recognize features on it more quickly. Two, when doing
across subjects tests, morphing all subjects to a different brain
(fsaverage) avoids any potential bias (such as not smoothing as much in a
transform) that could otherwise favor the one subject everyone else's data
is morphed to. Three, fsaverage is a grade 5 (20484 vertices total) source
space and thus higher-resolution than my other subjects, which typically
have around 7000 total dipoles. Morphing to a higher resolution should help
minimize loss of spatial information in the transform.

Eric

Alex, Eric,

thanks, your replies match my intuitions, which I've never shared though.
So the morph matrix saves you some time if you intend to morph many conditions from one subject to fsaverage; yes, that makes sense. Although I have to say, with n_jobs=-1, morph_data for all of my conditions is just a matter of 3 minutes...

Cheers,
Denis

Hi Alex,

Thanks for your reply -

I have re-run the Python analysis, and adding in the baseline doesn't really
affect the results (I'm doing single-trial analysis, so this is probably to
be expected), so there remains a difference between the outputs of the two
sets of code.

Do I understand from your (first) reply that, to the best of your knowledge,
the two bits of code I posted should do absolutely identical things
(assuming I've added the baseline to the Python code)? I'm very pleased with
the Python results, but a lot of people in my group are using code similar or
identical to my original and were under the impression that there was no
difference between the command-line programs and their 'equivalent' Python
versions, so I would like to be sure. If the reason the two outputs differ is
that the underlying Python functions are an improvement on the equivalent
command-line functions (even in some seemingly trivial way, like using
Hanning windows instead of rectangular windows or something), then this would
explain the difference (and is a good reason for us to move to MNE-Python!).
I should be clear that, if there are differences, I'm not asking for a
breakdown of what they are! Just confirmation (or otherwise) that the two
pieces of code are not expected to give exactly the same results.

You asked about the data: it is EEG+MEG combined.

Thanks for all your help with this, much appreciated.

Andy

hi Andy,

there should not be any difference in theory, and we now have a big test
suite that checks the results of mne-python against the results given by the
current C code. However, we never test 100% of the cases... could you share
a minimal set of data and scripts to reproduce the difference between the
2 estimated stc files (one from Python and one from the C code)?
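
(Something like the following minimal comparison script would do; the file
names are hypothetical, and I'm assuming mne.read_source_estimate is
available in the version in use:)

import numpy as np
import mne

# Load the two morphed estimates (Python vs. C) and report the largest
# absolute difference between them.
stc_py = mne.read_source_estimate('Participant_1_EvokedAuditory_python')
stc_c = mne.read_source_estimate('Participant_1_EvokedAuditory_c')
print(np.abs(stc_py.data - stc_c.data).max())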

Best,
Alex

Hi MNE-ers

We've found the reason for the difference between my MNE-C and MNE-Python
results. Alex is correct: there is no difference between the two versions of
the code, but the behaviour controlled by '--signed' is off in the C version
by default and on in the Python version by default. As a result, when
averaged across participants, the source time courses end up different.
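
(If anyone wants to mimic the unsigned C default on the Python side, a
minimal sketch, operating directly on the NumPy data array of the morphed
estimate from the script above:)

import numpy as np

# Discard the sign of the normal component before averaging across
# participants, which is what mne_make_movie does unless --signed is given.
data_unsigned = np.abs(stc_morphed.data)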

Best,

Andy