t-test in source space?

Dear MNE Experts,

The new mne-python tools look really nice to me and show lots of promise, especially for adding statistical functionality to MNE. Unfortunately, even after trying to go through the examples, I can't figure out a way to do a t-test between conditions in source space.

I would greatly appreciate any help or advice I could get with this.

Thanks in advance,
Will

Hi Will,

You can use the scipy.stats.ttest_* functions to run analytical
t-tests on source estimates.

mne-python does not reimplement these standard stats functions.
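
For example, something along these lines (a rough sketch, assuming one stc per
subject and condition; the subject list and file names are just placeholders):

import numpy as np
from scipy import stats
import mne

# hypothetical: one source estimate per subject and per condition
subjects = ['sub01', 'sub02', 'sub03']
X1 = np.array([mne.read_source_estimate('%s_cond1' % s).data for s in subjects])
X2 = np.array([mne.read_source_estimate('%s_cond2' % s).data for s in subjects])
# X1 and X2 have shape (n_subjects, n_vertices, n_times)

# paired t-test across subjects at every vertex and time point
t_vals, p_vals = stats.ttest_rel(X1, X2, axis=0)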

I am personally curious to know more about what exactly you are doing
(how many subjects, MNE vs. dSPM, etc.), and I am glad that you have found
your way with mne-python!

Cheers,
Alex

Hi Alex,

Thanks for the reply. In addition to using the standard scipy.stats functions, it would be nice to be able to take advantage of some of the fancier tools you've developed in mne-python, like the permutation_t_test and permutation_cluster_test functions. The examples you give for their usage are in sensor space. I've tried simply feeding them stc data instead, with this kind of usage:

from mne.stats import permutation_cluster_test

threshold = 2.1
T_obs, clusters, cluster_p_values, H0 = \
    permutation_cluster_test([cond1_stc.data, cond2_stc.data],
                             n_permutations=1000, threshold=threshold,
                             tail=0, n_jobs=7)

But when I do shape(T_obs), I get the following: (241,). I suppose that is the number of samples, but there is no information about the vertices. It's a similar story when I try using permutation_t_test.

Am I missing something obvious here, or do these functions simply not work for stc/source files? Perhaps I'm just calling them wrong or misunderstanding the output. Either way, I'd appreciate any help you could give.

Thanks again,
Will

Hi Will,

Thanks for the reply. In addition to using the standard scipy.stats functions, it would be nice to be able to take advantage of some of the fancier tools you've developed in mne-python, like the permutation_t_test and permutation_cluster_test functions.

permutation_t_test uses the t-max correction for multiple comparisons, so it
should work easily in source space.

For the cluster-level stats you need to pass a connectivity matrix to define
spatio-temporal clusters. This feature is not very well documented, but I can
send you a script if you want to give it a try.
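
For the t-max case, something like this should work (a rough sketch; it
assumes X1 and X2 are arrays of shape (n_subjects, n_vertices), e.g. source
amplitudes averaged over a time window, one row per subject):

from mne.stats import permutation_t_test

# paired design: test the per-subject differences against zero
X = X1 - X2
T_obs, p_values, H0 = permutation_t_test(X, n_permutations=10000, tail=0)
# p_values come corrected for multiple comparisons by the t-max procedure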

The examples you give for their usage are in sensor space. I've tried simply feeding them stc data instead, with this kind of usage:

from mne.stats import permutation_cluster_test

threshold = 2.1
T_obs, clusters, cluster_p_values, H0 = \
    permutation_cluster_test([cond1_stc.data, cond2_stc.data],
                             n_permutations=1000, threshold=threshold,
                             tail=0, n_jobs=7)

permutation_cluster_test cannot be used this way: the rows of stc.data are
vertices, not observations, and without a connectivity matrix the function
knows nothing about the spatial structure of the cortical mesh.

But when I do shape(T_obs), I get the following: (241,). I suppose that is the number of samples, but there is no information about the vertices. It's a similar story when I try using permutation_t_test.

You should assemble, for each condition, a matrix whose rows are the
observations. That is not what stc.data is: it does not really make sense
to use vertices as observations.
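
For example, with one stc per epoch (or per subject) and a given time index,
here is a sketch of what I mean by assembling observations (stcs_cond1,
stcs_cond2 and t_idx are placeholders):

import numpy as np

# one SourceEstimate per epoch (or per subject) in each condition
X1 = np.array([stc.data[:, t_idx] for stc in stcs_cond1])
X2 = np.array([stc.data[:, t_idx] for stc in stcs_cond2])
# X1 and X2 now have shape (n_observations, n_vertices)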

Am I missing something obvious here, or do these functions simply not work for stc/source files? Perhaps I'm just calling them wrong or misunderstanding the output. Either way, I'd appreciate any help you could give.

Hope the comments above help.

Cheers,
Alex

That's really helpful, Alex. Thank you.

So just to be clear about the matrix I should construct, my understanding is that it should contain, for example, a row for each vertex (approx 20K) and a column for each sample (in my case 241 due to decimating the timecourse). I'll go ahead and assume that's right unless I hear otherwise.

Thanks again, and I'll let you know how it goes if you're interested.

-Will

Hi Will,

So just to be clear about the matrix I should construct, my understanding is that it should contain, for example, a row for each vertex (approx 20K) and a column for each sample (in my case 241 due to decimating the timecourse). I'll go ahead and assume that's right unless I hear otherwise.

You cannot do stats with only one or two stc files. To do stats across epochs,
you need an stc file for each epoch (the epochs form your observations); to do
stats across subjects, you need one stc per subject and per condition.

You can then pass the stats functions arrays of shape [n_observations, n_tests]
(one array per condition), where n_tests is typically n_vertices, i.e. you do
one test per location in the brain. n_tests can also be n_vertices x n_times.
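
For the n_vertices x n_times case, a rough sketch (stcs_diff is a placeholder
list with one difference stc, cond1 minus cond2, per subject):

import numpy as np
from mne.stats import permutation_t_test

X = np.array([stc.data for stc in stcs_diff])    # (n_subjects, n_vertices, n_times)
n_subjects, n_vertices, n_times = X.shape
X = X.reshape(n_subjects, n_vertices * n_times)  # one test per vertex/time pair
T_obs, p_values, H0 = permutation_t_test(X, n_permutations=1000, tail=0)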

Hope this helps,
Alex