Permutation cluster 1-sample test // replicating univariate t-test results between a SciPy test and an MNE test

  • MNE version: 1.4
  • operating system: macOS 13

Hi all,

I am trying to run a permutation cluster 1-sample test on decoding scores (n_subjects × n_time points) to see which clusters are significantly decoded against chance level. However, it doesn’t seem to work, and I have no idea why. It reports finding 1 cluster, but the statistic values (T_obs) are unusually high and the cluster spans the whole time window (all time points), which is certainly incorrect. Here is the code that I am using:

T_obs, clusters, cluster_p_values, H0 = mne.stats.permutation_cluster_1samp_test(
    T1sp_df,
    threshold=thresh,
    stat_fun=mne.stats.ttest_1samp_no_p,
    step_down_p=0.05,
    n_permutations=1000,
    tail=0,
    n_jobs=None,
    seed=65,
    out_type="mask",
    verbose=None,
)

Prior to permutation testing, I ran 1-sample t-tests with the SciPy function (stats.ttest_1samp). It produced the expected results in the form of t-values and p-values. Here is an example of a p-value distribution (significant above the red line) produced by the SciPy test, along with the code snippet:

t, pval = stats.ttest_1samp(T1sp_df, 0.5, alternative='greater')
[Figure 1: p-value distribution across time points from the SciPy t-test, with the significance threshold marked by a red line]

Now, if I understand correctly, MNE’s permutation cluster 1-sample test is by default based on MNE’s ttest_1samp_no_p. When I run the latter on my data to see whether it replicates the initial SciPy test results, I find that it does not - it produces statistics that are unusually high for all time points. Here is how I run this test:

test_no_p = mne.stats.ttest_1samp_no_p(T1sp_df, sigma=1e-3, method='relative')
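
For reference, mne.stats.ttest_1samp_no_p tests the mean against zero (whereas the SciPy call above tests against popmean=0.5), and a nonzero sigma enables the “hat” variance regularization, so the two calls are not directly comparable. A minimal sketch of a like-for-like comparison, assuming T1sp_df is an (n_subjects, n_times) array of accuracies with chance level 0.5:

import numpy as np
from scipy import stats
import mne

chance = 0.5

# SciPy: one-sample t-test against the chance level
t_scipy, _ = stats.ttest_1samp(T1sp_df, chance, alternative='greater')

# MNE: ttest_1samp_no_p tests against 0, so subtract chance first;
# sigma=0 disables the "hat" variance regularization
t_mne = mne.stats.ttest_1samp_no_p(np.asarray(T1sp_df) - chance, sigma=0)

# with matched baselines and sigma=0, the two statistics should agree
print(np.allclose(t_scipy, t_mne))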

Any idea what the problem could be? Maybe I am not seeing something very obvious? Any help would be greatly appreciated!

Best,
Emilia

Hello Emilia,

The two tests you are running (scipy.stats.ttest_1samp vs. the cluster-based permutation test in MNE) are not the same, so the results should be expected to differ. Do you want to implement a cluster-based permutation test, or do you want to run a t-test based on permutation testing?
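
In case it helps to see the two options side by side, here is a minimal sketch (scores is a placeholder name for an (n_subjects, n_times) array with the chance level already subtracted):

import mne

# Option 1: pointwise t-test with permutation-based (max-statistic) correction
T_obs, p_values, H0 = mne.stats.permutation_t_test(scores, n_permutations=1000, tail=0)

# Option 2: cluster-based permutation test (inference at the cluster level)
T_obs, clusters, cluster_pv, H0 = mne.stats.permutation_cluster_1samp_test(
    scores, n_permutations=1000, tail=0, out_type="mask"
)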

Best,

Carina

Hi Carina,

Thanks for your reply!

In the end, I want to perform a cluster-based permutation test, so I tried MNE’s 1-sample cluster permutation test, but I don’t think it is working correctly on my data (as described in the original post, the T_obs values are too high and don’t make sense).

best,
Emilia

Hi Emilia,

I am not sure this will work, but here goes.

Do you know what your expected chance level is? If so, you could create an array with the same dimensions as your actual results, then try:

T_obs, clusters, cluster_p_values, H0 = mne.stats.permutation_cluster_test([results, chance], out_type="mask")

Be sure to have time as the last dimension (as you have in your first post).
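
A minimal sketch of this idea, assuming results is an (n_subjects, n_times) array of decoding accuracies and a chance level of 0.5 (names are placeholders):

import numpy as np
import mne

# constant "chance" condition with the same shape as the real scores
chance = np.full_like(results, 0.5)

# two-condition cluster-based permutation test: real scores vs. chance
T_obs, clusters, cluster_p_values, H0 = mne.stats.permutation_cluster_test(
    [results, chance], n_permutations=1000, out_type="mask"
)

# clusters surviving the 0.05 significance threshold
good_clusters = [c for c, p in zip(clusters, cluster_p_values) if p < 0.05]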

I have been experimenting with something like this recently, namely comparing decoding results obtained with different methods. I just tried using a chance vector instead, and the results I get seem to make sense: I get clusters more or less where I would expect them to be by eyeballing the results.

Best,
Sotiris

Hi Sotiris,

Thanks so much for your suggestion - I actually tried something like this too, and it sort of works, but I was wondering whether there is a cluster permutation function that works on just one condition. Following this discussion, however, I doubt it.

best,
Emilia


Hi Emilia,

Good point. That actually reminded me that I had done something similar a few years back: I had essentially written a permutation function for a similar case. I have sample code here (though it’s in MATLAB and I haven’t yet had time to revisit it properly; I believe it was OK though).

Maybe you can check it for inspiration, and if something looks fishy, let me know! I am going to be revisiting this anyway and writing a Python equivalent to test.

Cheers,
Sotiris

Hi Sotiris,

Thanks for sharing your code! I am not very familiar with MATLAB syntax, but am I correct in understanding that your function works with two conditions (acc1 and acc2)?

Best,
Emilia

Hi Emilia,

Yep. Taking a look back at all that code and the discussion you previously linked to, I see that when comparing to chance we, too, were using a Wilcoxon signed-rank test (the second condition was chance, or rather an estimated chance level computed by shuffling the labels), and when comparing between two conditions we were using the permutations.

The advantage of that is that we also checked directionality (e.g., if the two conditions differed significantly at a time point, we wanted to know which one was “better”), whereas when comparing to chance we only cared about the actual decoding being “better”. But I think the permutations, if done as a one-sided test, should be fine.
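
For what it’s worth, a minimal sketch of the chance comparison with a Wilcoxon signed-rank test, run per time point (assuming scores is an (n_subjects, n_times) array of accuracies and chance is 0.5):

import numpy as np
from scipy import stats

chance = 0.5

# one-sided Wilcoxon signed-rank test at each time point:
# is decoding better than chance?
p_vals = np.array([
    stats.wilcoxon(scores[:, t] - chance, alternative='greater').pvalue
    for t in range(scores.shape[1])
])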

Cheers,
Sotiris