- MNE version: 1.4
- operating system: macOS 13
Hi all,
I am trying to run a permutation cluster one-sample test on decoding scores (an array of shape n_subjects × n_timepoints) to find which clusters are decoded significantly above chance level. However, it doesn't seem to work and I have no idea why. It reports finding one cluster, but the statistic values (T_obs) are unusually high and the cluster spans the whole time window (all of the time points), which is certainly incorrect. Here is the code I am using:
```python
T_obs, clusters, cluster_p_values, H0 = mne.stats.permutation_cluster_1samp_test(
    T1sp_df,
    threshold=thresh,
    stat_fun=mne.stats.ttest_1samp_no_p,
    step_down_p=0.05,
    n_permutations=1000,
    tail=0,
    n_jobs=None,
    seed=65,
    out_type="mask",
    verbose=None,
)
```
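In case it matters, `thresh` is meant to be the usual cluster-forming threshold taken from the t distribution. Here is a sketch of how I set it up (`n_subjects` here is a stand-in for my actual sample size, and 0.05 two-tailed is just my choice of cluster-forming p):

```python
import numpy as np
from scipy import stats

n_subjects = 20  # stand-in for my actual number of subjects
p_thresh = 0.05  # cluster-forming p-value

# two-tailed cluster-forming threshold from the t distribution
thresh = stats.t.ppf(1 - p_thresh / 2, df=n_subjects - 1)
print(thresh)  # ~2.09 for 20 subjects
```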
Prior to the permutation testing, I ran one-sample t-tests with the scipy function (`scipy.stats.ttest_1samp`). It produced the expected results in the form of t-values and p-values. Here is an example of the resulting p-value distribution (significant above the red line), along with the code snippet:
```python
t, pval = stats.ttest_1samp(T1sp_df, 0.5, alternative='greater')
```
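To double-check what this call is doing, I verified on synthetic accuracy-like scores (fake data, just for illustration) that scipy's result matches the textbook one-sample t statistic computed against the chance level of 0.5:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(65)
# fake decoding scores: 20 subjects x 50 time points, hovering around chance (0.5)
scores = rng.normal(loc=0.52, scale=0.05, size=(20, 50))

# scipy's test against chance; 'greater' affects only the p-values, not t
t, pval = stats.ttest_1samp(scores, 0.5, alternative='greater')

# manual one-sample t statistic against 0.5, per time point
n = scores.shape[0]
t_manual = (scores.mean(axis=0) - 0.5) / (scores.std(axis=0, ddof=1) / np.sqrt(n))

assert np.allclose(t, t_manual)
```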
Now, if I understand correctly, MNE's permutation cluster one-sample test is by default based on MNE's `ttest_1samp_no_p`. When I run the latter on my data to see whether it replicates the initial scipy results, it does not - it produces statistics that are unusually high for all time points. Here is how I run this test:
```python
test_no_p = mne.stats.ttest_1samp_no_p(T1sp_df, sigma=1e-3, method='relative')
```
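If I read the docstring right, `ttest_1samp_no_p` tests the mean against zero (with the optional `sigma` variance regularization), whereas my scipy call tests against 0.5. A quick numpy sketch of a plain t statistic against 0 (ignoring the `sigma` adjustment) on accuracy-like fake scores already gives huge values at every time point, which looks a lot like what I am seeing - maybe this is related?

```python
import numpy as np

rng = np.random.default_rng(65)
# accuracy-like fake scores: 20 subjects x 50 time points, near chance (0.5)
scores = rng.normal(loc=0.52, scale=0.05, size=(20, 50))

# plain one-sample t statistic against 0 (my understanding of what
# ttest_1samp_no_p computes, without the sigma adjustment)
n = scores.shape[0]
t_vs_zero = scores.mean(axis=0) / (scores.std(axis=0, ddof=1) / np.sqrt(n))

# far larger than any plausible t against chance level
print(t_vs_zero.min())
```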
Any idea what the problem can be? Maybe I am not seeing something very obvious? Any help will be greatly appreciated!
Best,
Emilia