I am doing EEG analysis and computing ERP grand averages to compare two of my conditions (say A and B). I want to run a permutation statistical test to look for differences between the two conditions. I think the right function to use is mne.stats.permutation_t_test; however, because I run the test in parallel for each sample of my ERP, the documentation says the p-values are automatically corrected for multiple comparisons by the "tmax" method. Is there a way to disable this correction so that I can apply an FDR correction manually later?
Here is what my code looks like for now:

```python
# shape is n_subjects x n_samples
# cond_A_mean_erp[i, j] is the mean ERP of
# subject i at sample j for condition A
cond_A_mean_erp = [...]
cond_B_mean_erp = [...]

# data.shape = n_subjects x n_samples
# data[i, j] = cond_A[i, j] - cond_B[i, j]
data = [...]

n_permutations = 2000
_, p_values, _ = mne.stats.permutation_t_test(data, n_permutations)
```
The only solution I have found so far is to call the function one sample at a time, but that seems inefficient.
Thank you for your help and I hope that I am clear.
- MNE-Python version: 0.22.1
- operating system: Windows 10