Permutation_t_test without tmax correction

Hi,

I am doing EEG analysis and computing ERP grand averages to compare two of my conditions (say A and B). I want to run a permutation statistical test to look for differences between the two conditions. I think the right function is mne.stats.permutation_t_test; however, because I run the test for every time sample of my ERP at once, the documentation says the p-values are automatically corrected for multiple comparisons with the “tmax” method. Is there a way to disable this correction so that I can apply FDR correction manually later?

Here is what my code looks like for now:

# shape is n_subject x n_samples
# cond_A_mean_erp[i, j] is the mean ERP of
# subject i at sample j for the condition A
cond_A_mean_erp = [...]
cond_B_mean_erp = [...]

# data.shape = n_subject x n_samples
# data[i, j] = cond_A_mean_erp[i, j] - cond_B_mean_erp[i, j]
data = [...]

n_permutations = 2000
_, p_values, _ = mne.stats.permutation_t_test(data, n_permutations)

The only solution I have found so far is to call the function one sample at a time, but that seems inefficient.
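Something like this is what I mean (a rough sketch reusing the data and n_permutations variables from above; the column slicing keeps each time sample 2D so permutation_t_test accepts it):

import numpy as np
import mne

n_samples = data.shape[1]
p_uncorrected = np.empty(n_samples)
for j in range(n_samples):
    # with a single test per call, the "tmax" is the maximum over one value,
    # so the returned p-value is effectively uncorrected
    _, p, _ = mne.stats.permutation_t_test(data[:, [j]], n_permutations)
    p_uncorrected[j] = p[0]

# manual FDR correction afterwards
_, p_fdr = mne.stats.fdr_correction(p_uncorrected)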

Thank you for your help, and I hope I am being clear.

  • MNE-Python version: 0.22.1
  • operating system: Windows 10

For this I would just loop over sensors; then you can correct the p-values manually afterward.

Thank you for the answer. I forgot to mention that I have already picked one sensor (C1), but that does not solve my problem, as I still have to run the statistical test for each time sample in the epoch (unless I misunderstand you).

So I have no choice but to loop over samples then? :frowning_face: I wish there were a parameter in the function to disable the tmax p-value correction.

I’m not 100% sure what you want to achieve, but to my understanding, you either use a permutation test or FDR for cases like the one you describe.
Have you seen/compared these MNE examples for FDR correction and permutation t-testing on sensor data?
For another comparison and an example implementation of the two, I also find this tutorial helpful (it’s with power and an ANOVA, but maybe you can just look at the implementation of the correction procedures?).
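Roughly, the FDR route boils down to computing uncorrected p-values with an ordinary test and correcting them afterwards. Here is a minimal sketch of my own (not copied from the example), assuming a subjects x samples array of paired differences like yours:

import mne
from scipy import stats

# paired differences, shape (n_subjects, n_samples)
T_obs, p_uncorrected = stats.ttest_1samp(data, popmean=0, axis=0)

# Benjamini-Hochberg FDR across all time samples
reject, p_fdr = mne.stats.fdr_correction(p_uncorrected, alpha=0.05, method='indep')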

Thank you for the answer and the links. The thing is that I’m coming from EEGLab and I want to do the “exact same” computation as a learning exercise to understand how to use MNE (I want to reproduce my results to be sure I did everything correctly). In EEGLab, the permutation statistics do not correct the p-values themselves; you can choose whether or not to apply FDR correction. Maybe I should not even try to replicate the EEGLab analyses and just go with the way MNE does it?
EDIT: To be honest, I don’t fully understand the tmax correction, which is why I want to confirm what I do by comparing with EEGLab.
Thanks again!

Hello,

I managed to understand the code behind permutation_t_test and to get around the tmax correction. I adapted the code, and this is what I got:

# imports so the adapted function runs outside the mne package
# (these are the helpers the original permutation_t_test relies on)
from math import sqrt

import numpy as np

from mne.parallel import parallel_func
from mne.utils import check_random_state, logger, verbose


def _surrogate_stat(X, X2, perms, dof_scaling):
    """Aux function for permutation_t_test (for parallel comp)."""
    n_samples = len(X)
    mus = np.dot(perms, X) / float(n_samples)
    stds = np.sqrt(X2[None, :] - mus * mus) * dof_scaling  # std with splitting
    surrogate_abs = np.abs(mus) / (stds / sqrt(n_samples))  # surrogates
    return surrogate_abs

@verbose
def permutation_t_test_no_correction(
        X, n_permutations=10000, tail=0, n_jobs=1, seed=None, verbose=None):
    from mne.stats.cluster_level import _get_1samp_orders  # MNE's private helper
    n_samples, n_tests = X.shape
    X2 = np.mean(X ** 2, axis=0)  # precompute moments
    mu0 = np.mean(X, axis=0)
    dof_scaling = sqrt(n_samples / (n_samples - 1.0))
    std0 = np.sqrt(X2 - mu0 ** 2) * dof_scaling  # get std with var splitting
    T_obs = np.mean(X, axis=0) / (std0 / sqrt(n_samples))
    rng = check_random_state(seed)
    orders, _, extra = _get_1samp_orders(n_samples, n_permutations, tail, rng)
    perms = 2 * np.array(orders) - 1  # from 0, 1 -> 1, -1
    logger.info('Permuting %d times%s...' % (len(orders), extra))
    parallel, my_surrogate_stat, n_jobs = parallel_func(_surrogate_stat, n_jobs)
    surrogate_abs = np.concatenate(parallel(my_surrogate_stat(X, X2, p, dof_scaling)
                                      for p in np.array_split(perms, n_jobs)))
    surrogate_abs = np.concatenate((surrogate_abs, np.abs(T_obs[np.newaxis, :])))
    H0 = np.sort(surrogate_abs, axis=0)
    # compute UNCORRECTED p-values
    if tail == 0:
        p_values = (H0 >= np.abs(T_obs)).mean(axis=0)
    elif tail == 1:
        p_values = (H0 >= T_obs).mean(axis=0)
    elif tail == -1:
        p_values = (-H0 <= T_obs).mean(axis=0)
    return T_obs, p_values, H0

This returns uncorrected p-values, and now I can do _, p_values = mne.stats.fdr_correction(p_values) as I wanted. Maybe this could be added as a parameter to the permutation_t_test function?
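For reference, putting it together on my data looks roughly like this (data is the subjects x samples difference array from my first post, and the seed value is arbitrary):

T_obs, p_uncorrected, H0 = permutation_t_test_no_correction(
    data, n_permutations=2000, seed=42)

# manual FDR correction on the uncorrected p-values
reject, p_fdr = mne.stats.fdr_correction(p_uncorrected, alpha=0.05)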

It seems to work correctly at least, thank you everyone for your help. :slight_smile:

Nice. I didn’t check the calculations in detail, but in the line H0 = np.sort(max_abs, axis=0), I guess max_abs needs to be replaced by surrogate_abs.

Indeed, thank you, I made that mistake while renaming the variables. I tried it again and it should be OK now. I edited my message. :slight_smile: