Identical p values after permutation cluster test?!

Hi,

I need some help either understanding the exact procedure of the permutation cluster test or finding a mistake in my code:
I run mne.stats.permutation_cluster_1samp_test() to find significant activity in waveforms that I extracted from a label.
Now I have realized that the resulting p value is exactly the same for quite a few of the conditions I tested, but not all of them.
I suspected some kind of copy & paste error in my code, but I cannot find any. I also double checked the input data: it is different for each condition.

I use the function in the following way:

import mne

F1, cluster1, p1, H01 = mne.stats.permutation_cluster_1samp_test(
    A[:, :, cond], n_permutations=5000, tail=0)

Here A (the waveforms of each subject) has the dimensions (subjects × time × conditions), and cond is replaced with a different index for each condition.
Usually I do this in a loop, but the error(?) also occurs when I test two conditions manually.

In case it is necessary: I’m using Ubuntu 16 and tried with MNE 0.17.0 and 0.20.0

Does anyone have an idea or explanation for that behavior?

Best,
Laura

First: I strongly recommend updating to MNE-Python 0.22 (current stable version) if you can.

Second: in a permutation test, the number of possible distinct values that a p-value can take on depends on the number of permutations you do. So it’s not unusual for some p-values to end up the same.
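To make that concrete, here is a toy illustration (plain NumPy, not MNE itself): a permutation p-value is a count of permuted statistics divided by the number of permutations, so it can only take values on a grid of multiples of 1/n_perm (or 1/(n_perm + 1), depending on the convention), and different datasets can easily land on the same value. The function name and data below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_perm = 9, 100

def sign_flip_pvalue(x, n_perm, rng):
    """Two-tailed one-sample permutation p-value via random sign flips."""
    t_obs = abs(x.mean())
    flips = rng.choice([-1, 1], size=(n_perm, x.size))
    t_perm = np.abs((flips * x).mean(axis=1))
    # proportion of permuted stats at least as extreme as the observed one
    return (t_perm >= t_obs).mean()

for cond in range(3):
    x = rng.normal(0.3, 1.0, n_subjects)
    p = sign_flip_pvalue(x, n_perm, rng)
    # every p is k / n_perm for some integer k, i.e. it lies on a coarse grid
    assert abs(p * n_perm - round(p * n_perm)) < 1e-9
    print(f"condition {cond}: p = {p:.3f}")
```

With only 100 permutations there are at most 101 possible p-values, so collisions across conditions are expected, not suspicious.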


Thank you for your response.

  1. You’re right, updating would be a good idea… I tend to avoid it because our firewall always makes this way more complicated than it needs to be :roll_eyes:

  2. After some more tries with different numbers of permutations, I finally figured out the problem: I ran the analysis not only on the data of all participants but also on subgroups that include only 8 or 9 participants each.
    So even though I chose to do 50000 permutations, only 128 were done for the smallest group (where most of the identical p values occurred).
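That count checks out, if I understand it correctly: in a two-tailed one-sample sign-flip test, each pattern of sign flips and its negation yield the same absolute statistic, so only half of the 2**n patterns are distinct. A quick sanity check:

```python
# Each of the 2**n sign-flip patterns has a mirror image (-1 * pattern)
# that produces the same two-tailed statistic, leaving 2**(n - 1)
# distinct permutations for n subjects.
n_subjects = 8
max_distinct_perms = 2 ** (n_subjects - 1)
print(max_distinct_perms)  # 128
```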

So now I have a different question:
Should you do a permutation analysis on such small sample sizes at all?
After all, it is usually recommended to use 1000+ permutations…

Thanks in advance!

Best,
Laura

In principle it is OK to do a permutation test on a relatively small number of observations (in your case, one observation = one participant). You can compute the uncertainty around your permutation-generated p-value estimates to get a better sense of how precise they are. A good explanation of that is the second part of this answer: https://stats.stackexchange.com/a/191343

The n in the denominator is the number of permutations, so with only 8 or 9 participants (128 permutations) you can expect the uncertainty of those p-values to be relatively large.
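As a rough sketch of that uncertainty, using the normal-approximation standard error sqrt(p(1 - p)/n) from the linked answer (the helper function and the 1.96 multiplier for a 95% interval are my own additions, not anything MNE provides):

```python
import math

def pvalue_ci(p_hat, n_perm):
    """Approximate 95% CI for a permutation p-value (normal approximation)."""
    se = math.sqrt(p_hat * (1 - p_hat) / n_perm)
    return max(0.0, p_hat - 1.96 * se), min(1.0, p_hat + 1.96 * se)

# Same estimated p-value, very different precision:
print(pvalue_ci(0.04, 128))   # wide interval with 128 permutations
print(pvalue_ci(0.04, 5000))  # much tighter with 5000 permutations
```

With 128 permutations the interval around p = 0.04 spills well past 0.05, so you cannot really distinguish "significant" from "not significant" at that resolution.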


Thank you, that helped me a lot! :slight_smile: