Utilizing Permutation Cluster Test with Correlation Matrix

If you have a question or issue with MNE-Python, please include the following info:

  • MNE-Python version: 0.22.0
  • operating system: Linux Debian

Hi everyone,

I’m trying to understand how I could apply a cluster permutation test to my data, and whether the MNE-Python implementation might be something I could use.

Briefly, I’m looking at the association of certain behavioural variables with the time-frequency decomposition of power using a linear mixed effects model. So, for n patients, I get a single matrix of correlation values, p-values, and z-scores from a linear mixed effects model that relates each pixel of power to the given behavioural variable.

This leads to an image containing a set of clusters that may relate to the variable (e.g., IQ, age). I’ve been wanting to run cluster permutation on these data; however, because there is only a single matrix per analysis (even though I have >10 patients), I can’t seem to use the cluster permutation tests in mne.stats, as they expect multiple “observations.”

I am interested in the significance of clusters within my correlation maps, but only have the one map that is produced. My understanding is that cluster permutation can still be done at the single-matrix level, but I can’t get this to work in MNE despite reshaping my data.

The array I feed in has dimensions (n_freqs, n_times), and each value is a model correlation coefficient. I have tried prepending a third dimension so that it is (1, n_freqs, n_times), to indicate that there is only one observation, but this doesn’t work with either permutation_cluster_1samp_test or permutation_cluster_test.

Any ideas?

One approach would be to threshold your map (based on your z- or p-values, maybe) and then use scipy.ndimage.label to find whether there are any clusters. This won’t run any statistics, though. Then you could maybe see how likely it is to find clusters of a given size by shuffling the pixels 1000 times and seeing how often you get clusters of that size. The MNE-Python functions are probably not helpful for that approach.
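The threshold-label-shuffle idea above can be sketched with numpy and scipy (the function names and toy z-map are mine; as noted further down the thread, pixel shuffling ignores spectro-temporal correlations, so treat this as a rough heuristic rather than a valid test):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def max_cluster_size(mask):
    """Size of the largest contiguous cluster in a boolean map."""
    labels, n_clusters = ndimage.label(mask)
    if n_clusters == 0:
        return 0
    return int(np.bincount(labels.ravel())[1:].max())

def pixel_shuffle_null(zmap, threshold, n_perm=1000, rng=rng):
    """Null distribution of max cluster size under pixel shuffling."""
    null = np.empty(n_perm, dtype=int)
    flat = zmap.ravel().copy()
    for i in range(n_perm):
        rng.shuffle(flat)  # destroys spatial structure; heuristic only
        null[i] = max_cluster_size(flat.reshape(zmap.shape) > threshold)
    return null

# Toy example: one contiguous blob of high z-scores.
zmap = np.zeros((20, 30))
zmap[5:10, 10:20] = 3.0
observed = max_cluster_size(zmap > 2.0)   # the 5x10 blob: 50 pixels
null = pixel_shuffle_null(zmap, 2.0)
p_rough = (null >= observed).mean()
```

Shuffling scatters the 50 suprathreshold pixels across the map, so the null cluster sizes come out far smaller than the observed blob.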

@agramfort or @larsoner may have better suggestions?


Then you could maybe see how likely it is to find clusters of a given size, by shuffling the pixels 1000 times and seeing how often you get clusters of that size. The MNE-Python functions are probably not helpful for that approach.

I don’t think this will end up being valid, because it destroys any spectro-temporal structure that exists within the time-frequency decomposition (neighboring time points and frequency bins will be correlated, in all likelihood both because of the decomposition itself and because of correlations in the data).

You can’t use the built-in MNE-Python 1-sample clustering code because it uses sign flips to do permutations, i.e., it assumes you’re testing against a zero mean (e.g., as in a paired t-test). Basically, you need to figure out what exchangeability under your null hypothesis allows you to shuffle.
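To see why a single observation defeats the sign-flip scheme: with n observations there are only 2**n distinct relabelings, so with n = 1 the null distribution collapses to two identical values. A minimal numpy sketch of the idea (not the MNE implementation):

```python
import numpy as np
from itertools import product

def sign_flip_null(X):
    """Max |mean| across features for every sign-flip relabeling of X.

    X has shape (n_observations, n_features); there are 2**n_observations
    distinct sign flips, the permutation scheme a 1-sample test relies on.
    """
    n_obs = X.shape[0]
    null = []
    for signs in product((1.0, -1.0), repeat=n_obs):
        flipped = np.asarray(signs)[:, None] * X
        null.append(np.abs(flipped.mean(axis=0)).max())
    return np.asarray(null)

X1 = np.array([[1.0, 2.0, 3.0]])  # a single "observation"
X5 = np.random.default_rng(0).normal(size=(5, 3))
print(len(sign_flip_null(X1)))    # 2 relabelings -> degenerate null
print(len(sign_flip_null(X5)))    # 32
```

With one observation, flipping its sign leaves |mean| unchanged, so both null values are equal and no p-value below 1 is attainable.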

I’m not 100% sure it’s right, but one thing you could try is to do the clustering as @drammock suggested to obtain your veridical clusters, but when you go to do permutations with a maximum statistic (i.e., taking the maximum cluster size in each permutation to form the null distribution that eventually yields p-values), you could shuffle the values you’re correlating with (i.e., the behavioral or clinical measure of interest) across subjects. I think that would be allowed under a null hypothesis that the TF variables are uncorrelated with your behavioral/clinical measure. For at least some background see here.
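A hedged sketch of that suggestion, using a plain Pearson correlation map as a stand-in for the mixed-model output (all function names and the synthetic data are mine, not MNE API):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def corr_map(power, behavior):
    """Pearson correlation of a behavioral measure with each TF pixel.

    power: (n_subjects, n_freqs, n_times); behavior: (n_subjects,).
    Stands in for the mixed-model coefficients in the original analysis.
    """
    p = power - power.mean(axis=0)
    b = behavior - behavior.mean()
    num = (p * b[:, None, None]).sum(axis=0)
    den = np.sqrt((p ** 2).sum(axis=0) * (b ** 2).sum())
    return num / den

def max_cluster_size(mask):
    labels, n = ndimage.label(mask)
    return 0 if n == 0 else int(np.bincount(labels.ravel())[1:].max())

def cluster_perm_test(power, behavior, threshold=0.7, n_perm=500, rng=rng):
    """Max-statistic cluster permutation: shuffle behavior across subjects."""
    observed = max_cluster_size(np.abs(corr_map(power, behavior)) > threshold)
    null = np.empty(n_perm, dtype=int)
    b = behavior.copy()
    for i in range(n_perm):
        rng.shuffle(b)  # exchangeable under H0: power unrelated to behavior
        null[i] = max_cluster_size(np.abs(corr_map(power, b)) > threshold)
    p_value = (null >= observed).mean()
    return observed, null, p_value

# Synthetic check: a TF patch whose power tracks the behavioral measure.
n_sub, n_freqs, n_times = 12, 16, 24
behavior = rng.normal(size=n_sub)
power = rng.normal(size=(n_sub, n_freqs, n_times))
power[:, 4:8, 6:12] += 3.0 * behavior[:, None, None]
observed, null, p_value = cluster_perm_test(power, behavior)
```

The key point is that only the behavioral labels are permuted, so the spectro-temporal correlation structure of the maps is preserved in every permutation.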

Thank you, this is very helpful. I’ve been working through the Statistical Inference tutorial on the MNE website as I tackled this problem, and it’s been a great resource.

I agree; given the data structure, I think approaching the problem this way would be a bit too complex or prone to bias.

I’ve actually worked on a random-field-based correction script with a Gaussian kernel instead. Given the z-score matrix that I have, I think this approach makes the most sense.
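Roughly, the idea is to use the map’s smoothness (the FWHM of the Gaussian kernel, in pixels) to approximate a family-wise p-value via the expected Euler characteristic of a 2D Gaussian field. A minimal sketch of that general idea, keeping only the dominant 2D term (this is the textbook formula, not my exact script):

```python
import numpy as np

def rft_corrected_p(u, n_pixels, fwhm):
    """Approximate family-wise p-value for threshold u in a smooth 2D z-map.

    Uses the 2D Euler-characteristic density for unit-variance Gaussian
    random fields; n_pixels is the map size, fwhm the smoothness in pixels.
    """
    resels = n_pixels / fwhm ** 2  # resolution elements
    rho2 = (4 * np.log(2)) * (2 * np.pi) ** -1.5 * u * np.exp(-u ** 2 / 2)
    return resels * rho2

# Higher thresholds and smoother maps both lower the corrected p-value.
p_a = rft_corrected_p(3.0, 16 * 24, fwhm=3.0)
p_b = rft_corrected_p(4.0, 16 * 24, fwhm=3.0)
```

The approximation is only good for reasonably high thresholds u and reasonably smooth maps, which is why the Gaussian kernel matters.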

If you are familiar with those techniques I would be happy to hear feedback about it!

Thanks again for the quick responses.