I’m confused about how `mne.stats.permutation_t_test` works - specifically regarding paired permutation t-tests with multiple comparisons.
I have read the source (mne-python/permutations.py at maint/1.0 · mne-tools/mne-python · GitHub) and the tutorial (Statistical inference — MNE 1.0.3 documentation), but haven’t managed to answer my questions!
I have applied multiple artefact-correction methods to a dataset and then extracted relevant metrics (say, SNR). I now want to test for statistical differences between the methods, but I can’t use parametric tests because my data violate their assumptions, so I want to use paired permutation t-tests instead.
So, to ensure the data are paired before passing them to the function, I first subtract each method’s SNR from the baseline SNR, ending up with a 36x4 matrix: column 1 is Base − Method 1, column 2 is Base − Method 2, etc., each with 36 observations. If I pass this matrix to the function, is it doing what I expect, i.e. testing whether each method differs significantly from the baseline while correcting for the multiple methods I am checking?
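To make my mental model concrete, here is a small numpy-only sketch of what I *believe* the function does under the hood (sign-flipping permutations with tmax correction across columns). This is just my understanding, not MNE’s actual implementation, and the data are simulated:

```python
import numpy as np

def paired_perm_t_test(X, n_permutations=5000, seed=0):
    """One-sample permutation t-test on paired differences with
    tmax correction across columns -- my understanding of what
    mne.stats.permutation_t_test does, not its actual code.
    X: (n_obs, n_tests) matrix of paired differences."""
    rng = np.random.default_rng(seed)
    n_obs, n_tests = X.shape

    def t_stat(d):
        return d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n_obs))

    T_obs = t_stat(X)
    # Under H0 each paired difference is symmetric about 0, so we
    # randomly flip the sign of whole observations (rows), which
    # preserves the correlation structure between columns.
    max_null = np.empty(n_permutations)
    for i in range(n_permutations):
        signs = rng.choice([-1.0, 1.0], size=(n_obs, 1))
        max_null[i] = np.abs(t_stat(X * signs)).max()
    # tmax: each observed |t| is compared against the null
    # distribution of the *maximum* |t| across all tests, which is
    # what controls the family-wise error rate over the 4 methods.
    p_values = (max_null[:, None] >= np.abs(T_obs)).mean(axis=0)
    return T_obs, p_values

# Simulated 36x4 matrix of baseline-minus-method SNR differences.
rng = np.random.default_rng(1)
X = rng.normal(0.8, 1.0, size=(36, 4))
T_obs, p_vals = paired_perm_t_test(X)
print(T_obs, p_vals)
```

If this sketch matches what the MNE function does, then my 36x4 input should give me one FWER-corrected p-value per method.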
Statistics is not my strong suit, so any input is appreciated, and if anything in this post is unclear I am happy to clarify. If the function isn’t doing what I thought, is anyone aware of a Python implementation that does? My boss has suggested PALM (Permutation Analysis of Linear Models), but that is a MATLAB implementation and I’d prefer to stick with Python if possible, to keep everything open source.
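One alternative I’ve been considering, in case the MNE function doesn’t do what I want, is running `scipy.stats.permutation_test` per column and correcting by hand. My understanding from the SciPy docs is that `permutation_type="samples"` with a single sample randomly flips observation signs, which is the paired one-sample test; the data below are simulated and the Bonferroni step is my own addition, since SciPy doesn’t handle the multiple-comparison part:

```python
import numpy as np
from scipy import stats

# Simulated 36x4 matrix of baseline-minus-method SNR differences.
rng = np.random.default_rng(0)
X = rng.normal(0.5, 1.0, size=(36, 4))

def mean_stat(d, axis):
    # Statistic on the paired differences; sign-flipping the
    # observations permutes this under H0 (mean difference = 0).
    return d.mean(axis=axis)

# One-sample sign-flip permutation test per column.
p_raw = np.array([
    stats.permutation_test(
        (col,), mean_stat, permutation_type="samples",
        vectorized=True, n_resamples=9999,
    ).pvalue
    for col in X.T
])

# Bonferroni correction across the 4 methods -- more conservative
# than a tmax correction, but simple and valid.
p_corr = np.minimum(p_raw * X.shape[1], 1.0)
print(p_raw, p_corr)
```

This loses the tmax-style correction (which exploits the correlation between methods), so it would be a fallback rather than a full replacement.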
- MNE version: 1.0.3
- operating system: Debian