Hi all again,
Sorry for the influx of posts recently, but I need advice regarding the use of mne.stats.permutation_cluster_test to run analysis over multiple sensors.
For one sensor on one participant's data, following the MNE API example, the single-channel data is fed in through the following steps:
decim = 2
freqs = np.arange(7, 30, 3)  # define frequencies of interest
n_cycles = 1.5

tfr_epochs_1 = tfr_morlet(epochs_condition_1, freqs, n_cycles=n_cycles,
                          decim=decim, return_itc=False, average=False)
tfr_epochs_2 = tfr_morlet(epochs_condition_2, freqs, n_cycles=n_cycles,
                          decim=decim, return_itc=False, average=False)

# Note: tfr_epochs_1.data.shape = (56, 1, 8, 211)
# (56 epochs, 1 channel, 8 freqs, 211 time points)
tfr_epochs_1.apply_baseline(mode='ratio', baseline=(None, 0))
tfr_epochs_2.apply_baseline(mode='ratio', baseline=(None, 0))

epochs_power_1 = tfr_epochs_1.data[:, 0, :, :]  # only 1 channel, as 3D matrix
epochs_power_2 = tfr_epochs_2.data[:, 0, :, :]  # only 1 channel, as 3D matrix

threshold = 6.0
T_obs, clusters, cluster_p_values, H0 = \
    permutation_cluster_test([epochs_power_1, epochs_power_2],
                             out_type='mask', n_permutations=100,
                             threshold=threshold, tail=0)
In my case, I wanted to analyse multiple channels (e.g. 10) in a given cluster over n participants, so I did the following:
# In the API example, tfr_epochs_1 holds all epochs for one participant.
# Here I average all epochs within a participant so I can then run a
# group-level test on these individual averages.
# Note: tfr_epochs_1.data.shape = (44, 10, 10, 1501)
# (44 epochs, 10 channels, 10 freqs, 1501 time points)
this_tfr1 = tfr_epochs_1.average()
this_tfr2 = tfr_epochs_2.average()
# this_tfr1.data.shape is now (10, 10, 1501)

# My understanding is that I can then take the mean over the 10 sensors,
# which gives epochs_power_1 a shape of (10, 1501)
epochs_power_1 = this_tfr1.data.mean(axis=0)
epochs_power_2 = this_tfr2.data.mean(axis=0)

# Append this participant's data to the group lists for permutation later on.
all_epochs_set.append(epochs_power_1)
all_epochs_set2.append(epochs_power_2)

# After the loop over all participants is complete, run the stats.
T_obs, clusters, cluster_p_values, H0 = mne.stats.permutation_cluster_test(
    [np.array(all_epochs_set), np.array(all_epochs_set2)],
    n_permutations=1024, threshold=None, tail=1)
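As a quick sanity check on the shapes above (a minimal sketch with random data standing in for the real TFRs, so the array names are just placeholders):

```python
import numpy as np

# Dummy data with the same shape as this_tfr1.data:
# (10 channels, 10 freqs, 1501 time points)
rng = np.random.default_rng(0)
avg_tfr = rng.standard_normal((10, 10, 1501))

# Averaging over axis 0 collapses the channel dimension,
# leaving (n_freqs, n_times)
sensor_mean = avg_tfr.mean(axis=0)
print(sensor_mean.shape)  # (10, 1501)
```

So each participant contributes one (freqs, times) map, and stacking them with np.array gives the (n_subjects, n_freqs, n_times) arrays the permutation test expects.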
The script ran fine. For context: I was interested in alpha activity in a given experimental condition, and running the data through this process produced significant alpha activity in every single cluster of channels I tested (6 clusters, 10 channels each). Note that each cluster covered a different part of the brain: frontal, left and right temporal, sensorimotor, occipital, and parietal regions. It also seemed that the more channels I included, the larger the portion of significant alpha activity became.
I was wondering, then: is there a need to first normalise the data across channels before calculating the mean activity over channels? My concern is that this alpha activity may have been driven by alpha spikes in a few channels within each cluster, which would produce significant results everywhere.
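One way to check whether a few high-power channels dominate the channel mean is to z-score each channel's power over time before averaging. This is only a sketch of the idea, not MNE's recommended pipeline; `avg_tfr` is a stand-in for one participant's averaged TFR of shape (n_channels, n_freqs, n_times):

```python
import numpy as np
from scipy.stats import zscore

rng = np.random.default_rng(1)
# Dummy averaged TFR: (10 channels, 10 freqs, 1501 time points);
# channel 0 is scaled up to mimic an "alpha spike" in one channel
avg_tfr = rng.standard_normal((10, 10, 1501))
avg_tfr[0] *= 50

# Without normalisation, the mean over channels is dominated by channel 0
raw_mean = avg_tfr.mean(axis=0)

# z-score each channel along the time axis so every channel contributes
# on a comparable scale, then average over channels
norm_tfr = zscore(avg_tfr, axis=-1)
norm_mean = norm_tfr.mean(axis=0)
```

Comparing results with and without this step would show how much of the effect comes from a few large-amplitude channels; whether z-scoring is appropriate here (as opposed to, say, a per-channel baseline correction) is a separate methodological question.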