I have a within-subjects experiment with N=20 and 3 conditions, and I want to run a spatio-temporal cluster permutation test. I have an evoked -ave.fif file for each participant, but I'm confused about how to construct the input array X. Here is what I'm currently doing:
import glob
import numpy as np
import mne

event_id = ['Neutral', 'Rise', 'Fall']
# sorted list of the evoked files for the 20 subjects
evokeds_files = sorted(glob.glob('./path_to_evokeds/*-ave.fif'))
X = []
for subj in evokeds_files:
    X.append([mne.read_evokeds(subj, condition=event_name, verbose=False).data for event_name in event_id])
# reorder each subject's array from (n_cond, n_chan, n_time) to (n_cond, n_time, n_chan)
X = [np.transpose(x, (0, 2, 1)) for x in X]
I'm not sure if what I'm doing is correct, because the clusters returned are either a huge cluster of all electrodes, or very small clusters of 1 or 2 electrodes.
Moreover, is there a way to visualize clusters for a particular time window, say between 250 ms and 350 ms (for the P300 ERP), or to visualize clusters only for a single channel?
The desired shape for X is, according to the docstring, (n_observations, p[, q], n_vertices).
Here n_observations will be the subjects, and channels will be our "vertices". That means "time" and "condition" are our p and q dimensions. So your X should end up as (n_subj, n_cond, n_time, n_chan), or, given what you've told us, (20, 3, n_time, n_chan). So something like this should work:
X = list()
for subj in subjs:
    this_x = list()
    for cond in conditions:
        evk = mne.read_evokeds(subj, condition=cond)
        this_x.append(evk.data.T)  # transpose to (n_time, n_chan)
    X.append(this_x)
I'm pretty sure that's equivalent to what you're already doing, but double-check. If they are the same, you'll have to dig into the data to see why the clusters don't look how you expect them to.
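One quick way to double-check is to convert both versions to arrays and compare them directly (a minimal sketch; X_orig and X_new are hypothetical names for the list built by your snippet and by the loop above, assuming the same file order and condition order):

import numpy as np

# X_orig: the list from your original snippet; X_new: the list from the loop above
arr_orig = np.asarray(X_orig)
arr_new = np.asarray(X_new)
print(arr_orig.shape, arr_new.shape)   # both should be (20, 3, n_time, n_chan)
print(np.allclose(arr_orig, arr_new))  # True if the two constructions agree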
For your other questions: I don't understand what it would mean to "visualize clusters only for a single channel". A cluster is, by definition, a collection of at least 2 vertices/channels. For particular time windows, yes, it ought to be possible to do that; we have a helper function mne.stats.summarize_clusters_stc for when the clustering is done in source space, but not for sensors, I'm afraid. I don't have time at the moment to work up an example of how you would do that for sensor data (partly because I've never clustered in sensor space, so I can't just copy some old code and tweak it to work with fake/sample data).
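In rough outline, though, one post-hoc approach for the 250-350 ms question would be to filter the returned clusters by their time indices before plotting. An untested sketch, assuming the clustering was run on arrays of shape (n_obs, n_times, n_channels) with out_type='indices' (so that each cluster is a tuple of (time_indices, channel_indices)), and that times, clusters, and cluster_p_values come from your evoked and from the cluster test output:

import numpy as np

# tmin/tmax: P300 window of interest, in seconds
tmin, tmax = 0.250, 0.350
win_inds = np.where((times >= tmin) & (times <= tmax))[0]

for clu_idx, (t_inds, ch_inds) in enumerate(clusters):
    in_window = np.isin(t_inds, win_inds).any()
    if cluster_p_values[clu_idx] < 0.05 and in_window:
        chans = np.unique(ch_inds)
        print(f'cluster {clu_idx}: p={cluster_p_values[clu_idx]:.3f}, '
              f'{len(chans)} channels overlap {tmin}-{tmax} s')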
@mscheltienne @mmagnuski have either of you done sensor-space clustering, and do you have some useful sample code for visualizing the results?
I meant something like temporal-only clustering instead of spatio-temporal clustering. So if I'm interested in only a parietal channel (say Pz), I could "select" it and find temporal clusters in the EEG time course of Pz.
Ah yes, OK, that is certainly possible. You can, as you say, pick just the channel you're interested in, such that your n_vertices dimension has just 1 element. Don't completely remove that dimension, though! The shape should be (20, 3, n_times, 1) for it to work properly.
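For example, something along these lines (a sketch, not tested; it reuses the subjs/conditions placeholders from the loop above and assumes the channel is named 'Pz' in your data):

import numpy as np
import mne

X = list()
for subj in subjs:               # subjs: your list of -ave.fif paths
    this_x = list()
    for cond in conditions:      # conditions: ['Neutral', 'Rise', 'Fall']
        evk = mne.read_evokeds(subj, condition=cond, verbose=False)
        evk.pick(['Pz'])              # keep only the Pz channel
        this_x.append(evk.data.T)     # shape (n_times, 1)
    X.append(this_x)
print(np.asarray(X).shape)            # expect (20, 3, n_times, 1)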