I have a within-subjects experiment with N=20 and 3 conditions, and I want to run a spatio-temporal cluster permutation test. I have an evoked `-ave.fif` file for each participant, but I'm confused about how to construct the input array `X`. Here is what I'm currently doing:
```python
import glob
import mne
import numpy as np

event_id = ['Neutral', 'Rise', 'Fall']
# returns the evoked files of the 20 subjects
evokeds_files = sorted(glob.glob('./path_to_evokeds/*-ave.fif'))

X = []
for subj in evokeds_files:
    X.append([mne.read_evokeds(subj, condition=event_name, verbose=False).data
              for event_name in event_id])
X = [np.transpose(x, (0, 2, 1)) for x in X]  # move channels to the last axis
```
I'm not sure whether this is correct, because the clusters returned are either one huge cluster spanning all electrodes or very small clusters of 1–2 electrodes.

Also, is there a way to visualize clusters for a particular time window, say 250–350 ms (for the P300 ERP), or to visualize clusters for only a single channel?
The desired shape for `X` is, according to the docstring, `(n_observations, p[, q], n_vertices)`. Here `n_observations` will be the subjects, and channels will be our "vertices". That means "time" and "condition" are our `p` and `q` dimensions, so your `X` should end up as `(n_subj, n_cond, n_time, n_chan)`, or given what you've told us, `(20, 3, n_time, n_chan)`. So something like this should work:
```python
X = list()
for subj in subjs:
    this_x = list()
    for cond in conditions:
        evk = mne.read_evokeds(subj, condition=cond)
        this_x.append(evk.data.T)  # (n_times, n_channels)
    X.append(this_x)
X = np.array(X)  # (n_subj, n_cond, n_times, n_chan)
```
I’m pretty sure that’s equivalent to what you’re already doing, but double-check. If they are the same, you’ll have to dig into the data to see why the clusters don’t look how you expect them to.
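One quick way to check that the two approaches really are equivalent is to run the reshaping on fake data and compare shapes. A numpy-only sketch (all dimensions below are made up, standing in for your real evoked data):

```python
import numpy as np

# Hypothetical dimensions standing in for real data
n_subj, n_cond, n_chan, n_time = 20, 3, 32, 500

# Each read_evokeds(...).data array is (n_channels, n_times);
# simulate one per subject and condition
X = [[np.random.randn(n_chan, n_time) for _ in range(n_cond)]
     for _ in range(n_subj)]

# Stack and move channels to the last axis ("vertices")
X = np.stack([np.stack(conds) for conds in X])  # (n_subj, n_cond, n_chan, n_time)
X = X.transpose(0, 1, 3, 2)                     # (n_subj, n_cond, n_time, n_chan)
print(X.shape)  # (20, 3, 500, 32)
```

If both versions of your loading code end up at `(20, 3, n_time, n_chan)`, the array construction isn't the problem.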
For your other questions: I don't understand what it would mean to "visualize clusters only for a single channel". A cluster is, by definition, a collection of at least 2 vertices/channels. For particular time windows, yes, it ought to be possible; we have a helper function `mne.stats.summarize_clusters_stc` for when the clustering is done in source space, but not for sensors, I'm afraid. I don't have time at the moment to work up an example of how you would do that for sensor data (partly because I've never clustered in sensor space, so I can't just copy some old code and tweak it to work with fake/sample data).
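One low-tech option in the meantime: when the cluster functions return indices, each cluster comes back as a pair of `(time_indices, channel_indices)` arrays, so you can filter the returned clusters to those overlapping your window before plotting. A rough sketch with fabricated cluster data (the time vector and clusters below are made up):

```python
import numpy as np

# Made-up time vector in seconds and two fake clusters, each a
# (time_indices, channel_indices) pair as returned with out_type='indices'
times = np.linspace(-0.1, 0.6, 351)
clusters = [
    (np.array([100, 101, 102]), np.array([5, 5, 6])),   # ~0.10 s
    (np.array([200, 201]), np.array([10, 11])),         # ~0.30 s
]

# Keep only clusters that have at least one sample inside 250-350 ms
tmin, tmax = 0.25, 0.35
window = (times >= tmin) & (times <= tmax)
in_window = [c for c in clusters if window[c[0]].any()]
print(len(in_window))  # 1
```

You can then pass only `in_window` to whatever plotting you do, instead of the full cluster list.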
@mscheltienne @mmagnuski have either of you done sensor-space clustering, and have some useful sample code for visualizing the results?
I meant something like temporal-only clustering instead of spatio-temporal clustering. So if I'm interested in only a parietal channel (say Pz), I could 'select' it and find temporal clusters in Pz's EEG time course.
Ah yes, OK. That is certainly possible. You can, as you say, pick just the channel you're interested in, so that your `n_vertices` dimension is just 1 element. Don't completely remove that dimension though! The shape should be `(20, 3, n_times, 1)` for it to work properly.
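On the "don't remove the dimension" point: indexing the channel axis with a list (rather than a plain integer) keeps it as a length-1 axis. A sketch with made-up data and a made-up channel list:

```python
import numpy as np

# Hypothetical: X already built as (n_subj, n_cond, n_time, n_chan)
X = np.random.randn(20, 3, 500, 32)
ch_names = ['Fz', 'Cz', 'Pz'] + ['ch%d' % i for i in range(29)]  # fake montage

pz_idx = ch_names.index('Pz')
X_pz = X[..., [pz_idx]]   # list index keeps the axis: (20, 3, 500, 1)
# X[..., pz_idx] would instead drop it and give (20, 3, 500)
print(X_pz.shape)  # (20, 3, 500, 1)
```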