Spatio-temporal cluster permutation at the group level

Hello.

I have a within-subjects experiment with N=20 and 3 conditions, and I want to run a spatio-temporal cluster permutation test. I have an evoked -ave.fif file for each participant, but I'm confused about how to construct the input array X. Here is what I'm currently doing:

import glob

import mne
import numpy as np

event_id = ['Neutral', 'Rise', 'Fall']

evokeds_files = sorted(glob.glob('./path_to_evokeds/*-ave.fif'))  # list of the evoked files of the 20 subjects

X = []
for subj in evokeds_files:
    X.append([mne.read_evokeds(subj, condition=event_name, verbose=False).data for event_name in event_id])

X = [np.transpose(x, (0, 2, 1)) for x in X]  # per subject: (n_cond, n_channels, n_times) -> (n_cond, n_times, n_channels)

I'm not sure if what I'm doing is correct, because the clusters returned are either one huge cluster containing all electrodes, or very small clusters of 1 or 2 electrodes.

Moreover, is there a way to visualize clusters for a particular time window, say between 250–350 ms (for the P300 ERP), or to visualize clusters for only a single channel?

Thanks!

  • You have 20 subjects and 3 conditions
  • Your evoked data shape is (n_channels, n_times)
  • The desired shape for X is, according to the docstring, (n_observations, p[, q], n_vertices)

Here n_observations will be the subjects, and channels will be our "vertices". That means "time" and "condition" are our p and q dimensions. So your X should end up as (n_subj, n_cond, n_time, n_chan), or, given what you've told us, (20, 3, n_time, n_chan). So something like this should work:

X = list()
for subj in subjs:  # subjs: your list of -ave.fif paths (evokeds_files)
    this_x = list()
    for cond in conditions:  # conditions: your event_id list
        evk = mne.read_evokeds(subj, condition=cond)
        this_x.append(evk.data.T)  # (n_channels, n_times) -> (n_times, n_channels)
    X.append(this_x)  # nests up to (n_subj, n_cond, n_times, n_chan)

I'm pretty sure that's equivalent to what you're already doing, but double-check. If they are the same, you'll have to dig into the data to see why the clusters don't look how you expect them to.
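
If it helps, here's a minimal sketch of that double-check; the only assumption is that you keep the array built by your first snippet around under a different name (say X_orig) so the two constructions can be compared directly:

import numpy as np

X_orig = np.array(X_orig)  # from your original snippet, after the transpose
X_new = np.array(X)        # from the nested loop above

print(X_new.shape)                   # expect (20, 3, n_times, n_chan)
print(np.allclose(X_orig, X_new))    # True if the two constructions are equivalent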

For your other questions, I don't understand what it would mean to "visualize clusters only for a single channel". A cluster is, by definition, a collection of at least 2 vertices/channels. For particular time windows, yes, it ought to be possible to do that; we have a helper function mne.stats.summarize_clusters_stc for when the clustering is done in source space, but not for sensors I'm afraid. I don't have time at the moment to work up an example of how you would do that for sensor data (partly because I've never clustered in sensor space, so I can't just copy some old code and tweak it to work with fake/sample data).

@mscheltienne @mmagnuski have either of you done sensor-space clustering, and do you have some useful sample code for visualizing the results?

Sorry, I don't have code snippets matching this use case. Good luck!

Thank you! It's clearer now.

I meant something like temporal-only clustering instead of spatio-temporal clustering. So if I'm interested in only a parietal channel (say Pz), I could 'select' it and find temporal clusters in the EEG time course of Pz.

Ah yes, OK. That is certainly possible. You can, as you say, pick just the channel you're interested in, so that your n_vertices dimension has just 1 element. Don't completely remove that dimension, though! The shape should be (20, 3, n_times, 1) for it to work properly.
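
Something along these lines should get you that shape (a rough sketch, assuming the same evokeds_files and event_id lists from your first post, and that the channel is named 'Pz' in your data):

import mne
import numpy as np

ch_name = 'Pz'  # the single parietal channel of interest

X = list()
for subj in evokeds_files:
    this_x = list()
    for cond in event_id:
        evk = mne.read_evokeds(subj, condition=cond, verbose=False)
        ch_idx = evk.ch_names.index(ch_name)  # position of Pz in this evoked
        this_x.append(evk.data[[ch_idx]].T)   # shape (n_times, 1): the channel axis is kept
    X.append(this_x)
X = np.array(X)
print(X.shape)  # expect (20, 3, n_times, 1)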