Thanks for your replies. Is this the right way to use spatial_exclude? I
think I might be missing something, because the clusters are *kind of* in
the ROI I would want them to be in but also spill out of the ROI:
Make sure to get all of the mappings involved right:
1) hemisphere vertex numbers to whole-brain data indices (e.g. for ico-5 / fsaverage the left-hemisphere data indices go up to 10241 and the right-hemisphere ones up to 20483, even though each hemisphere only has 10242 vertices, numbered 0-10241)
2) the label's vertices vs. the vertices actually used by the source space (use label.get_vertices_used to be sure)
Importantly, the clustering test expects indices into the data array you pass in, not vertex numbers.
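
For concreteness, here is a minimal sketch of that mapping, assuming an fsaverage ico-5 source space src, a left-hemisphere label, and data whose spatial axis concatenates left- then right-hemisphere vertices (the variable names are just placeholders):

    import numpy as np

    lh_verts = src[0]['vertno']   # 10242 vertices for fsaverage ico-5
    rh_verts = src[1]['vertno']
    # restrict the label to vertices actually present in the source space
    used = label.get_vertices_used(lh_verts)
    # map hemisphere vertex numbers to whole-brain data indices
    roi_idx = np.searchsorted(lh_verts, used)
    # for a right-hemisphere label, offset by the number of lh vertices instead:
    # roi_idx = len(lh_verts) + np.searchsorted(rh_verts, label.get_vertices_used(rh_verts))
    n_total = len(lh_verts) + len(rh_verts)
    exclude = np.setdiff1d(np.arange(n_total), roi_idx)  # everything outside the ROI

You would then pass spatial_exclude=list(exclude) to spatio_temporal_cluster_1samp_test, together with an adjacency matrix (called connectivity in older MNE versions) built for the full source space.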
Is there a way to use spatial_exclude to "downsample" the number of vertices? That is, I'd like to run spatiotemporal cluster stats across the whole brain (not using an ROI) but use fewer than 20484 vertices. If I simply drop every other vertex, for example, will that retain an even spacing of vertices?
Just sub-selecting vertices by excluding some proportion of them (even if
evenly spatially sampled) will distort the signals that you're representing
a bit. It would be better to build another, smaller source space, and morph
from the full one to the smaller one, as the morphing procedure is designed
to preserve the overall activation levels. But I would try to avoid doing
this, as you may lose some effective spatial resolution.
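
If you do decide to go that route anyway, a minimal sketch (assuming your estimates are SourceEstimates on fsaverage at ico-5; stc and subjects_dir are placeholder names) would be:

    import mne

    # morph an fsaverage ico-5 estimate onto the coarser ico-4 grid of the same subject
    morph = mne.compute_source_morph(
        stc, subject_from='fsaverage', subject_to='fsaverage',
        spacing=4, subjects_dir=subjects_dir)
    stc_ico4 = morph.apply(stc)  # 2562 vertices per hemisphere instead of 10242

The adjacency matrix passed to the cluster test then has to match the ico-4 grid as well (e.g. built via mne.spatial_tris_adjacency(mne.grade_to_tris(4))).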