Vertices in source reconstruction

  • MNE-Python version: 0.21.0
  • operating system: Windows 10 Home

Hi,

I’ve recently run source localisation and am trying to figure out whether there is a way to threshold the results by the number of active adjacent vertices instead of by a p-value. I haven’t found anything about this online so far.
The idea is to get the active vertices around the peak, because the results do not seem to survive multiple-comparison correction. At the moment I compute f_oneway (scipy.stats) for each vertex and then show only the significant p-values:

import numpy as np
import scipy.stats

def computeStatistic(x, y, z=None):
    # One-way ANOVA per vertex; x, y (and optionally z) are
    # (n_vertices, n_subjects) arrays, one per group.
    if z is None:
        print('comparing 2 groups')
    else:
        print('comparing 3 groups')

    n_vertices = x.shape[0]
    stats_array = np.zeros(n_vertices)
    pval_array = np.zeros(n_vertices)

    for i in range(n_vertices):
        if z is None:
            fval, pval = scipy.stats.f_oneway(x[i, :], y[i, :])
        else:
            fval, pval = scipy.stats.f_oneway(x[i, :], y[i, :], z[i, :])
        stats_array[i] = fval
        pval_array[i] = pval

    return stats_array, pval_array

# average over time, then run the per-vertex ANOVA across the three groups
time_values = ['avg']
stc_fsave_all_real_avg = np.mean(stc_fsave_all_real, axis=2)
fvalues_r, pvalues_r = computeStatistic(stc_fsave_all_real_avg[2, :, :],
                                        stc_fsave_all_real_avg[1, :, :],
                                        stc_fsave_all_real_avg[0, :, :])
print('Significant p vals real: ', np.where(pvalues_r <= 0.05))

# find the peak on the left hemisphere; vert_as_index=True returns an index
# into lh_vertno rather than the vertex number itself
peak_vertex, peak_time = stc_f_real.get_peak(hemi='lh', vert_as_index=True)
print('peak_vertex: ', peak_vertex)

# get the surface vertex number at the peak
peak_vertex_surf = stc_f_real.lh_vertno[peak_vertex]

Thanks a lot! 🙂


Hi @ruxt, did you try a cluster-based permutation test? It does something similar to what you describe and corrects for multiple comparisons effectively. Here is an example using a cluster-based test on sensor-level data (there is also an example with a source-space cluster-based test, but I can’t find it at the moment):
https://mne.tools/stable/auto_examples/stats/sensor_permutation_test.html#sphx-glr-auto-examples-stats-sensor-permutation-test-py
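For reference, here is a minimal sketch of the call itself (the arrays and the injected effect are made up purely for illustration):

import numpy as np
from mne.stats import permutation_cluster_test

# hypothetical data: two groups, shape (n_subjects, n_tests)
rng = np.random.default_rng(42)
cond_a = rng.normal(size=(15, 200))
cond_b = rng.normal(size=(15, 200))
cond_b[:, 80:120] += 1.0  # inject an effect to detect

# the default stat_fun is a one-way F-test, hence tail=1
F_obs, clusters, cluster_pv, H0 = permutation_cluster_test(
    [cond_a, cond_b], n_permutations=1000, tail=1)
print('significant clusters:',
      [i for i, p in enumerate(cluster_pv) if p < 0.05])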

Thanks @mmagnuski. I have the impression that the clustering works on time points rather than on active vertices, and I would prefer not to correct, as there are far too many vertices and the applied correction is therefore too strong. I tried it anyway and, as expected, found nothing. It would still be nice to have a way to count how many vertices are active and correct for multiple comparisons that way (say, using a threshold of >20 vertices).

The cluster-based permutation test works in any data space where adjacency makes sense. In principle, you can perform the test over time and vertices, or over time, frequency and vertices.
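For example, a sketch for source space (here src stands for your source space, and X_a / X_b for hypothetical (n_subjects, n_times, n_vertices) group arrays; note that on MNE 0.21 the function is called mne.spatial_src_connectivity and the keyword argument connectivity):

import mne
from mne.stats import spatio_temporal_cluster_test

# adjacency between source-space vertices (MNE >= 0.22 naming;
# use mne.spatial_src_connectivity on 0.21)
adjacency = mne.spatial_src_adjacency(src)

# X_a, X_b: hypothetical (n_subjects, n_times, n_vertices) arrays
F_obs, clusters, cluster_pv, H0 = spatio_temporal_cluster_test(
    [X_a, X_b], adjacency=adjacency, n_permutations=1000, tail=1)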

The cluster-based permutation test is not as conservative as, say, Bonferroni correction, and the correction does not necessarily increase linearly with the extent of the search space.
However, if you don’t want to correct for multiple comparisons but only to select vertices for further analysis, you can still use the cluster-based permutation test. You can, for instance, lower the number of permutations to something very low, like 1, ignore the p-values, and take only the returned clusters into consideration. The cluster-based test gives you a list of clusters as one of the return arguments (if you use out_type='mask' these will be boolean masks for more than two dimensions, or slices for a 1D search space); you can use this information to select clusters of vertices above a certain size. Bear in mind that this lets you select vertices, but it is not a correction for multiple comparisons per se.
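Roughly like this (a sketch only; X_a / X_b, the adjacency from above, and the threshold of 20 vertices are placeholders):

import numpy as np
from mne.stats import permutation_cluster_test

# very few permutations: we only want the clusters, not valid p values
# (X_a, X_b are hypothetical (n_subjects, n_vertices) arrays;
#  the keyword is connectivity rather than adjacency on MNE 0.21)
_, clusters, _, _ = permutation_cluster_test(
    [X_a, X_b], n_permutations=1, tail=1,
    adjacency=adjacency, out_type='mask')

# with a vertex adjacency, each cluster is a boolean mask over vertices;
# keep only clusters spanning more than 20 vertices
big_clusters = [c for c in clusters if c.sum() > 20]
if big_clusters:
    selected_vertices = np.where(np.any(big_clusters, axis=0))[0]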
