Differences in Center of Mass Estimation from Labels, T_Obs, or Source Estimate Activity

Dear All,

I am working with MEG data in source space and have performed spatio-temporal clustering.
I would now like to describe the clusters (estimating the temporal and spatial center, as well as an atlas annotation/name for each cluster).
My approach is to:

  1. Compute the center of mass, ideally together with its time point
  2. Convert the vertex to MNI coordinates (a minimal sketch follows below)
  3. Match the coordinates with an atlas name
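
For step 2, this is the pattern I use; a minimal sketch, assuming a left-hemisphere vertex number on fsaverage (hemispheres are coded 0 = left, 1 = right, and `data_path_freesurfer` is my FreeSurfer subjects directory from the code below):

```python
import mne

vertex = 12345  # hypothetical vertex number, e.g. the COM vertex from step 1
# vertex_to_mni maps a surface vertex to MNI coordinates (in mm)
mni = mne.vertex_to_mni(vertex, hemis=0, subject='fsaverage',
                        subjects_dir=data_path_freesurfer)
print(mni)
```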

Regarding step 1, the center of mass, I found three approaches to calculate it, leaving me with three slightly different results (full implementation below):

  • Summarize clusters → stc_to_label → COM
  • Create a zero-filled exemplary stc and insert the significant T_obs values → COM (as I understand it, this is similar to the approach referenced in Larson & Lee, 2013)
  • Create a zero-filled exemplary stc and insert the significant source-space activity averaged over subjects → take the absolute value, since the center_of_mass function does not allow negative values → COM

My Questions :wink::

  • Which method to calculate the COM would you recommend, and can you explain how they differ (calculating from a Label vs. from a source estimate)?
  • Can you recommend a way to get atlas regions for MNI coordinates? I could not find an implemented method in mne… (a rough workaround I considered is sketched below)
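
Regarding the atlas question, the workaround I have been considering stays in vertex space instead of MNI coordinates, so it only partially answers my own question; a sketch, where the `aparc` parcellation is just an example and `hemi`/`vertex_center_mass` are the variables from my code below:

```python
import mne

# read an annotation for fsaverage and check which label contains the COM vertex
labels = mne.read_labels_from_annot('fsaverage', parc='aparc',
                                    subjects_dir=data_path_freesurfer)
hemi_name = 'lh' if hemi == 0 else 'rh'
matches = [lab.name for lab in labels
           if lab.hemi == hemi_name and vertex_center_mass in lab.vertices]
print(matches)  # atlas region(s) containing the COM vertex
```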

Many thanks for your help!
Best,
Katharina

COM Results:

**Cluster 1**
COM CLU SUMMARY + STC_to_LABEL: left HEMI [ -11.93821907 -102.35795593    0.66815102]
COM T VALUES: left HEMI TIME: 0.18228643887406426 [ -14.9384737  -101.88134766    0.32687575]
COM SS Activity: left HEMI TIME: 0.188829977792586 [ -16.37820435 -101.36045837   -0.44912481]

**Cluster 2**
COM CLU SUMMARY + STC_to_LABEL: left HEMI [-56.25400925 -34.98986435   9.08508492]
COM T VALUES: left HEMI TIME: 0.271211701046541 [-51.71231842 -32.95758438   6.8202033 ]
COM SS Activity: left HEMI TIME: 0.2905256486202906 [-61.19871902 -30.453619     7.11226797]

Implementation:

import os

import mne
import numpy as np

src = mne.read_source_spaces(
    os.path.join(data_path_freesurfer, "fsaverage", "bem",
                 "fsaverage-ico-" + ico + "-src.fif"))  # ico = '5'
tmin = exemplary_morphed_surface_source_estimate.tmin
tstep = exemplary_morphed_surface_source_estimate.tstep * 1000  # convert to ms for summarize_clusters_stc
toi = [220, 680] # Time Interval of Interest Determined by F Test in Sensor Space
p_threshold = 0.01
a_vs_b = 'LW-HW' # Low - High Workload
time_window_to_use_samples = helper_proc.get_samples_from_timepoints(toi, 20, sample_rate) # Get Samples for TOI
# =============================================================================
# Determine Center of Mass
# =============================================================================
# Morphed Surface Source Estimate of Contrast of Interest averaged over subjects --> (vertices x time)
data = np.mean(np.array(subj_list_contrast), axis=0) #Average over Subject Dimension
T_obs, clusters, cluster_p_values, H0 = clu #result of spatiotemporal clustering
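# Shapes, assuming mne.stats.spatio_temporal_cluster_1samp_test with
# out_type='indices': T_obs is (n_times, n_vertices) and each entry of
# `clusters` is a (time_indices, vertex_indices) tuple of arrays.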

good_clusters_idx = np.where(cluster_p_values < p_threshold)[0]
good_clusters = [clusters[idx] for idx in good_clusters_idx]

if len(good_clusters) > 0:
    print('Number of SPATIOTEMPORAL CLUSTERS:', len(good_clusters), a_vs_b)
    print('=======================================================================')

    # =============================================================================
    # METHOD 1 - Summarize Cluster -> Label -> COM
    # =============================================================================
    stc_all_cluster_vis = mne.stats.summarize_clusters_stc(clu, tstep=tstep, p_thresh=p_threshold,
                                                           vertices=[s['vertno'] for s in src],
                                                           subject='fsaverage')
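    # In the summary stc, the first "time point" is the union of all
    # significant clusters and each subsequent one holds a single cluster
    # (with cluster durations stored as values), if I read the docs correctly.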

    lh_labels, rh_labels = mne.stc_to_label(stc_all_cluster_vis, src=src, smooth=True,
                                            subjects_dir=data_path_freesurfer, connected=True)
    for numb_hemi, (name_hemi, hemi) in enumerate(zip(['left HEMI', 'right HEMI'],
                                                      [lh_labels, rh_labels])):
        for clu_label in hemi:

            vertex_center_mass = clu_label.center_of_mass('fsaverage', restrict_vertices=False,
                                                          subjects_dir=data_path_freesurfer)
            print('COM CLU SUMMARY + STC_to_LABEL:', name_hemi,
                  mne.vertex_to_mni(vertex_center_mass, numb_hemi, 'fsaverage', subjects_dir=data_path_freesurfer))

    for i_clu, spatiotemporal_clu in enumerate(good_clusters):

        #Take an exemplary morphed surface source estimate and mask data -> only insert significant T Values from Clustering
        exemplary_morphed_surface_source_estimate.data = np.zeros(exemplary_morphed_surface_source_estimate.data.shape) # Empty Data

        exemplary_morphed_surface_source_estimate.data[spatiotemporal_clu[1], spatiotemporal_clu[0]] = T_obs[
            spatiotemporal_clu[0], spatiotemporal_clu[1]]  # insert significant T values; mind the switched dimensions: stc data is (space x time), T_obs is (time x space)

        # =============================================================================
        # METHOD 2 - Take T Values for significant Cluster -> Insert it in an exemplary source estimate filled with zeros -> COM
        # =============================================================================

        vertex_center_mass, hemi, t = exemplary_morphed_surface_source_estimate.center_of_mass('fsaverage', restrict_vertices=False,
                                                                     subjects_dir=data_path_freesurfer)
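        # SourceEstimate.center_of_mass returns (vertex, hemi, t): the COM
        # vertex, the hemisphere it was taken from (0 = lh, 1 = rh), and the
        # temporal center of mass of the activity.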

        # add TOI MIN to get real time since data was cropped before clustering
        t = t + (toi[0] / 1000)
        name_hemi = 'left HEMI' if hemi == 0 else 'right HEMI'
        print('COM T VALUES:', name_hemi, 'TIME:', t,
              mne.vertex_to_mni(vertex_center_mass, hemi, 'fsaverage', subjects_dir=data_path_freesurfer))

        # =============================================================================
        # METHOD 3 - Take SS Activity from Grand Average
        # -> Insert it in an exemplary source estimate filled with zeros and Mask Significant Vertices and Time Points based on Clustering -> COM
        # =============================================================================

        exemplary_morphed_surface_source_estimate.data = np.zeros(exemplary_morphed_surface_source_estimate.data.shape)  # empty data again
        exemplary_morphed_surface_source_estimate.data[spatiotemporal_clu[1], spatiotemporal_clu[0]] = np.abs(
            data[:, time_window_to_use_samples[0]:time_window_to_use_samples[1]][
                spatiotemporal_clu[1], spatiotemporal_clu[0]])  # center_of_mass does not allow negative values (can also happen for T_obs), therefore abs()
        vertex_center_mass, hemi, t = exemplary_morphed_surface_source_estimate.center_of_mass(
            'fsaverage', restrict_vertices=False, subjects_dir=data_path_freesurfer)
        t = t + (toi[0] / 1000)
        name_hemi = 'left HEMI' if hemi == 0 else 'right HEMI'
        print('COM SS Activity:', name_hemi, 'TIME:', t,
              mne.vertex_to_mni(vertex_center_mass, hemi, 'fsaverage', subjects_dir=data_path_freesurfer))


Hi,
In my case, I prefer to use mne.stats.summarize_clusters_stc to get the ROI label, but my case is just a permutation t-test on a SourceEstimate object. I guess that is OK, since we pass a threshold parameter when calling the function.
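
A minimal sketch of that pattern, reusing `clu`, `tstep`, and `src` from the post above (the p_thresh value is just an example):

```python
import mne

# summarize the significant clusters into one SourceEstimate ...
stc_sum = mne.stats.summarize_clusters_stc(clu, p_thresh=0.05, tstep=tstep,
                                           vertices=[s['vertno'] for s in src],
                                           subject='fsaverage')
# ... then extract one Label per connected blob and hemisphere
labels_lh, labels_rh = mne.stc_to_label(stc_sum, src=src, smooth=True,
                                        connected=True,
                                        subjects_dir=data_path_freesurfer)
```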