Hi everybody,
I wanted to try out an approach described by Hagberg et al. (2019), where they calculated percentile thresholds for dSPM values using false discovery rate (FDR) corrections.
Question: Does it make sense to FDR-correct percentile thresholds for dSPM values (rather than p-values obtained from a t-test)? And if yes, how would one do this in MNE-Python?
Maybe the description of the analysis in their paper explains it a bit better:
For visualization, the grand averaged dSPM values were thresholded at the 96th percentile. […] A label [brain region] was considered active if it contained dSPM values belonging to the 96th percentile of the data. […]
To correct for multiple comparisons across sources and time points, we calculated thresholds for the dSPM maps using False Discovery Rate according to Benjamini/Yekutieli (Genovese et al., 2002), using the MNE-Python software.
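If I understand it correctly (this is my reading, the paper does not spell it out), dSPM values are noise-normalized and behave roughly like z-scores under the null hypothesis, so one could convert the dSPM map itself to p-values and FDR-correct those to obtain a threshold. A minimal sketch of that idea, assuming a grand-average SourceEstimate called grand_avg_stc (strictly speaking, averaging z-like values across subjects shrinks their variance, so this is only approximate):

    import numpy as np
    from scipy.stats import norm
    from mne.stats import fdr_correction

    # Assumption: treat |dSPM| values as z-scores under the null and
    # convert them to two-sided p-values across all sources and times.
    dspm = grand_avg_stc.data             # shape (n_vertices, n_times)
    pvals = 2 * norm.sf(np.abs(dspm))     # two-sided p-value per value
    reject, _ = fdr_correction(pvals, alpha=0.05, method="negcorr")

    # The FDR-derived display threshold is then the smallest |dSPM|
    # value that is still significant after correction.
    fdr_threshold = np.abs(dspm)[reject].min() if reject.any() else np.inf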
I am only familiar with FDR-correcting the p-values of a t-test. Here I used the data of all 30 subjects as input (not the grand average), because otherwise the t-test function would have returned t-values along either the time or the vertex dimension only, and not a time-points × vertices array.
My approach with a t-test was:
import scipy.stats
from mne.stats import fdr_correction

# X.shape == (30, 720, 20484): (n_subjects, n_times, n_vertices)
t, pval = scipy.stats.ttest_1samp(X, popmean=0, axis=0, alternative="two-sided")
reject_fdr, pval_fdr = fdr_correction(pval, alpha=0.05, method="negcorr")
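From reject_fdr one could also derive a single display threshold in the spirit of the paper (my interpretation, not taken from it), namely the smallest absolute t-value that survives the correction:

    import numpy as np

    # Everything above this |t| is significant after FDR correction.
    t_threshold = np.abs(t)[reject_fdr].min() if reject_fdr.any() else np.inf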
My approach to plotting the dSPM values and highlighting those above the 96th percentile:
brain = grand_avg_stc.plot(
    "fsaverage",
    hemi="split",
    views="lateral",
    subjects_dir=subjects_dir,
    time_label="96th percentile approach for grand-average tact-vis",
    background=(1, 1, 1),
    clim=dict(kind="percent", pos_lims=(96, 97.5, 99.95)),
)
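(As far as I understand, with kind="percent" the pos_lims are interpreted as percentiles of the data in the SourceEstimate, so values below the 96th percentile stay essentially uncolored.)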
…is it then enough to mask the dSPM plot with reject_fdr, even though it was obtained by correcting p-values rather than “percentiles”?
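In case it helps, this is roughly how I would apply the mask (my own sketch, reusing the variables from above):

    # reject_fdr has shape (n_times, n_vertices) while stc.data has
    # shape (n_vertices, n_times), hence the transpose; values that do
    # not survive the correction are zeroed out before plotting.
    stc_masked = grand_avg_stc.copy()
    stc_masked.data[~reject_fdr.T] = 0.0
    brain = stc_masked.plot("fsaverage", hemi="split", views="lateral",
                            subjects_dir=subjects_dir)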