I'm computing the power spectrum of MEG cortical labels (for resting-state data) in two different ways. Can you help me understand the differences? The power spectra are quite different (see attached).
for label_ind, label in enumerate(labels):
    # PSD computed directly in source space, restricted to this label
    # (the "..." stands for the inverse operator and the PSD parameters)
    stcs = mne.minimum_norm.compute_source_psd_epochs(epochs, ...)
    for stc in stcs:
        psds = np.mean(stc.data, axis=0)  # average the PSD over the label's sources
I'm analyzing data from a patient with an ECoG implant. I want to compare the power spectrum of the electrodes with that of the MEG cortical labels I've created around each electrode.
It seems that the way to go is to compute the source-space time series from long enough MEG epochs (~10 s), split the electrode recording into epochs of the same length, and run
mne.time_frequency.psd_array_multitaper on both of them; that way I also know both results are in the same units (10*log10(x) for dB). See the sketch below.
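A minimal sketch of what I mean, assuming epochs, inverse_operator, and labels already exist (the SNR value, method, and frequency band are placeholders I picked):

import numpy as np
import mne

snr = 1.0  # assumed SNR for resting-state data
lambda2 = 1.0 / snr ** 2

# per-epoch source estimates, then one time course per label (mean_flip)
stcs = mne.minimum_norm.apply_inverse_epochs(
    epochs, inverse_operator, lambda2, method='dSPM', return_generator=True)
label_ts = mne.extract_label_time_course(
    stcs, labels, inverse_operator['src'], mode='mean_flip')

# label_ts is a list of (n_labels, n_times) arrays, one per epoch
label_ts = np.array(label_ts)
psds, freqs = mne.time_frequency.psd_array_multitaper(
    label_ts, sfreq=epochs.info['sfreq'], fmin=1., fmax=55.)
psds_db = 10 * np.log10(psds)  # same conversion I use for the ECoG channels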
But I'm still a bit confused by the different results I get when I use mne.minimum_norm.compute_source_psd_epochs instead; the two approaches should agree more closely.
Can you share a script based on one of the MNE datasets so we can figure out the cause of the difference?
Also note that the units in your two plots are very different (dB vs. something I can't identify).
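For reference, something along these lines on the sample dataset is what I have in mind. The label (Aud-lh), epoch length, SNR, and frequency band are arbitrary choices of mine, and it assumes a recent MNE where sample.data_path() returns a pathlib.Path:

import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import (read_inverse_operator, apply_inverse_epochs,
                              compute_source_psd_epochs)
from mne.time_frequency import psd_array_multitaper

data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw = mne.io.read_raw_fif(meg_path / 'sample_audvis_raw.fif')
inv = read_inverse_operator(meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif')
label = mne.read_label(meg_path / 'labels' / 'Aud-lh.label')

# fixed-length "resting-state-like" epochs of ~10 s
events = mne.make_fixed_length_events(raw, duration=10.)
epochs = mne.Epochs(raw, events, tmin=0., tmax=10., baseline=None,
                    preload=True)

snr = 1.0
lambda2 = 1.0 / snr ** 2
fmin, fmax = 1., 55.

# approach 1: multitaper PSD computed directly in source space
stcs = compute_source_psd_epochs(epochs, inv, lambda2=lambda2, method='dSPM',
                                 fmin=fmin, fmax=fmax, label=label)
psd1 = np.mean([stc.data.mean(axis=0) for stc in stcs], axis=0)
freqs1 = stcs[0].times  # for these stcs, .times holds the frequencies

# approach 2: label time course first, then psd_array_multitaper
stcs2 = apply_inverse_epochs(epochs, inv, lambda2, method='dSPM',
                             return_generator=True)
label_ts = mne.extract_label_time_course(stcs2, [label], inv['src'],
                                          mode='mean_flip')
label_ts = np.array(label_ts)[:, 0, :]  # (n_epochs, n_times)
psd2, freqs2 = psd_array_multitaper(label_ts, sfreq=epochs.info['sfreq'],
                                     fmin=fmin, fmax=fmax)
psd2 = psd2.mean(axis=0)

# same dB conversion for both, so any remaining difference comes from the methods
print(10 * np.log10(psd1))
print(10 * np.log10(psd2))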
One thing that pops out immediately is that only in the second approach (psd_array_multitaper on the label_ts) do you need to set the mode (I set it to mean_flip).
Also, for both of them I use 10 * np.log10(x) to get dB. I'm not sure this is correct in the first approach, mostly because it's not part of the MNE example.
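For what it's worth, my understanding is that both functions return power spectral density (not dB), so applying the same conversion to both should be consistent. The array names below are just placeholders for the outputs of the two approaches:

# hypothetical arrays holding the PSDs from the two approaches above
psd_source_db = 10 * np.log10(psd_source)      # approach 1: compute_source_psd_epochs
psd_label_ts_db = 10 * np.log10(psd_label_ts)  # approach 2: psd_array_multitaper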