Hi,
I am trying to compute the PSD using mne.time_frequency.psd_welch(); my code is as follows:
#load
epochs_from_file = mne.read_epochs('sub-1-post-epo.fif',preload = False)
epochs.plot_psd(fmin=0.,fmax=35,average=True,bandwidth='hann')
epochs.plot_psd_topomap(bands=[(0,4,'Delta'),(4,8,'Theta'),(8,13,'Alpha'),(13,30,'Beta')],ch_type='eeg',normalize=False)
#PSD
picks = mne.pick_types(epochs_from_file.info,meg=False,eeg=True)
tmin = 0.
tmax = 240.  # use the first 240 s of data
fmin = 8.
fmax = 13.  # look at frequencies between 8 and 13 Hz
sfreq = epochs_from_file.info['sfreq']
sfreq
alpha_mean_psd_per_epoch = mne.time_frequency.psd_welch(
    epochs_from_file, picks=picks, n_fft=1024, n_overlap=0, n_per_seg=None,
    tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax,
    average='mean', window='hamming', verbose=False)
print(alpha_mean_psd_per_epoch)
I've run into a problem: I computed the alpha power for 150 epochs across 62 channels, but the result doesn't show the channel names. How do I match each alpha power value to its specific channel?
Here is a more-or-less stream-of-consciousness response to reading your code. Hopefully this both gives you the information you need, and illustrates why we ask for minimal, reproducible code snippets when asking questions:
I’m surprised that this code even runs, when passing bandwidth='hann'. That shouldn’t work (and if it does it’s a bug).
Also where does the variable epochs come from? (the first line loads something called epochs_from_file, not epochs)… ah OK, I see that the two plotting lines are irrelevant to the question. So maybe the bandwidth='hann' does indeed fail, and those lines are leftover/cruft.
tmax=240 seems odd for epochs; are they really that long? This may not be doing what you want/expect.
OK, now I think I’m finally past the irrelevant distractions and seeing the real issue: psd_welch returns a tuple of (psds, freqs) as NumPy arrays; this is why in our documentation you typically see psds, freqs = psd_welch(...) (so the two return values are unpacked into separate variables). What you want is something like total power in the alpha band for a specific EEG channel (right?). Well, for Epochs input the psds array has shape (n_epochs, n_channels, n_freqs), and the channels axis should be in the same order as your picks variable. Does that help?
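In other words, something like this (a minimal sketch reusing the names from your snippet; note that in recent MNE versions psd_welch() has been replaced by the Epochs.compute_psd() method, but I'll stick with the API used here):

psds, freqs = mne.time_frequency.psd_welch(
    epochs_from_file, picks=picks, fmin=fmin, fmax=fmax, n_fft=1024,
    window='hamming', verbose=False)
print(psds.shape)  # (n_epochs, n_channels, n_freqs) for Epochs input
print(freqs)       # the frequency bin centers, in Hz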
I am grateful to you for pointing out these problems. The data set consists of five-minute EEG recordings obtained from 10 volunteers. I first preprocessed the data and extracted 2-second epochs before calculating the PSD, so epochs_from_file holds the epoched data (about 150 epochs). The EEG was recorded from 64 electrodes placed according to the international 10-20 system. I want to calculate the average alpha-band power of the first 140 epochs for each electrode, i.e. a 64 × 1 result. I'm worried that I didn't express the problem clearly; do you have any suggestions for solving it?
Run psds, freqs = mne.time_frequency.psd_welch(...) (with the arguments as before) so that you have variables psds, freqs, epochs, and picks. Here is an excerpt from the Returns section of psd_welch():
if input is of type Epochs, then psds will be of shape (n_epochs, n_channels, n_freqs).
So your task is to:

1. Figure out which frequency bins in freqs are part of the alpha band. There are a few ways to do this: np.searchsorted(freqs, alpha_low) together with np.searchsorted(freqs, alpha_high, side='right') is one option; something like np.nonzero(np.logical_and(freqs >= alpha_low, freqs <= alpha_high)) is another. Let's call the resulting indices freq_mask.

2. Average across just the freqs you care about for just the epochs you care about. This should be something like psds[:140, :, freq_mask].mean(axis=(0, -1)). Let's call the result alpha_per_chan.

3. Associate the resulting values with channel names. That's easy; the order of alpha_per_chan should match the order of picks. If picks is integers (channel indices, not channel names), you can get the names with np.array(epochs.ch_names)[picks] (ch_names is a plain list, so convert it to an array first, or use a list comprehension). A sketch combining these steps is below.
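A minimal sketch putting the three steps together (assuming the psds, freqs, epochs, and picks variables from above, and taking the alpha band as 8-13 Hz):

import numpy as np

alpha_low, alpha_high = 8., 13.
# step 1: boolean mask selecting the frequency bins inside the alpha band
freq_mask = np.logical_and(freqs >= alpha_low, freqs <= alpha_high)
# step 2: average over the first 140 epochs and the alpha-band bins -> shape (n_channels,)
alpha_per_chan = psds[:140, :, freq_mask].mean(axis=(0, -1))
# step 3: match each value to its channel name (picks holds integer channel indices)
ch_names = np.array(epochs.ch_names)[picks]
for name, power in zip(ch_names, alpha_per_chan):
    print(name, power)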
Thank you for your kind help! Your suggestions solved the problem. I will be happy to calculate the PSD further based on these helpful suggestions. Thank you very much!