Computing PSD with multitapers while rejecting bad segments on long Epochs

If I were to compute a PSD using multitapers on a Raw object in which abnormal peak-to-peak amplitudes are annotated with BAD_, I could use psd_multitaper (or the new Spectrum class) in combination with reject_by_annotation=True.
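i.e., something along these lines (the file name and frequency band are placeholders):

```python
import mne

raw = mne.io.read_raw_fif("recording_raw.fif", preload=True)  # placeholder file
# Spans annotated BAD_* are skipped thanks to reject_by_annotation=True
spectrum = raw.compute_psd(method="multitaper", fmin=1.0, fmax=40.0,
                           reject_by_annotation=True)
psds, freqs = spectrum.get_data(return_freqs=True)
```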

However, I have long Epochs (because it's way more convenient for this particular dataset). Each Raw recording contains:

  • 10 epochs of 16 seconds (condition 1)
  • 10 epochs of 8 seconds (condition 2)

And I would like to compute the PSD for each of those epochs individually. Let's consider only one of those two conditions: the 10 epochs of 16 seconds.


Each of those epochs is quite long, and despite the preprocessing, they might include some abnormal artifacts in the form of large PTP amplitudes. Thus, before computing the PSD, I thought I could cut them down into smaller sub-epochs, e.g. 4 seconds, and reject the bad sub-epochs above a PTP threshold (without going into details, I'm using autoreject to compute a global rejection threshold across a condition).
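Roughly what I have in mind (a sketch; `epochs` is assumed to hold the 16-second epochs of one condition):

```python
import numpy as np
from autoreject import get_rejection_threshold

# Global PTP rejection threshold across the condition (EEG channels)
reject_thresh = get_rejection_threshold(epochs)["eeg"]

sfreq = epochs.info["sfreq"]
win = int(4.0 * sfreq)       # 4 s sub-epoch length in samples
step = int(0.1 * sfreq)      # 0.1 s step -> 3.9 s overlap
data = epochs.get_data()[0]  # first 16 s epoch, shape (n_channels, n_times)

starts = np.arange(0, data.shape[1] - win + 1, step)
subs = np.stack([data[:, s:s + win] for s in starts])  # (n_subs, n_ch, win)
worst_ptp = np.ptp(subs, axis=-1).max(axis=-1)         # worst-channel PTP per sub-epoch
good_subs = subs[worst_ptp < reject_thresh]            # keep sub-epochs under threshold
```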

Now, 4 seconds is long, so if the sub-epochs do not overlap, I might end up rejecting 1/4 (or more) of the 16 seconds available before computing the PSD of this specific 16-second epoch.

Overlap it is. I don't see a reason to keep the overlap small; the only downsides are RAM and computation time. → Let's consider that an epoch of 16 seconds is now cut down into sub-epochs of 4 seconds overlapping by 3.9 seconds. Now, a high-PTP event will drop less signal.

Question 1: Does this overlapping impact the computation of the PSD?


Assuming the answer to the previous question is negative, as I suspect, then what about the edges? Overlapping sub-epochs means that the edges of the 16-second epoch are now far less represented in the object passed to psd_multitaper.

Question 2: Any thoughts on padding? Either zero-padding or reflective padding?
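For concreteness, the two options I have in mind on a single channel (`x` is a 1-D array and `sfreq` the sampling rate; both assumed defined):

```python
import numpy as np

pad = int(2.0 * sfreq)                         # illustrative: pad 2 s per side
zero_padded = np.pad(x, pad, mode="constant")  # zero-padding
refl_padded = np.pad(x, pad, mode="reflect")   # reflective padding
```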


Am I overthinking this? :sweat_smile: Maybe running psd_multitaper directly on the 16-second epoch is fine… I don't expect to drop many of the sub-epochs.

What happens afterward? Do you average the individual PSDs? If so, I think you're effectively de-weighting the parts of the signal where less overlap happened (i.e., the first and last 0.1 s in your example will have the lowest weight, and each 0.1 s inward will be weighted more, until you reach 4 s in from the edge; between 4 and 12 s the signal will be equally, maximally weighted). All that assumes no rejections.
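A quick way to see that weighting is to count how many sub-epochs cover each sample (using the numbers from your example; the sampling rate is illustrative):

```python
import numpy as np

sfreq = 1000.0  # illustrative sampling rate
n_times, win, step = int(16 * sfreq), int(4 * sfreq), int(0.1 * sfreq)
coverage = np.zeros(n_times)
for start in range(0, n_times - win + 1, step):
    coverage[start:start + win] += 1
# coverage ramps from 1 at the very edges up to a plateau of 40 (= win / step)
# in the middle of the epoch
```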

I guess my questions for you would be these (you've probably already thought about this, but just in case):

  1. If you look at a spectrum for an epoch that has one of these large P2P artifacts, does it interfere with your analysis goals? (i.e., if the artifact mostly affects high freqs and you care about low freqs, maybe there's no need to exclude anything?)
  2. If you have the relevant BAD annotations already… can you work with a list of raws (instead of an Epochs object) until after doing the spectral analysis? Then the exclusions will hopefully "just work" automatically (see the sketch after this list).
  3. What could we change about MNE-Python to have made this easy? Is it annotations on Epochs + a reject_by_annot param in the EpochsSpectrum constructor?
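For item 2, roughly what I mean (a sketch; assuming `events` holds the 16 s condition onsets in samples and `raw` carries the BAD annotations):

```python
# crop() keeps the annotations that overlap each span, so the BAD_ segments
# are still there for reject_by_annotation to act on.
onsets = (events[:, 0] - raw.first_samp) / raw.info["sfreq"]
raws = [raw.copy().crop(tmin=onset, tmax=onset + 16.0) for onset in onsets]
spectra = [r.compute_psd(method="multitaper", reject_by_annotation=True)
           for r in raws]
```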

Afterward, I average the sub-epochs' PSDs to get the PSD of the 16-second epoch. As you say, that induces a weighting of the signal, which is why I was considering zero-padding or reflective padding.
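Concretely, continuing the sub-epoch sketch from above (`good_subs` and `sfreq` carried over; the frequency band is illustrative):

```python
from mne.time_frequency import psd_array_multitaper

# PSD of each surviving 4 s sub-epoch, then average across sub-epochs
psds, freqs = psd_array_multitaper(good_subs, sfreq=sfreq, fmin=1.0, fmax=40.0)
epoch_psd = psds.mean(axis=0)  # PSD estimate for the 16 s epoch
```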

> If you look at a spectrum for an epoch that has one of these large P2P artifacts, does it interfere with your analysis goals?

True enough, it will mostly impact high frequencies. But I'm also looking into relative power (how much power is expressed in one band compared to the total, or compared to another band), which is impacted by changes in the high frequencies.

> If you have the relevant BAD annotations already… can you work with a list of raws (instead of an Epochs object) until after doing the spectral analysis?

I don't; I wrote that to illustrate the 'rejection'. In my case it's simpler to work with Epochs, or I would end up cutting each recording into 20 Raw objects, which would be a pain to manage compared to a single Epochs object.

I could also go this way and use annotate_amplitude to create the BAD_ segments programmatically; that would get rid of the weighting issues created by the overlapping windows.
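Something like this, I imagine (the peak threshold is purely illustrative, not a tuned value):

```python
from mne.preprocessing import annotate_amplitude

# Mark spans whose EEG peak-to-peak amplitude exceeds the (illustrative) limit
annotations, bad_channels = annotate_amplitude(raw, peak=dict(eeg=100e-6))
raw.set_annotations(raw.annotations + annotations)
spectrum = raw.compute_psd(method="multitaper", reject_by_annotation=True)
```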

> What could we change about MNE-Python to have made this easy?

Coding-wise, it's fairly simple. I especially love the new Spectrum classes and the index/dict selection on them. It cut down my previous code by half. Conceptually, I'm not sure if padding is a good idea for EEG data. If it is, then it could be a good addition.
