Seeking Advice on SEEG Time-Frequency Analysis Workflow

Hi everyone,

I’m currently working on analyzing SEEG data, specifically comparing power time series in different conditions (e.g., positive vs. negative emotion) within a specific frequency band, such as high gamma. I’d like to get some advice on my analysis pipeline and the best practices for baseline correction and averaging methods. Here’s the current workflow I’m following:

1. Epoch Extraction and Baseline Correction

  • I first extract epochs and apply baseline correction by subtracting the mean of the baseline period (pre-stimulus) from the entire epoch.

2. Time-Frequency Computation and Baseline Correction

  • For each electrode and epoch, I compute the time-frequency representation, obtaining a 2D array of shape (n_freqs x n_timepoints).
  • I then perform baseline correction on the time-frequency data, but I’m unsure which method is most appropriate (currently using zlogratio). MNE offers several options such as mean, ratio, logratio, zscore, and zlogratio.
  • After baseline correction, I average power within the high gamma frequency range and then average across epochs.
  • This gives me an (n_electrodes x n_timepoints) array for each condition.
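
For reference, the five modes you list can be written out in a few lines of NumPy (the formulas paraphrase how MNE's `mne.baseline.rescale` documents them; `baseline_correct` and the toy array are hypothetical illustrations, and the baseline window is assumed to be [-0.5, 0] s):

```python
import numpy as np

def baseline_correct(tfr, times, baseline=(-0.5, 0.0), mode="zlogratio"):
    """Baseline-correct a (n_freqs, n_times) power array.

    The formulas paraphrase the modes of mne.baseline.rescale;
    this is an illustration, not a substitute for MNE itself.
    """
    mask = (times >= baseline[0]) & (times <= baseline[1])
    mean = tfr[:, mask].mean(axis=1, keepdims=True)
    if mode == "mean":
        return tfr - mean
    if mode == "ratio":
        return tfr / mean
    if mode == "logratio":
        # note: mean(log) <= log(mean), so even a well-behaved baseline
        # ends up with a mean slightly below 0 after this mode
        return np.log10(tfr / mean)
    if mode == "zscore":
        std = tfr[:, mask].std(axis=1, keepdims=True)
        return (tfr - mean) / std
    if mode == "zlogratio":
        out = np.log10(tfr / mean)
        std = out[:, mask].std(axis=1, keepdims=True)
        return out / std
    raise ValueError(f"unknown mode: {mode}")

# toy single-epoch TFR: 5 freqs x 100 time points, baseline = [-0.5, 0] s
rng = np.random.default_rng(0)
times = np.linspace(-0.5, 0.75, 100)
tfr = rng.uniform(1.0, 2.0, size=(5, 100))
corrected = baseline_correct(tfr, times, mode="zlogratio")
```

One thing this makes visible: `zscore` centers and scales the raw power, while `zlogratio` scales log-ratio units by the baseline's own log-domain variability, so the two can look quite different on the same data.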

However, I have come across an alternative approach:

  • Averaging epochs first within each condition, then computing the time-frequency representation, followed by baseline correction and averaging within the high gamma band.
  • These two approaches yield noticeably different results, and I’m uncertain which is more appropriate, and what kind of baseline correction method I should use under each approach.
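
To see why the two orderings differ so much, here is a quick synthetic check (pure NumPy, with squared amplitude as a crude stand-in for a wavelet TFR — all names and numbers are illustrative): a high-gamma burst whose phase varies across trials survives per-trial power estimation, but largely cancels when the epochs are averaged first.

```python
import numpy as np

# 100 trials of a 60 Hz oscillation with random phase per trial
# (i.e. induced, non-phase-locked activity)
rng = np.random.default_rng(42)
sfreq, n_trials = 500, 100
t = np.arange(0, 1.0, 1.0 / sfreq)
phases = rng.uniform(0, 2 * np.pi, n_trials)
trials = np.sin(2 * np.pi * 60 * t + phases[:, None])  # (n_trials, n_times)

# power via squared amplitude (stand-in for a proper TFR estimate)
power_then_average = (trials ** 2).mean()               # ~0.5: induced power kept
average_then_power = (trials.mean(axis=0) ** 2).mean()  # near 0: washed out
```

This is the usual argument for computing the TFR on single trials when you care about induced activity; averaging the epochs first keeps only the evoked, phase-locked component.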

3. Electrode-Level Averaging and Further Baseline Considerations

  • Finally, I average the data across electrodes within the same condition (across subjects).
  • My concern here is whether I should perform another round of baseline correction before averaging electrodes. If I skip this step, I often observe that the baseline period in the final results deviates significantly from zero.
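
One arithmetic point relevant to that concern (plain NumPy, synthetic data): if every electrode's trace has a baseline mean of exactly 0 — as with 'mean'-style correction — then the grand average keeps a 0 baseline mean by linearity, with no second correction needed. The nonlinear modes (logratio, zlogratio) do not force the per-electrode baseline mean to 0, and any per-electrode offset carries into the average.

```python
import numpy as np

rng = np.random.default_rng(1)
n_electrodes, n_times, n_base = 20, 200, 50
traces = rng.normal(size=(n_electrodes, n_times))

# per-electrode 'mean'-style correction over the first n_base samples
traces -= traces[:, :n_base].mean(axis=1, keepdims=True)

# grand average across electrodes: baseline mean stays 0 by linearity
grand_average = traces.mean(axis=0)
```

So if the final baseline deviates from 0, the deviation usually traces back to the per-electrode correction mode rather than to a missing second correction step.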

I would really appreciate any insights or recommendations on the following:

  • Which time-frequency baseline correction method is most suitable for different pipelines/stages?
  • Is it preferable to perform time-frequency analysis on individual epochs first and then average, or should epochs be averaged before computing time-frequency representations?
  • Should baseline correction be reapplied after electrode-level averaging, or would this introduce bias?

Thanks in advance for any suggestions or shared experiences!

All the baseline correction methods implemented in MNE have been used in publications, and you can read the methods publications about the pros and cons of each. In the end, it’s a mostly empirical question that has been studied fairly well. I’d recommend the Frontiers paper “Single-Trial Normalization for Event-Related Spectral Decomposition Reduces Sensitivity to Noisy Trials”.

Time-frequency decomposition is typically done before averaging.

I’m not sure what the motivation for a second electrode-level baseline correction is; usually this variance would be accounted for in the statistics, e.g. an ANOVA.


Thank you, Alex!
Regarding electrode-level baseline correction, I’m concerned that different electrodes might have varying baselines. I’m not sure whether I should manually adjust for this.

Hello, I am also performing time-frequency analysis on EEG data, and I have noticed that even after baseline correction (regardless of the method used, such as percentage-based correction), the results still deviate from zero at baseline, especially in the gamma frequency band. Have you encountered this issue?

Hi,

If baseline correction boils down to computing an average value for the baseline period, then it’s normal that after subtracting it you still get fluctuations unless all time points in the baseline period have the same value.
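
A minimal NumPy illustration of that point — subtracting the baseline-period mean forces the *mean* of the baseline to 0, but the sample-to-sample fluctuation around 0 remains (synthetic numbers for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
# a noisy baseline segment with mean ~5 and spread ~0.3
baseline = rng.normal(loc=5.0, scale=0.3, size=100)

corrected = baseline - baseline.mean()
# corrected.mean() is ~0, but corrected.std() is still ~0.3
```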

Thank you! Should I use the average baseline of all trials for correction, or should I correct each trial with its own baseline and then average them? Which approach is more appropriate?
The second method, compared to the first, results in the baseline being closer to 0, but it seems to introduce greater variance.

I’d say usually baseline correction is done per trial. It depends on what you are trying to do though I guess.

Got it, thank you very much for your help.

Most of the studies I’ve read, and the methods article Alex recommended above, suggest doing the correction at the trial level first and then averaging.

I applied trial-level baseline correction to individual subjects’ SEEG TFRs (Morlet wavelets for the TFR) first and then averaged them. Afterward, I averaged the electrode data from the same ROI across all subjects.

However, the results still look strange. When using zlogratio, the baseline is completely off from 0 (see below).

When using z-score or other methods like mean, the baseline looks normal, though the post-stimulus data still appears odd (see the example below; the TFR frequency parameters are slightly different, but the tendency is the same with the previous parameters). I’m not sure if this is normal.

Do you have any suggestions? Thank you!