Understanding changes in source reconstruction based on the data used for covariance

Hi everyone,

I’m performing source reconstruction on my EEG data (eLORETA), but I’ve noticed that the results vary considerably depending on how I calculate the covariance from baseline data. I’m struggling to decide which approach is “correct,” so I’d appreciate any guidance.

Study design:

  • I acquired 3 minutes of baseline EEG before stimulation and another 3 minutes of “baseline” after a waiting period post-stimulation.

  • I have 5 different stimulation conditions. For each condition, I apply dual-site tACS (1 min), immediately followed by 1 min of resting-state EEG recording. This stimulation + recording cycle is repeated 6 times per condition. We only analyse the EEG recorded after stimulation, because of the artifacts created by the dual-site tACS during stimulation.

Preprocessing:

  • The post-stimulation resting-state EEG and the baseline resting-state data were preprocessed and cleaned together.

  • At the electrode level, topoplots look as expected (we do not expect changes in alpha power due to the stimulation, so alpha power should look as it does at rest): alpha power is primarily occipital/parietal, consistent with resting-state EEG.

Issue:

  • Source reconstruction results are inconsistent across participants, even though the topoplots (electrode level) look correct.

  • The results also depend heavily on the method used to compute the covariance:

Covariance approaches I tried:

  1. Using all 30 min of baseline (all conditions combined) with epochs of 3 min.

  2. Same 30 min, but epochs of 1 min.

  3. Using all 30 min as a single epoch.

  4. Using only the 6 min baseline per condition with epochs of 3 minutes.

  5. Using only the first 3 min of baseline per condition, for a total of 15 minutes with epochs of 3 minutes.

All methods produce substantially different source reconstructions for the same data.

Source reconstruction parameters:

  • I applied source reconstruction either to the full 1-min post-stimulation epochs, or subdivided them into 20-s segments before computing the source reconstruction.

  • Sampling rate: 500 Hz.

  • Results differ depending on whether I use 20-s or 1-min epochs, even for the same data.

I have added two figures showing the differences between two covariance calculation methods for the same participant. The figures show the average alpha power over the 6 minutes of data I have (using either 1-min or 20-s epochs).

My questions:

  1. Why would the choice of baseline epoch length for the covariance calculation affect source reconstruction so much?

  2. Is there a recommended approach for calculating the covariance (e.g., epoch length, amount of data) when performing source reconstruction with MNE in this type of design?

  3. Does it make sense to set the source reconstruction epoch length independently of the baseline epoch length? And why do the results change depending on whether the source reconstruction is computed per 20-s segment or per 1-min epoch, when all time points are averaged afterwards?

Any insights, suggestions, or references would be greatly appreciated.

This figure shows the time-averaged alpha power for one participant when using 1-min baseline epochs for the covariance (30 minutes in total) and computing the source reconstruction for each 1-min epoch.

This figure shows the time-averaged alpha power for the same participant when using 3-min baseline epochs for the covariance (30 minutes in total) and computing the source reconstruction for each 20-s epoch.