Dear MNE Users,
I am doing source localization with resting-state EEG and have noticed that the source time courses in my data often show a few very large peaks. Below is an example of a source estimate and its root-mean-square (RMS) time course, with the red horizontal line marking a threshold of 18 median absolute deviations (MAD). In the zoomed-in panels you can see that the peaks typically last ~200-650 ms. These source-space peak latencies sometimes align with small residual artifacts in sensor space. I’m wondering whether these peaks could be due to residual artifacts being amplified in source space, or whether there are other issues I should be considering.
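For reference, this is roughly how I compute the RMS trace and the 18-MAD threshold (a minimal numpy sketch; `stc_data` stands in for the `.data` array of a source estimate, and the toy data below are simulated, not my recordings):

```python
import numpy as np

def mad_threshold_peaks(stc_data, n_mads=18.0):
    """Flag time points where the RMS across source vertices exceeds a
    robust threshold of `n_mads` median absolute deviations (MAD) above
    the median RMS.  `stc_data` has shape (n_vertices, n_times)."""
    rms = np.sqrt(np.mean(stc_data ** 2, axis=0))   # RMS over vertices
    med = np.median(rms)
    mad = np.median(np.abs(rms - med))              # robust spread estimate
    thr = med + n_mads * mad
    return rms, thr, np.flatnonzero(rms > thr)

# Toy example: 500 vertices, 2 s at 1000 Hz, with one injected 300 ms
# transient that affects all vertices (simulated data for illustration)
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1e-9, size=(500, 2000))
data[:, 900:1200] += 5e-8                            # broad transient
rms, thr, peak_samples = mad_threshold_peaks(data)
```

Because the threshold is based on the median and MAD rather than the mean and standard deviation, a handful of large transients barely moves it, which is why even a few peaks per file stand out so clearly.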
I’ve tried varying the regularization parameter (lambda2) and the distributed inverse method (eLORETA, sLORETA, dSPM, MNE) to help rule out inadequate regularization or method-specific issues, and the peaks look the same in every case. At the vertex level, a large proportion of vertices (often all of them) are usually involved in creating these peaks.
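In case it’s useful, this is the kind of check I used to confirm that the peaks line up across methods: detect the suprathreshold samples in each method’s RMS trace and compute their overlap. A minimal numpy sketch; the two traces below are simulated stand-ins for per-method RMS time courses (in practice each would come from applying the corresponding inverse operator to the same raw data):

```python
import numpy as np

def suprathreshold_samples(rms, n_mads=18.0):
    """Indices where an RMS trace exceeds median + n_mads * MAD."""
    med = np.median(rms)
    mad = np.median(np.abs(rms - med))
    return set(np.flatnonzero(rms > med + n_mads * mad))

def peak_overlap(rms_a, rms_b, n_mads=18.0):
    """Jaccard overlap of suprathreshold samples between two RMS traces;
    1.0 means the peaks occur at exactly the same latencies."""
    a = suprathreshold_samples(rms_a, n_mads)
    b = suprathreshold_samples(rms_b, n_mads)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Simulated stand-ins for two methods whose traces share one transient
rng = np.random.default_rng(1)
base = rng.normal(0.0, 1.0, 2000)
transient = np.zeros(2000)
transient[900:1200] = 50.0
rms_method_a = np.abs(base + transient)         # e.g. an eLORETA trace
rms_method_b = np.abs(0.8 * base + transient)   # e.g. a dSPM trace
```

An overlap near 1.0 across all method pairs is what made me think the peaks reflect something in the data itself rather than a quirk of one inverse solution.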
Have others working with resting-state EEG run into this issue? If these peaks in source space are due to residual eye blinks or similar artifacts, I’m wondering why there would only be a few peaks per file and why all vertices tend to be affected.
Thank you in advance for your help,
Isabella
