Bad channel detection before tSSS

Hello!
The MNE implementation of SSS / Maxwell filtering provides, among other things, basic bad channel detection (find_bad_channels_maxwell()).

In the MNE tutorial on SSS and Maxwell filtering, they specifically warn that it is critical to mark bad channels in raw.info['bads'] before calling maxwell_filter(). In line with that warning, the tutorial runs find_bad_channels_maxwell() and adds the detected channels to raw.info['bads'] before calling maxwell_filter().
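For reference, this is roughly the order of operations I mean (the file name and the st_duration value below are just placeholders from my own script):

```python
import mne

# Placeholder path -- substitute your own recording
raw = mne.io.read_raw_fif("my_raw.fif", preload=True)

# Detect noisy and flat channels *before* any Maxwell filtering
noisy_chs, flat_chs = mne.preprocessing.find_bad_channels_maxwell(raw)

# Mark them so SSS does not use them during reconstruction
raw.info["bads"].extend(noisy_chs + flat_chs)

# Only now apply Maxwell filtering (tSSS, since st_duration is set);
# in practice you would also pass the same calibration and cross_talk
# files to both calls
raw_sss = mne.preprocessing.maxwell_filter(raw, st_duration=10.0)
```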

My question: how does the find_bad_channels_maxwell() algorithm work? How does it detect bad channels?

Thank you!

Did you read the Notes section of the function's docstring? It says:

This algorithm, for a given chunk of data:

  1. Runs SSS on the data, without removing external components.
  2. Excludes channels as flat that have had low variability (standard deviation < 0.01 fT or fT/cm in a 30 ms window) in the given or any previous chunk.
  3. For each channel k, computes the range or peak-to-peak d_k of the difference between the reconstructed and original data.
  4. Computes the average mu_d and standard deviation sigma_d of the differences (after scaling magnetometer data to roughly match the scale of the gradiometer data using mag_scale).
  5. Marks channels as bad for the chunk when d_k > mu_d + limit * sigma_d. Note that this expression can be easily transformed into (d_k - mu_d)/sigma_d > limit, which is equivalent to z(d_k) > limit, with z(d_k) being the standard or z-score of the difference.

Data are processed in chunks of the given duration, and channels that are bad for at least min_count chunks are returned.
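If it helps to see that written out, here is a deliberately stripped-down NumPy sketch of steps 3-5 plus the min_count aggregation (not MNE's actual implementation: it assumes the SSS reconstruction is already available, and it skips the flat-channel and mag_scale handling):

```python
import numpy as np

def toy_bad_channel_detection(orig, recon, sfreq, limit=7.0, duration=5.0,
                              min_count=5):
    """Toy illustration of the thresholding described in the Notes.

    orig, recon : arrays of shape (n_channels, n_times)
        Original data and its SSS reconstruction (computed internally by
        the real function; taken as given here).
    """
    n_channels, n_times = orig.shape
    chunk_len = int(round(duration * sfreq))
    bad_counts = np.zeros(n_channels, dtype=int)

    for start in range(0, n_times - chunk_len + 1, chunk_len):
        sl = slice(start, start + chunk_len)
        # Step 3: peak-to-peak of (reconstructed - original), per channel
        d = np.ptp(recon[:, sl] - orig[:, sl], axis=1)
        # Step 4: mean and std of those differences across channels
        mu_d, sigma_d = d.mean(), d.std()
        if sigma_d == 0:
            continue  # all channels behave identically in this chunk
        # Step 5: a channel is bad for this chunk if its z-score exceeds limit
        bad_counts += (d - mu_d) / sigma_d > limit

    # Only channels that were bad in at least `min_count` chunks are reported
    return np.where(bad_counts >= min_count)[0]
```

Because mu_d and sigma_d are computed across channels within each chunk, the criterion relies on most channels being good, so the few deviant ones stand out as outliers.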

Thank you!