Frequency-band-specific source reconstruction and instantaneous phases

Hi all,

I came across @britta-wstnr’s Hilbert beamformer method. When I read through the accompanying blog post documentation, I realized that the inverse operators are computed separately for each narrow band, and the resulting spatial filters are then applied to the analytic signal. I was wondering if it wouldn’t make more sense to

  1. Compute the inverse operator on the broadband signal of interest (e.g. 1-40 Hz), because this uses more of the original signal’s information to reconstruct the sources and does not distort the waveforms.
  2. Apply narrowband filters (e.g. 5-15 Hz and 30-40 Hz) to the reconstructed source time series.
  3. Use the analytic signal obtained via the Hilbert transform to extract instantaneous phases (this pipeline is sketched below).
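
To make the proposal concrete, here is a minimal sketch of what I have in mind, assuming an LCMV beamformer in MNE-Python; the file name, `forward`, and `raw_empty_room` are placeholders for your own data and forward model:

```python
import numpy as np
from scipy.signal import hilbert

import mne
from mne.beamformer import make_lcmv, apply_lcmv_raw

raw = mne.io.read_raw_fif("sample_raw.fif", preload=True)  # placeholder file

# 1. Broadband filter and source reconstruction (1-40 Hz).
raw.filter(l_freq=1.0, h_freq=40.0)
data_cov = mne.compute_raw_covariance(raw)
noise_cov = mne.compute_raw_covariance(raw_empty_room)  # placeholder recording
filters = make_lcmv(raw.info, forward, data_cov, reg=0.05,
                    noise_cov=noise_cov, pick_ori="max-power")
stc = apply_lcmv_raw(raw, filters)

# 2. Narrowband filter the reconstructed source time series (e.g. 5-15 Hz).
narrow = mne.filter.filter_data(stc.data, 1.0 / stc.tstep,
                                l_freq=5.0, h_freq=15.0)

# 3. Hilbert transform -> analytic signal -> instantaneous phase.
phase = np.angle(hilbert(narrow, axis=-1))
```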

I found that @britta-wstnr actually did calculate the analytic signal after source reconstruction in their more recent article, which agrees with the approach I thought would be reasonable. However, the inverse operators are still estimated on narrowband-filtered sensor-space signals. I believe this issue of narrowband filtering before or after source localization was inconclusively discussed in this very old post. I would agree with some of the answers there that narrowband filtering before the source reconstruction would severely distort the signal and thus result in very different source reconstructions. Is there an updated/newer consensus on which approach is better? I would appreciate it if @britta-wstnr could comment on why they chose to narrowband filter before source reconstruction.

Best wishes,
Dominik


I think the narrowband-filter-first approach is used to work around the issue of filtering over signal discontinuities in epoched data. Usually, by the time you are computing the covariance matrix and inverse solution, your data are already epoched.

So, if you narrowband filter the source estimates, and those data are epoched, then you risk creating filter artifacts when you filter over signal discontinuities (i.e. the edges of your epochs).

I know there are some filter designs that apply reflection padding at the epoch edges to try to minimize this, but I haven’t delved into this subject matter myself.
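
As a toy illustration (my own sketch, not from any particular paper), filtering a short epoch without padding produces transients at the edges that reflection padding largely avoids:

```python
import numpy as np
from scipy.signal import butter, filtfilt

sfreq = 1000.0
t = np.arange(0, 1.0, 1 / sfreq)
epoch = np.sin(2 * np.pi * 10.0 * t)  # a 1 s "epoch" of 10 Hz activity

# 4th-order Butterworth bandpass, 5-15 Hz.
b, a = butter(4, [5.0 / (sfreq / 2), 15.0 / (sfreq / 2)], btype="band")

no_pad = filtfilt(b, a, epoch, padlen=0)  # transients at both epoch edges
reflect = filtfilt(b, a, epoch)           # default odd-reflection padding

diff = np.abs(no_pad - reflect)
print(diff[:50].max(), diff[475:525].max())  # edge mismatch >> middle mismatch
```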


Hi Scott, I appreciate this practical point of view. I haven’t actually thought about this since I’m working with non-epoched data, but it makes sense. If I were working with epoched data, couldn’t I simply compute the noise and data covariances on the epoched data, then conduct the source reconstruction on the continuous raw data, narrowband filter, and finally re-epoch the data in source space? This approach would avoid narrowband filtering artifacts affecting the source reconstruction, and it would also avoid estimating and applying more than one inverse operator. On the other hand, it might still increase the computational burden because the inverse operator is applied to inter-trial intervals as well. I could see how the benefits might outweigh the disadvantages.
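
Roughly, in code (again a sketch, assuming LCMV in MNE-Python; `epochs`, `raw`, `forward`, and `event_samples` are placeholders):

```python
import mne
from mne.beamformer import make_lcmv, apply_lcmv_raw

# Covariances from the already-epoched data ...
noise_cov = mne.compute_covariance(epochs, tmin=None, tmax=0.0)  # baseline
data_cov = mne.compute_covariance(epochs, tmin=0.0, tmax=None)   # active

# ... but the spatial filter is applied to the continuous recording.
filters = make_lcmv(raw.info, forward, data_cov, reg=0.05,
                    noise_cov=noise_cov, pick_ori="max-power")
stc = apply_lcmv_raw(raw, filters)

# Narrowband filter in source space (no epoch edges to filter over) ...
sfreq = 1.0 / stc.tstep
band = mne.filter.filter_data(stc.data, sfreq, l_freq=5.0, h_freq=15.0)

# ... and only then re-epoch by slicing windows around the event samples.
win = int(0.5 * sfreq)
source_epochs = [band[:, ev - win:ev + win] for ev in event_samples]
```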
Going back to non-epoched data: if I compute the noise and data covariances on empty-room and continuous data, respectively, would there be a reason to follow @britta-wstnr’s approach? I still suspect I am overlooking something here.

Thanks for your help!

In short, I honestly don’t know. I’ve never tried estimating a noise covariance on epoched data and applying it to the raw continuous data, and for that matter I’ve never tried conducting source reconstruction on raw continuous data. I just had a look through the MNE tutorials and I couldn’t find an example there that used raw continuous data, either.

I assume that even with pristine data you might have signal discontinuities in the middle of your raw continuous data (break periods during or between tasks, or time periods that were removed due to artifacts), or you might have periods in the raw continuous data that are heavily noise-contaminated (EMG, etc.) that you’d want to remove. I don’t know what effect signal discontinuities in the middle of your data and/or noisy periods would have on the inverse solution, if any.
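
One thing I do know (though I haven’t tested it in this context, so take it as an assumption): you can at least keep such spans out of a covariance computed on continuous data by marking them as “bad” annotations:

```python
import mne

# Mark a break period and an EMG burst as "bad" (onsets/durations made up).
raw.set_annotations(mne.Annotations(onset=[120.0, 480.0],
                                    duration=[30.0, 5.0],
                                    description=["bad_break", "bad_EMG"]))

# Spans whose description starts with "bad" are then skipped here:
noise_cov = mne.compute_raw_covariance(raw, reject_by_annotation=True)
```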

In theory, I guess not. But again, I’ve never computed noise covariances on raw continuous data, so the same questions as in the point above would apply here, as in I’m not sure how they would affect the covariance estimation.

Sorry that I can’t be of more help. I’m unfortunately not familiar enough with the math of it all to make an informed guess, so I’ve left you with more questions than answers :wink:


@Dominik - the posts that you mention discuss MNE inverse solutions, and their argument for not prefiltering before computing the MNE inverse is the normality assumption: if you filter too heavily, you can potentially violate this assumption of the inverse model. I don’t think DICS or LCMV have this limitation, so you can narrowband filter the data before creating your spatial filter.

Regarding the effect of narrowband filtering on your data - this paper addresses some of it: M. J. Brookes et al., NeuroImage 39 (2008), 1788–1802. With narrowband filtering, you might need more data to stabilize your covariance matrix, because the bandpass reduces the number of “useful samples”. The paper states that wideband filtering gives you a more accurate localization.
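
You can see the “useful samples” point in a quick toy check (my own sketch, not from the paper): for the same recording length, a narrow bandpass leaves the channel covariance much worse conditioned:

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
sfreq, n_chan, n_samp = 1000.0, 64, 10_000
x = rng.standard_normal((n_chan, n_samp))  # stand-in for sensor data

def cov_condition(l_freq, h_freq):
    """Condition number of the channel covariance after a bandpass."""
    b, a = butter(4, [l_freq / (sfreq / 2), h_freq / (sfreq / 2)], btype="band")
    return np.linalg.cond(np.cov(filtfilt(b, a, x, axis=-1)))

print(cov_condition(1.0, 40.0))  # broadband: modest condition number
print(cov_condition(8.0, 12.0))  # narrowband: orders of magnitude larger
```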

Handwavy/potentially erroneous explanation ahead: the beamformer filter is adapted based on the covariance matrix. If you use wideband filtering, your signals will be dominated by low-frequency data because of the brain’s 1/f power spectrum, and your covariance matrix will mostly represent the low-frequency signals. So your beamformer may tune itself to localize the low-frequency activity (even if wideband), and post-filtering after localization may mislocalize or add noise to the high-frequency activity. If you pre-filter before creating your beamformer, your covariance matrix will better reflect the data you are trying to localize, and your beamformer may reject more noise and be more accurate. It would be great if others could confirm this, but I believe this is true.
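
For reference, the covariance dependence is explicit in the standard LCMV weights (Van Veen et al., 1997), with data covariance C and leadfield l(r) for a source at location r:

```latex
w(r) = \frac{C^{-1}\, l(r)}{l(r)^{\top}\, C^{-1}\, l(r)}
```

Whatever dominates C (here, low-frequency power) shapes the weights.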

-Jeff


Hi @jstout211, thanks for your thoughts. Concerning the discussion I referenced, I thought that narrowband filtering could induce spurious temporal correlations, which I believe do matter for the LCMV beamformer. Aren’t correlated sources suppressed with this method?

Awesome, thanks for sharing this study; it seems to confirm what I suspected about using more information / “useful samples”.

Great, your “handwavy” explanation makes sense. I guess I wrongly assumed that the different frequencies would be projected to source space somewhat independently. This does not seem to be the case, and Dalal et al. (2008) actually explain and show that low-frequency content biases the beamformer spatial filters in broadband or unfiltered signals. I guess this is indeed the answer to why @britta-wstnr narrowband filtered the signals at the sensor level for the Hilbert beamformer method.

At first sight the two referenced studies seem to contradict each other, but the one by Brookes et al. (2008) concludes:

In general, the higher the bandwidth, the more accurate the final result will be. However, in many cases large bandwidths are undesirable, since they lead to increased levels of random noise in the signal, and therefore, from this point of view, bandwidths should be kept to a minimum in order to maximise the signal to noise ratio.

I guess in the end it’s a trade-off between the bias towards high-power components of the signal and the amount of information used for source reconstruction. Additionally, the band should be as narrow as possible for the instantaneous phase obtained from the Hilbert transform to be meaningful.

Thanks everyone for the discussion, I believe this sufficiently addresses my concerns.


Hello!

Sorry for being so very late to this beamformer party - I was on vacation and am only now catching up.

Indeed, the reason to filter before beamforming is to gear the spatial filter towards the frequency band of interest. Note that we used this approach with rather broad frequency bands (usually 15-20 Hz bandwidth). This makes a lot of sense for high-frequency activity (above around 45 Hz), but probably not so much when looking at low-frequency stuff, e.g. alpha.

Also note: we have not thought about practicalities such as combining sensor types yet - pre-whitening might pose problems with the approach!

Hope this helps,
Britta
