Shrinkage applied in make_lcmv and make_dics

I have had very good experiences applying make_lcmv and make_dics to MEGIN data after using SSS and ICA (resulting in a rank of 69). The algorithms provide stable and interpretable results.

Thanks for providing those! I do, however, have a question on what exactly is done in these algorithms when handling the low-rank covariance matrices.

I understand that regularization by diagonal loading is not sufficient given such a large rank deficiency, and that shrinkage is applied by the algorithms (as suggested by Engemann and Gramfort 2015)?

My question is: how is the large rank deficiency taken into account prior to inversion? If shrinkage is applied, which procedure is used?

Any help on this would be most appreciated!


filters = make_dics(info, fwd, csd_common.mean(), noise_csd=csd_noise.mean(),
                    reg=0.05, pick_ori='max-power', reduce_rank=True,
                    real_filter=True, rank=69, depth=0)

hi @olejen

did you have a look at this recent paper:

we tried to discuss such issues and solutions there, including what MNE does.

my way of explaining it is that we use non-square whitening matrices, so the beamformer
is effectively applied in a virtual channel space of size 69.
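A minimal NumPy sketch of the idea (illustrative only, not the actual MNE implementation; all names and the synthetic covariance are assumptions): the whitener is built from the non-zero eigencomponents of the noise covariance, giving a non-square (69 x 306) matrix that maps sensor data into the 69-dimensional virtual channel space.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, rank = 306, 69

# build a synthetic rank-deficient noise covariance (e.g., after SSS)
A = rng.standard_normal((n_ch, rank))
noise_cov = A @ A.T

# eigendecomposition; eigh returns ascending order, so reverse it
eigvals, eigvecs = np.linalg.eigh(noise_cov)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

# non-square whitener: (rank x n_ch), keeping only the 69 non-zero components
W = eigvecs[:, :rank].T / np.sqrt(eigvals[:rank])[:, np.newaxis]

# the whitened noise covariance is the identity in the virtual channel space
C_white = W @ noise_cov @ W.T
print(np.allclose(C_white, np.eye(rank)))  # True
```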

let me know if you have more questions

cc @britta-wstnr @wmvanvliet


Hi Alex,

Thanks for your quick response. Yes, I read Britta's paper with great interest but was confused, as Tikhonov regularization, shrinkage, and truncation of the pseudo-inverse are all mentioned.

I now understand that it is the truncated pseudo-inverse which really solves the problem of the rank deficiency introduced by SSS. In fact, in my experience the additional regularization does not do much when working on SSS data (I suspect because the eigenvalues associated with the 69 components are already high, so the diagonal loading does not change much). As such, it might make sense to skip the regularization (reg=0) when SSS/truncation is used, for the sake of simplicity?
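A quick NumPy sketch of why the loading barely matters here (synthetic data; the loading formula is an illustrative convention, not a claim about MNE internals): when the 69 retained eigenvalues are large, the diagonal loading term is small relative to each of them.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, rank = 306, 69

# toy rank-deficient covariance, standing in for a post-SSS data covariance
A = rng.standard_normal((n_ch, rank))
cov = A @ A.T

# the 69 non-zero eigenvalues (eigvalsh returns ascending order)
eigvals = np.linalg.eigvalsh(cov)[::-1][:rank]

# diagonal loading with reg=0.05, scaled by mean sensor power
# (a common convention; illustrative only)
loading = 0.05 * np.trace(cov) / n_ch

# relative change each retained eigenvalue sees from the loading
rel_change = loading / eigvals
print(f"max relative change: {rel_change.max():.3f}")  # small
```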

I think this is quite an important issue, as the Jaiswal 2020 NeuroImage paper was the result of a workshop established to address how to do beamforming on rank-deficient data after SSS. As I read it, the paper does not make the truncated pseudo-inverse explicit (although from the MNE code examples I can see that it is done).

I will clarify and document the issue of the truncated pseudo-inverse in our pipeline.

Thanks again for your help and efforts!


Hi Ole!

Sorry for chiming in so late; I was on my end-of-the-year leave. Indeed, in the beamformer code the covariance matrix is truncated by default.
Regarding your thought on regularization: I have anecdotally had good experiences with setting the regularization to zero when the truncation is done (also when choosing this option in FieldTrip, for example), so I agree that this can make sense (and could, for example, also prevent a further decrease in resolution in the output!).

All the best,

Hi Britta,
Thanks a lot - most useful! I wonder exactly when the truncation is done, i.e. at which stage in the algorithms? E.g. after pre-whitening, or as part of the pre-whitening?
All the best,

Hi Ole,

The truncation is done as part of the inversion of the covariance matrix, so after pre-whitening.
I forgot to mention in my last reply that the truncation behavior can be controlled via the rank parameter in the beamformer functions (e.g., make_lcmv). By default, the rank is inferred from the info object passed to the function.

Hope this helps!

Hi Britta,

I looked into what FieldTrip does for the truncated inverse:

[U, S, V] = svd(X, 0);
s = diag(S);
Y = V(:, 1:kappa) * diag(1./s(1:kappa)) * U(:, 1:kappa)';

where kappa is the rank. I take it MNE-Python does the same?
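For comparison, here is a NumPy translation of that truncated pseudo-inverse (a sketch on synthetic data; X and kappa are illustrative), checked against numpy.linalg.pinv with an explicit cutoff:

```python
import numpy as np

rng = np.random.default_rng(2)
n_ch, kappa = 20, 8

# synthetic rank-deficient symmetric matrix, like a covariance after SSS
A = rng.standard_normal((n_ch, kappa))
X = A @ A.T

# NumPy equivalent of the FieldTrip snippet: keep the kappa largest
# singular values and invert only those
U, s, Vt = np.linalg.svd(X)  # X = U @ diag(s) @ Vt, s in descending order
Y = Vt[:kappa].T @ np.diag(1.0 / s[:kappa]) @ U[:, :kappa].T

# matches NumPy's pseudo-inverse when small singular values are cut off
print(np.allclose(Y, np.linalg.pinv(X, rcond=1e-10)))  # True
```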

In MNE-Python I found it better to use mne.compute_rank rather than relying on 'info' (as info does not take the rank reduction from ICA into account).

Here is what we converged on: FLUX/DICS.ipynb at main · Neuronal-Oscillations/FLUX · GitHub

Thanks for your input - much appreciated!