compute_covariance

Hi,
This is probably a dumb question, but to assemble the inverse operator in
the older versions of MNE-Python one had to first compute the noise
covariance and then regularize it manually by assigning mag, grad, and eeg
regularisation factors; the defaults are 0.1 here:
<http://martinos.org/mne/stable/generated/mne.cov.regularize.html?highlight=regularize#mne.cov.regularize>

At the moment the automated paradigm computes the noise covariance and
regularizes it in the same framework by cross-validation. The defaults of
the regularisation factors for the empirical approach and fixed diagonal
are much smaller than the old ones (0.01, 0.01, and 0.0 for grad, mag, and
eeg respectively).

I had a look at the new cov.py code to see if I could spot some scaling
that would explain why the new regularisation uses smaller defaults, but it
seems that both the old and new versions use the same "regularize" function
for empirical and fixed diagonal.

I was wondering if you could please let me know where I am making a
mistake? And if I'm not making a mistake, why are the regularisation
factors so different in the old and new versions?

Many thanks,
Rezvan

hi,

> This is probably a dummy question but to assemble the inverse operator in
> the older versions of mne_python, one had to first compute the noise
> covariance then regularize it manually by assigning mag grad and eeg
> regularisation factors. the defaults are 0.1 here.

Correct, and you can still do this. However, when working on this with
Denis we realized that the 0.1 value is typically too high: it leads to
smaller whitened data and, for example, weaker dSPM/sLORETA values.
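The effect described above can be sketched with a toy NumPy example (an illustration of the idea, not MNE code): diagonal loading inflates the noise covariance, so the whitener C^{-1/2} shrinks, and with it the whitened amplitudes that dSPM/sLORETA values build on.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch = 10
C = np.eye(n_ch)  # toy noise covariance with unit variances

def whitener(C):
    """C^{-1/2} via eigendecomposition."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

signal = rng.standard_normal(n_ch)  # toy evoked vector

# Diagonal loading adds reg * mean(diag(C)) to the diagonal; the larger
# reg is, the more the whitener shrinks the whitened signal
norms = {}
for reg in (0.0, 0.01, 0.1):
    C_reg = C + reg * np.mean(np.diag(C)) * np.eye(n_ch)
    norms[reg] = np.linalg.norm(whitener(C_reg) @ signal)
print(norms)  # the norm decreases as reg grows
```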

> At the moment the automated paradigm computes the noise covariance and
> regularizes in the same framework by cross-validation. the defaults of
> regularisation factors for empirical approach and fixed diagonal are much
> less than the old ones (0.01, 0.01 and 0.0 for grad, mag and eeg
> respectively).

When doing cross-validation the parameters are optimized for your data.

You can use the evoked.plot_white method to check your whitening quality.
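The principle behind that check can be mimicked with NumPy (a sketch, not the MNE implementation): if the noise covariance estimate is good, whitening the noise with C^{-1/2} should leave every channel with variance close to 1, which is what evoked.plot_white visualizes via the whitened global field power.

```python
import numpy as np

rng = np.random.default_rng(42)
n_ch, n_samp = 8, 20000

# Simulate sensor noise with a known random covariance
A = rng.standard_normal((n_ch, n_ch))
C_true = A @ A.T / n_ch
noise = rng.multivariate_normal(np.zeros(n_ch), C_true, size=n_samp).T

# Estimate the noise covariance from the data (the "empirical" approach)
C_hat = noise @ noise.T / n_samp

# Whiten with the estimate: a good covariance leaves each whitened
# channel with variance ~1
vals, vecs = np.linalg.eigh(C_hat)
W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
white = W @ noise
print(white.var(axis=1))  # all close to 1.0
```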

> I had a look at the new cov.py code to see if can see some scalings that
> could explain why the new regularisation uses smaller defaults but seems
> that both the old and new versions use the same "regularize" function for
> empirical and fixed diagonal.
>
> I was wondering if you could please let me know where am I making a mistake?
> and if I'm not making a mistake then why the regularisation factors are so
> different in the old and new versions?

Check with evoked.plot_white the quality of your noise covariances
(old and new ones).

You can share the plots if you need feedback.

HTH
Alex

Sorry for coming late, and hi Rezvan,

> hi,

> This is probably a dummy question but to assemble the inverse operator in
> the older versions of mne_python, one had to first compute the noise
> covariance then regularize it manually by assigning mag grad and eeg
> regularisation factors. the defaults are 0.1 here.

> correct. You can still do this. Although when working on this with
> Denis we realized
> that the 0.1 value is typically too high. This leads to smaller
> whitened data and for example weaker dSPM/sLORETA values.

> At the moment the automated paradigm computes the noise covariance and
> regularizes in the same framework by cross-validation. the defaults of
> regularisation factors for empirical approach and fixed diagonal are much
> less than the old ones (0.01, 0.01 and 0.0 for grad, mag and eeg
> respectively).

> when doing cross-validation the parameters are optimized for your data.
>
> you can use the evoked.plot_white method to check your whitening quality.

The whole idea is that here we cannot know in advance what is best. You
improve your chance of getting closer to the optimal solution by using
different regularization strategies. I would recommend you also use
'shrunk': it will in fact use a grid search to learn the best
regularization values, and the result is then compared against the other
solutions. With 'diagonal_fixed' no tuning is involved, and the same holds
for 'empirical'. A minimal and fast way of optimizing regularization would
be ['empirical', 'diagonal_fixed', 'shrunk'], where you pass your guess for
diagonal_fixed to the method_params, e.g.

method_params=dict(diagonal_fixed=dict(grad=0.01, mag=0.01, eeg=0.0))

This way you can compare no regularization, your guess, and the grid
search. The function will return the covariance estimator that best fits
unseen data. If you set `return_estimators` to True you will get a list
ordered by fit. You can then pass these to evoked.plot_white and inspect
model violations for your estimators, as Alex pointed out.
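To make concrete what a diagonal_fixed factor does, here is a NumPy sketch of the idea behind this kind of regularization (an illustration, not MNE's actual code): for each channel type, reg times the mean variance of that type is added to the corresponding diagonal entries, so 0.1 loads the diagonal ten times more strongly than 0.01.

```python
import numpy as np

rng = np.random.default_rng(1)
n_grad, n_eeg = 4, 3
# Toy diagonal "covariance" with two channel types of different scales
C = np.diag(np.r_[rng.uniform(1, 2, n_grad), rng.uniform(10, 20, n_eeg)])

def diag_load(C, picks, reg):
    """Add reg * mean(variance over picks) to those diagonal entries."""
    C = C.copy()
    C[picks, picks] += reg * np.mean(np.diag(C)[picks])
    return C

grad = np.arange(n_grad)
C_old = diag_load(C, grad, reg=0.1)   # old manual default
C_new = diag_load(C, grad, reg=0.01)  # new diagonal_fixed default
```

Only the picked channel type is loaded; the eeg block is left untouched here, which mirrors how each type gets its own factor.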

> I had a look at the new cov.py code to see if can see some scalings that
> could explain why the new regularisation uses smaller defaults but seems
> that both the old and new versions use the same "regularize" function for
> empirical and fixed diagonal.
>
> I was wondering if you could please let me know where am I making a
> mistake?

No mistake.

> and if I'm not making a mistake then why the regularisation factors are
> so different in the old and new versions?

I hope it's clearer now. These defaults have never been very well informed;
they are just-so guesses that yield OK results on the data we have seen.
They are not authoritative by any means.

Cheers,
Denis

> check with evoked.plot_white the quality of your noise covariances
> (old and new ones).
>
> you can share the plots if you need feedback.
>
> HTH
> Alex

Thanks a lot Alex and Denis, very helpful.
Denis, yes, I was using the three options as you suggested, but then it
occurred to me that my old guessed regularisation somehow performs better
for my data (I'm using plain MNE for source estimation). I then checked the
code, saw that the defaults for fixed diagonal are different, and thought
this might be the reason.
I guess smaller values are usually better, but I will check the whitened
data and the returned estimators to see why that may not be the case for my
data.

Many thanks,
Rezvan

Hi Rezvan, I'm not sure I brought across the point entirely.

By using the `method_params` in the case of diagonal_fixed you can use your
preferred regularization parameters (see the little example in my email)
and skip the defaults. It is equivalent to calling the regularize function
with the old parameters, except that you can expose the resulting
covariance estimate to quantitative comparisons.
If you then use other estimators, e.g. by passing method=['empirical',
'diagonal_fixed', 'shrunk'], a cross-validation will tell you which one
best captures the statistics of new data. As the negative log-likelihood of
the covariance is used for that, it is a direct quantitative comparison
that in principle deserves more trust than any visual comparison. It may
then turn out that your preferred guess is OK for some subjects but not for
others.
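That selection criterion can be illustrated with a small NumPy sketch (an assumed toy setup, not MNE's internals): score each candidate covariance by the Gaussian negative log-likelihood of held-out data, and prefer the lowest score, which is in spirit how the estimators get ranked.

```python
import numpy as np

rng = np.random.default_rng(7)
n_ch, n_train, n_test = 6, 5000, 5000

A = rng.standard_normal((n_ch, n_ch))
C_true = A @ A.T / n_ch
train = rng.multivariate_normal(np.zeros(n_ch), C_true, size=n_train).T
heldout = rng.multivariate_normal(np.zeros(n_ch), C_true, size=n_test).T

def neg_loglik(C, X):
    """Average Gaussian negative log-likelihood of the columns of X under N(0, C)."""
    p, n = X.shape
    S = X @ X.T / n  # sample covariance of the held-out data
    _, logdet = np.linalg.slogdet(C)
    return 0.5 * (p * np.log(2 * np.pi) + logdet + np.trace(np.linalg.solve(C, S)))

C_emp = train @ train.T / n_train                              # plain empirical
C_over = C_emp + 0.5 * np.mean(np.diag(C_emp)) * np.eye(n_ch)  # heavy loading

scores = {"empirical": neg_loglik(C_emp, heldout),
          "over_regularized": neg_loglik(C_over, heldout)}
print(scores)  # the lower score fits unseen data better
```

With plenty of training samples the empirical estimate wins here; with few samples the regularized one typically would, which is exactly why the comparison is worth running per dataset.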

I hope it's getting clearer,
Cheers, Denis

Sorry Denis, my email was unclear; your point was clear and I got it :slight_smile:
I meant that before you added the new regularisation scheme to MNE-Python I
had used regularisation parameters of 0.1; then you introduced the new
method and I used it with the new defaults, and noticed the old defaults
were probably working better. But I thought to email first to make sure
that you had not used some sort of scaling in the new function that I had
missed, which would make using regularisation factors of around 0.1 with
the new function inappropriate. Your and Alex's explanations indeed made
things clear, thanks a lot.

OK, I believe you :slight_smile:
Just to be safe, use the comparison functions that you get for free. You
can then compare your guess against no regularization vs. auto-regularization.
Happy MNE-ing.

Denis