Sorry for coming in late, and hi Rezvan,
> This is probably a dummy question, but to assemble the inverse operator in
> the older versions of mne_python one had to first compute the noise
> covariance and then regularize it manually by assigning mag, grad, and eeg
> regularisation factors. The defaults are 0.1 here.
Correct, and you can still do this. However, when working on this with
Denis we realized that the 0.1 value is typically too high.
Over-regularization shrinks the whitened data and gives, for example,
weaker dSPM/sLORETA values.
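For intuition, what this per-channel-type regularization amounts to is diagonal loading: for each sensor type, a fraction of the average sensor variance is added to the diagonal of that covariance block. A minimal numpy sketch (the actual scaling details in mne.cov.regularize differ; this only shows why a factor of 0.1 can be too much):

```python
import numpy as np

def regularize_block(cov, factor):
    """Diagonal loading for one sensor-type block: add `factor` times
    the average sensor variance to the diagonal (mne.cov.regularize
    does roughly this per channel type; details differ)."""
    mu = np.mean(np.diag(cov))
    return cov + factor * mu * np.eye(cov.shape[0])

rng = np.random.default_rng(0)
x = rng.standard_normal((50, 10))   # 50 samples, 10 channels
cov = x.T @ x / len(x)              # empirical covariance

cov_old = regularize_block(cov, 0.1)    # old 0.1 default
cov_new = regularize_block(cov, 0.01)   # weaker loading

# Heavier loading shifts all eigenvalues up by factor * mean variance,
# so the whitener cov**(-1/2) shrinks the whitened data more.
print(np.linalg.eigvalsh(cov_old).min() - np.linalg.eigvalsh(cov_new).min())
```

The larger the loading, the flatter (and larger) the covariance spectrum, and the smaller the whitened signal — which is exactly the too-weak dSPM/sLORETA effect described above.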
> At the moment the automated paradigm computes the noise covariance and
> regularizes it in the same framework by cross-validation. The default
> regularisation factors for the empirical approach and fixed diagonal are
> much smaller than the old ones (0.01, 0.01 and 0.0 for grad, mag and eeg
> respectively).
When doing cross-validation the parameters are optimized for your data;
you can use the evoked.plot_white method to check your whitening quality.
The whole idea is that we cannot know in advance what is best. You
improve your chances of getting close to the optimal solution by trying
different regularization strategies. I would recommend also using
'shrunk': it runs a grid search to learn the best regularization values,
and the result is then compared against the other solutions. With
'diagonal_fixed' no tuning is involved, and the same holds for
'empirical'. A minimal and fast way of optimizing regularization would be
method=['empirical', 'diagonal_fixed', 'shrunk'], where you pass your
guess for diagonal_fixed via method_params, e.g.
method_params=dict(diagonal_fixed=dict(grad=0.01, mag=0.01, eeg=0.0))
This way you compare no regularization, your guess, and the grid search.
The function returns the covariance estimator that best fits unseen data.
If you set return_estimators=True you will instead get a list of
estimators ordered by fit. You can then pass these to evoked.plot_white
and inspect model violations for each estimator, as Alex pointed out.
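For intuition, the selection logic — fit each candidate on the training data and keep the one with the highest Gaussian log-likelihood on held-out data — can be sketched with plain numpy. This is an illustration of the principle only, not MNE's actual cross-validation code; the simulated data, the single inner split, and the shrinkage grid are all made up:

```python
import numpy as np

rng = np.random.default_rng(42)
n_ch = 8

# Simulated zero-mean "noise" with correlated channels.
a = rng.standard_normal((n_ch, n_ch))
true_cov = a @ a.T / n_ch + np.eye(n_ch)
chol = np.linalg.cholesky(true_cov)
train = (chol @ rng.standard_normal((n_ch, 40))).T   # training samples
test = (chol @ rng.standard_normal((n_ch, 200))).T   # held-out samples

def emp_cov(data):
    return data.T @ data / len(data)

def loglik(cov, data):
    """Average Gaussian log-likelihood of zero-mean data under cov."""
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', data, np.linalg.inv(cov), data).mean()
    return -0.5 * (logdet + quad + data.shape[1] * np.log(2 * np.pi))

def shrink(cov, s):
    """Blend the covariance with a scaled identity (shrinkage)."""
    return (1 - s) * cov + s * np.mean(np.diag(cov)) * np.eye(len(cov))

# 'empirical': no regularization at all.
empirical = emp_cov(train)

# 'diagonal_fixed': your fixed guess for the diagonal loading factor.
diag_fixed = empirical + 0.01 * np.mean(np.diag(empirical)) * np.eye(n_ch)

# 'shrunk': grid-search the shrinkage weight on an inner validation split.
fit, val = train[:20], train[20:]
best_s = max((0.01, 0.05, 0.1, 0.2, 0.4),
             key=lambda s: loglik(shrink(emp_cov(fit), s), val))
shrunk = shrink(empirical, best_s)

# Rank all candidates by their fit to unseen data -- the analogue of
# what return_estimators=True lets you inspect.
scores = {'empirical': loglik(empirical, test),
          'diagonal_fixed': loglik(diag_fixed, test),
          'shrunk': loglik(shrunk, test)}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```

In MNE itself this whole loop is a single mne.compute_covariance call with the method list and method_params shown above.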
> I had a look at the new cov.py code to see if I could find some scalings
> that could explain why the new regularisation uses smaller defaults, but
> it seems that both the old and new versions use the same "regularize"
> function for empirical and fixed diagonal.
>
> I was wondering if you could please let me know where I am making a
> mistake?
No mistake.
> And if I'm not making a mistake, then why are the regularisation factors
> so different in the old and new versions?
I hope it's clearer now. These defaults have never been very informed;
they are just-so guesses that yield OK results on the data we have seen.
They are not authoritative by any means.
Cheers,
Denis
Check the quality of your noise covariances (both old and new) with
evoked.plot_white. You can share the plots if you need feedback.
HTH
Alex
_______________________________________________
Mne_analysis mailing list
Mne_analysis at nmr.mgh.harvard.edu