Explained variance in EEG source space (template MRI)

I was following the instructions for building a forward operator using a template MRI (EEG forward operator with a template MRI — MNE 0.23.dev0 documentation).

I am currently investigating an experimental condition of a single subject across 4 sessions. When I looked at the explained variance values, I noticed strong fluctuations: on 2 out of 4 sessions it is ~2%, and on the other two sessions it is ~60%. What could have gone wrong here? What parameters could I adjust to get better results?

I used the following code for this:

import os.path as op

import mne
from mne.datasets import fetch_fsaverage

# Download fsaverage files
fs_dir = fetch_fsaverage(verbose=True)
subjects_dir = op.dirname(fs_dir)

# The files live in:
subject = 'fsaverage'
trans = 'fsaverage'  # MNE has a built-in fsaverage transformation
src = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif')
bem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif')

# after epoching
fwd = mne.make_forward_solution(raw_1020.info, trans=trans, src=src, bem=bem, eeg=True, mindist=5.0, n_jobs=1)
cov = mne.compute_covariance(epochs, method='auto')  
inv = mne.minimum_norm.make_inverse_operator(raw_1020.info, fwd, cov, loose=0.2, depth=0.8)
snr = 2
stc, res = mne.minimum_norm.apply_inverse(evoked, inv, lambda2=1. / snr ** 2,
                                          pick_ori=None, method='MNE',
                                          return_residual=True)
# the explained variance is reported in the log output

Platform: Windows-10-10.0.18362-SP0
mne: 0.22.dev0

Hi Nils, I suspect there’s a problem with the regularization here. Make sure to calculate the covariance based on the noise in your data, i.e. you would want to use the pre-stimulus baseline only:

cov = mne.compute_covariance(epochs, tmax=0, method='auto')  

By the way, MNE-Python 0.22 has since been released; I’d suggest you update if possible:

pip install -U mne

Best wishes,

–Richard

Hi Richard, I tried that (with different amounts of pre-stimulus baseline), but it doesn’t seem to have a big influence on the outcome — only a few % up or down.

cov = mne.compute_covariance(epochs, tmax=0, method='auto')

Thanks for hinting at the update.

Not sure if that makes a difference here, but I suggest you use the same info object everywhere:

info = evoked.info
fwd = make_forward_solution(info, ...)
inv = make_inverse_operator(info, ...)

Have you had a look at the diagnostic plots of the covariance? Could you share the SVD plots for each session?

https://mne.tools/stable/generated/mne.viz.plot_cov.html
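
Something like this should produce them (just a sketch, assuming the cov and epochs objects from your snippet above):

# plot the covariance matrix and the SVD spectrum of the covariance
fig_cov, fig_svd = mne.viz.plot_cov(cov, epochs.info, show_svd=True)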

This is the output for the four different days. The biggest difference between the good sessions (2, 3) and the bad sessions (1, 4) is the drop in noise levels at the higher eigenvalues.

For sessions 1 and 4, the determined rank seems to be incorrect. The dashed red vertical line should appear just where the signal starts to drop. The “step” we’re seeing in these two plots is also somewhat unusual. Did you process all sessions in exactly the same way? Are different electrodes marked as bad in each session? Did you run ICA and remove different numbers of components per session?

(All of this would be fine; I’m just trying to narrow down why the ranks are different. The next step, then, would be to get MNE to use the correct rank for each session in its calculations.)

Except for manual trial rejection, all sessions are preprocessed with the same script. The number of electrodes should be 61 in all cases.

We performed ICA, but I don’t know whether the number of removed components differed between sessions. I think that’s very likely, though.

Ok, good to know! Is the data referenced to average?

Could you try running, for each session

cov = mne.compute_covariance(epochs, method='auto', rank='info')

and re-create the covariance plots? Does anything change?

No, the plots still look the same.

Yes, all were re-referenced to average.

Ok, time for some harsher methods.

We leave out sessions 2 and 3 for now, as things appear to be working for them already. We focus on 1 and 4.

compute_covariance allows for the explicit specification of a rank. Could you try, maybe starting with session 4 (where the plot is much clearer IMHO), to find a rank value that places the vertical dashed line in the SVD plot just at the tip of the graph, right before it starts to drop off?

i.e., what you need to do is something like:

rank = {'eeg': 56}  # fix rank to 56
cov = mne.compute_covariance(epochs, method='auto', rank=rank)
mne.viz.plot_cov(...)

and keep adjusting the rank value until the SVD plot looks correct. And once you’ve gotten there, continue running your source analysis for this session, and see if it changes anything.

Indeed, the explained variance changes with the adjusted rank. For sessions 1 and 4, 56 seems to be a good value. Any idea how to automate this step?

Great! So the explained variance is in a reasonable range now?

Empirical rank estimation is extremely tricky, as it involves thresholding. You can use mne.compute_rank() to manually control the thresholding parameters, specifically, tol and tol_kind.

In any case, it remains imperative to always check the SVD plots to ensure the estimated rank actually matches the respective data.
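
For example, something along these lines (just a sketch; the tol value is a placeholder you would need to tune per session):

rank = mne.compute_rank(epochs, tol=1e-4, tol_kind='relative')  # inspect the estimate
print(rank)  # e.g. {'eeg': 56}
cov = mne.compute_covariance(epochs, tmax=0, method='auto', rank=rank)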

Roughly the same range as for the better sessions.

I’ll give that a try, thanks.

By the way: I saw that a rank of 56 would also give stable, good results for sessions 2 and 3. Would it be appropriate to use a rank that is lower than needed if the explained variance remains unchanged? Alternatively, one could also use the minimum rank over all sessions.

I suspect you would be at risk of losing a little bit of information if you assume a rank that is lower than the actual rank of the data; however, contrary to using a rank that is too high, it won’t totally break your analysis. @larsoner is more knowledgeable on this, so I’d rely on his judgment here :innocent:

Yes, by setting the rank too low for some data you throw away the lowest-variance components. By setting the rank too high, you include components that are numerically zero, which can blow up your result.

In your case if you’re going to use different covariances for each day, I would use different (correct) rank values for each day as well.
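
Something like this, for example (a sketch; all_epochs is a hypothetical dict mapping day names to Epochs, and the rank values are placeholders you would determine from the SVD plots):

# one manually verified rank per day, used consistently for that day's covariance
ranks = {'day1': 56, 'day2': 60, 'day3': 60, 'day4': 56}  # placeholder values
covs = {}
for day, epochs in all_epochs.items():
    covs[day] = mne.compute_covariance(epochs, tmax=0, method='auto',
                                       rank={'eeg': ranks[day]})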
