covariance rank and maxfiltering

Hi all,
I’m trying to work with Neuromag data that was previously processed with MaxFilter outside of MNE. I have SSP projectors created for EOG and ECG artifacts that are also applied (4 projectors for each).

Covariance is calculated something like

noiseCov = compute_covariance(epochsList, tmin=None, tmax=0, method='shrunk', return_estimators=False, verbose=True, n_jobs=1, projs=projList)

When I plot covariance, I get images similar to this with a much higher rank than expected:


If I do

sss_rank = mne.compute_rank(raw.pick(meg_picks), rank='info')
Computing rank from data with rank='info'

I get

MEG: rank 61 after 8 projectors applied to 306 channels

What is the recommended way to deal with this rank issue? I have reviewed some tutorials and these talk notes, but I thought I’d check in here.

Thanks for any help,
Megan

Hello, for MaxFiltered data, rank='info' is usually a good choice.

Empirical rank estimation works with (more or less) arbitrary cut-offs (basically, it needs to decide when an eigenvalue is large enough to be considered meaningful). If that cut-off is not well-chosen, your rank estimate will be incorrect.
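To see why, here is a toy NumPy sketch (synthetic data, not MNE code): the very same matrix gets two different rank estimates depending on an arbitrarily chosen tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate a 306-channel covariance with true rank 61 (as after MaxFilter),
# plus a tiny amount of noise on the diagonal
A = rng.standard_normal((306, 61))
cov = A @ A.T + 1e-6 * np.eye(306)

eigvals = np.linalg.eigvalsh(cov)[::-1]  # eigenvalues, descending

# "Empirical" rank = number of eigenvalues above some cut-off.
# Two plausible-looking cut-offs give very different answers:
print(np.sum(eigvals > eigvals[0] * 1e-10))  # 306 -- noise counted as signal
print(np.sum(eigvals > eigvals[0] * 1e-4))   # 61  -- the true rank
```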

Since MaxFilter adds information about the dimensionality reduction to the info object, you can rely on the values in info and don’t need to use empirical rank estimation.

Hence, the reported rank of 61 seems reasonable to me, and it also matches what we see in the figures you shared (the traces drop off at an eigenvalue index of about 60). We can also see that the graphs for gradiometers and magnetometers behave in a very similar fashion, which is what we’d expect after MaxFilter. The rank estimates displayed in the upper right corners, however, are not correct; here we see the issue with empirical rank estimation in action.

Best wishes,
Richard


Hi Richard,
Thanks for your help. I used this:

noiseCov = mne.compute_covariance(epochsList, rank='info', tmin=None, tmax=0, method='shrunk', return_estimators=False, verbose=True, n_jobs=1, projs=projList)

My next step for looking at this is the whitened evoked and GFP. But with the new rank, the whitened GFP looks "weird."

Original:

Rank='info':

Per this old thread, here is the info for my evoked data (it is bandpass filtered):

In [42]: evokeds['central'].info
Out[42]:
<Info | 26 non-empty values
acq_pars: ACQch001 110113 ACQch002 110112 ACQch003 110111 ACQch004 110122 …
bads: 9 items (MEG2032, MEG2323, MEG2343, MEG2312, MEG1632, MEG2341, …)
ch_names: MEG0113, MEG0112, MEG0111, MEG0122, MEG0123, MEG0121, MEG0132, …
chs: 204 Gradiometers, 102 Magnetometers
custom_ref_applied: False
description: Vectorview system at MRN
dev_head_t: MEG device → head transform
dig: 90 items (3 Cardinal, 4 HPI, 83 Extra)
events: 1 item (list)
experimenter: MEG (meg)
file_id: 4 items (dict)
highpass: 1.0 Hz
hpi_meas: 1 item (list)
hpi_results: 1 item (list)
hpi_subsystem: 2 items (dict)
line_freq: 60.0
lowpass: 30.0 Hz
maxshield: False
meas_date: 2019-06-06 13:27:32 UTC
meas_id: 4 items (dict)
nchan: 306
proc_history: 1 item (list)
proj_id: 1 item (ndarray)
proj_name: dquinn_navigate
projs: planar-998--0.500-0.500-PCA-01: on, planar-998-- …
sfreq: 1000.0 Hz
subject_info: 6 items (dict)

Anything else I should check to make sure this covariance is ok, or to figure out why Whitened GFP looks so high?

Thank you,
Megan

The results don’t look great but also not horrible. The baseline activity is irritating though. The high GFP after time point zero is fine.
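For intuition: whitening multiplies the data by the inverse matrix square root of the noise covariance, so if the covariance really describes the baseline noise, the whitened baseline GFP should hover around 1. A toy NumPy sketch (synthetic data, not your recordings):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_times = 10, 500
noise = rng.standard_normal((n_ch, n_times)) * 3.0  # "baseline" noise

cov = noise @ noise.T / n_times
# Inverse matrix square root of the covariance (the whitener)
eigvals, eigvecs = np.linalg.eigh(cov)
whitener = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T

white = whitener @ noise
gfp = np.mean(white ** 2, axis=0)  # whitened global field power
print(gfp.mean())  # ≈ 1 when the covariance matches the noise
```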

  • How many epochs do you have?

  • Were they baseline-corrected?

  • Why do you pass projs? Are the projectors not part of your epochs?

Thanks for your response.

  • How many epochs?
    179 in the evoked condition shown for whitening. There are 487 epochs from all the conditions in the whole raw data file, and this epochs list is what I used to create the covariance. Is that correct? Should I instead create covariance for each condition?

  • Were they baseline-corrected?
    Yes, but when I read the epochs object after saving, it doesn’t show this. Could this be a problem? The epochs were created long ago by an earlier version of mne and I have recently been reading them to work on source analysis. Any way to correct or check this?

I use the following for epochs creation:

BL = (-0.1, 0)
epochs[str(key) + '_epochs'] = mne.Epochs(raw, events, new_id, tmin, tmax, proj=True, picks=picks, baseline=BL, preload=True, reject=dict(grad=7000e-13, mag=7e-12))

When I read in the epochs, I see:

Read a total of 8 projection items:
planar-998--0.500-0.500-PCA-01 (1 x 204) active
planar-998--0.500-0.500-PCA-02 (1 x 204) active
axial-998--0.500-0.500-PCA-01 (1 x 102) active
axial-998--0.500-0.500-PCA-02 (1 x 102) active
planar-999--0.100-0.550-PCA-01 (1 x 204) active
planar-999--0.100-0.550-PCA-02 (1 x 204) active
axial-999--0.100-0.550-PCA-01 (1 x 102) active
axial-999--0.100-0.550-PCA-02 (1 x 102) active
Found the data of interest:
t = -100.00 … 700.00 ms
0 CTF compensation matrices available
0 bad epochs dropped
Not setting metadata
179 matching events found
No baseline correction applied
Created an SSP operator (subspace dimension = 8)
8 projection items activated

When I read in the evokeds, I see this message at the bottom:

Projections have already been applied. Setting proj attribute to True.
No baseline correction applied

  • Why do you pass projs? Are the projectors not part of your epochs?
    It appears the projectors are part of the epochs. I suppose it wasn’t clear to me how projectors would be handled reading in epochs. I can remove this from the covariance creation.

Thanks again,
Megan

You should use all of them to calculate the noise covariance matrix. The more data, the better.

I believe we didn’t save that information in older versions of MNE. But to be absolutely sure, I’d suggest simply applying baseline correction again after loading the data.

In that case, I’d simplify your call to compute_covariance() to something like:

compute_covariance(epochsList, rank='info', tmin=None, tmax=0)

(btw I’m not sure why your epochs are called epochsList, it’s certainly an epochs object and not a list of epochs :thinking:)

Richard

Thanks again for your response,

Sorry this may be a novice question - how would I do this? Do you mean I should redo epoch creation from the raw object?

I wasn’t very clear in explaining my objects. For brevity and to compare with the single condition evoked object from the whitening image, I showed only one of the epochs objects. But in my scripts, I have both epochs objects and an epochsList. The epochsList, which I used for the covariance, is a list and looks like this:

In [10]: epochsList
Out[10]:
[<EpochsFIF | 35 events (all good), -0.1 - 0.7 sec, baseline -0.1 – 0 sec, ~74.1 MB, data loaded,
'5': 35>,
<EpochsFIF | 38 events (all good), -0.1 - 0.7 sec, baseline -0.1 – 0 sec, ~80.0 MB, data loaded,
'2': 38>,
<EpochsFIF | 45 events (all good), -0.1 - 0.7 sec, baseline -0.1 – 0 sec, ~93.8 MB, data loaded,
'3': 45>,
<EpochsFIF | 74 events (all good), -0.1 - 0.7 sec, baseline -0.1 – 0 sec, ~150.9 MB, data loaded,
'6': 36
'2': 38>,
<EpochsFIF | 179 events (all good), -0.1 - 0.7 sec, baseline -0.1 – 0 sec, ~357.5 MB, data loaded,
'1': 179>,
<EpochsFIF | 80 events (all good), -0.1 - 0.7 sec, baseline -0.1 – 0 sec, ~162.7 MB, data loaded,
'3': 45
'5': 35>,
<EpochsFIF | 36 events (all good), -0.1 - 0.7 sec, baseline -0.1 – 0 sec, ~76.1 MB, data loaded,
It seems this list contains epochs with a baseline! So I guess I was wrong looking at the message generated when epochs are read and not the actual object. I still need to figure out why my evoked object shows baseline off, because that is what I’m actually using for the source analysis (stc) later.

(I will also clean up my epochsList, because it contains epochs for event 5, epochs for event 3, and a combined epochs object for events 3 and 5 (and similarly for events 2 and 6), so the true number of epochs will be 333 instead of 487. This is due to an error in the way the epochs files are found and read.)

Megan