How to apply the cov.Covariance object to epoched data

  • MNE-Python version: 0.24.dev0
  • operating system: Linux

Hi guys,

I computed the noise covariance of the epoched data:

noise_covs = compute_covariance(epoch, tmin=None, tmax=0, method='ledoit_wolf', return_estimators=True, verbose=True, n_jobs=-1, projs=proj, rank='info')

I’m wondering whether there is a way to apply this noise_covs to the epoched data, so that the epoched data will be normalized. I searched the methods of mne.Epochs but didn’t find one; did I miss it? Any suggestions?

Thanks a lot! :sparkling_heart: :sparkling_heart: :sparkling_heart:

There is some code to whiten evoked data in MNE, but it’s used for visualization or inside functions that need it.

see

https://mne.tools/dev/auto_examples/visualization/evoked_whitening.html
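The gist, as a minimal sketch (not the example verbatim; it assumes an `epochs` object and a matching `noise_cov` are already in scope):

import mne

# mne.whiten_evoked returns a spatially whitened copy of the Evoked
evoked = epochs.average()
evoked_white = mne.whiten_evoked(evoked, noise_cov)
evoked_white.plot(time_unit='s')  # amplitudes are now in noise-SD units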

why do you need this?

Alex

There is also mne.cov.compute_whitener, which might help depending on what exactly you want to do.
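For example, applying the whitener to epoched data could look like this minimal sketch (assuming `epochs` and a `noise_cov` from compute_covariance; the variable names are placeholders):

import mne

# the whitening matrix W is computed against the measurement info
W, ch_names = mne.cov.compute_whitener(noise_cov, epochs.info, picks='meg')
data = epochs.get_data(picks=ch_names)  # shape (n_epochs, n_channels, n_times)
data_white = W @ data                   # matmul broadcasts W across epochs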

Hi Alex,
As in my other question posted here, Multivariate noise normalization in MNE - #3 by YuZhou,
I want to normalize the MEG patterns/data by means of a (co)variance matrix to improve decoding performance, because according to this paper https://www.sciencedirect.com/science/article/pii/S1053811918301411
appropriate multivariate noise normalization substantially improves decoding accuracies and the reliability of dissimilarity measures; that’s the reason I want to find a way to do this in MNE.
I followed the suggestion you posted in that other question (Multivariate noise normalization in MNE - #3 by YuZhou),
using compute_covariance(), because you said this function serves as spatial whitening in MNE, but sadly I don’t know how to apply the obtained noise covariance object to the epoched data :slightly_frowning_face:
In fact, the code used in the paper looks like this:

import os
import numpy as np
import scipy.linalg
from sklearn.discriminant_analysis import _cov  # private sklearn shrinkage covariance helper

# NB: `root`, `sessions`, the `CV` class, and the classifiers (svm, gnb, weird, lda)
# are defined elsewhere in the paper's code.
preload_result = False  # set to True to load precomputed results instead of recomputing
if preload_result:
    result = np.load(os.path.join(root, 'result_decoding.npy'))
else:
    result = np.full((n_sessions, n_perm, n_conditions, n_conditions, n_time), np.nan,
                     dtype={'names': ['svm', 'gnb', 'weird', 'lda'], 'formats': 4*['f8']})

    for s, session in enumerate(sessions):

        print('Session %g / %g' % (s + 1, n_sessions))

        X = session['data']
        y = session['labels']

        cv = CV(y, n_iter=n_perm, n_pseudo=n_pseudo)

        for f, (train_indices, test_indices) in enumerate(cv.split(X)):
            print('\tPermutation %g / %g' % (f + 1, n_perm))

            # 1. Compute pseudo-trials for training and test
            Xpseudo_train = np.full((len(train_indices), n_sensors, n_time), np.nan)
            Xpseudo_test = np.full((len(test_indices), n_sensors, n_time), np.nan)
            for i, ind in enumerate(train_indices):
                Xpseudo_train[i, :, :] = np.mean(X[ind, :, :], axis=0)
            for i, ind in enumerate(test_indices):
                Xpseudo_test[i, :, :] = np.mean(X[ind, :, :], axis=0)


            # 2. Whitening using the Epoch method
            sigma_conditions = cv.labels_pseudo_train[0, :, n_pseudo-1:].flatten()
            sigma_ = np.empty((n_conditions, n_sensors, n_sensors))
            for c in range(n_conditions):
                # compute sigma for each time point, then average across time
                sigma_[c] = np.mean([_cov(Xpseudo_train[sigma_conditions==c, :, t], shrinkage='auto')
                                     for t in range(n_time)], axis=0)
            sigma = sigma_.mean(axis=0)  # average across conditions
            sigma_inv = scipy.linalg.fractional_matrix_power(sigma, -0.5)  # inverse matrix square root of sigma
            Xpseudo_train = (Xpseudo_train.swapaxes(1, 2) @ sigma_inv).swapaxes(1, 2)
            Xpseudo_test = (Xpseudo_test.swapaxes(1, 2) @ sigma_inv).swapaxes(1, 2)

            for t in range(n_time):
                for c1 in range(n_conditions-1):
                    for c2 in range(min(c1 + 1, n_conditions-1), n_conditions):
                            # 3. Fit the classifier using training data
                            data_train = Xpseudo_train[cv.ind_pseudo_train[c1, c2], :, t]
                            svm.fit(data_train, cv.labels_pseudo_train[c1, c2])                            
                            gnb.fit(data_train, cv.labels_pseudo_train[c1, c2])
                            weird.fit(data_train, cv.labels_pseudo_train[c1, c2])
                            # lda.fit(data_train, cv.labels_pseudo_train[c1, c2])

                            # 4. Compute and store classification accuracies
                            data_test = Xpseudo_test[cv.ind_pseudo_test[c1, c2], :, t]
                            result['svm'][s, f, c1, c2, t] = np.mean(svm.predict(data_test) == cv.labels_pseudo_test[c1, c2]) - 0.5                            
                            result['gnb'][s, f, c1, c2, t] = np.mean(gnb.predict(data_test) == cv.labels_pseudo_test[c1, c2]) - 0.5
                            result['weird'][s, f, c1, c2, t] = np.mean(weird.predict(data_test) == cv.labels_pseudo_test[c1, c2]) - 0.5

It works fine for within-stage decoding, for example training the classifier on data from the perception stage and testing on data from the perception stage as well. But when it comes to cross-stage decoding, such as training the classifier on data from the perception stage but testing on the recall stage, this code doesn’t fit the scenario well, so I decided to find a way to do the noise normalization in MNE.

Hi!
Thanks a lot for your suggestion!
I tried mne.cov.compute_whitener; here is my code:

noise_covs = compute_covariance(epoch, tmin=None, tmax=None, method='ledoit_wolf', verbose=True, n_jobs=12, projs=proj, rank='info')

w, ch_names = mne.cov.compute_whitener(noise_covs, picks='meg', rank='info')

The first line worked fine and I got the mne.cov.Covariance object from compute_covariance(), but when it came to mne.cov.compute_whitener(), an error was raised:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-19-b93d07661d28> in <module>
----> 1 mne.cov.compute_whitener(noise_covs)

<decorator-gen-262> in compute_whitener(noise_cov, info, picks, rank, scalings, return_rank, pca, return_colorer, on_rank_mismatch, verbose)

~/.conda/envs/mne/lib/python3.9/site-packages/mne/cov.py in compute_whitener(noise_cov, info, picks, rank, scalings, return_rank, pca, return_colorer, on_rank_mismatch, verbose)
   1848 
   1849     n_chan = len(ch_names)
-> 1850     assert n_chan == len(noise_cov['eig'])
   1851 
   1852     #   Omit the zeroes due to projection

TypeError: object of type 'NoneType' has no len()

Do you know why this happened?
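One likely cause, judging from the traceback: compute_whitener was called there without the measurement info, so the covariance was never prepared against it and its 'eig' entry stayed None. A sketch that typically avoids the assertion, assuming `epoch` is the Epochs object from above:

import mne

# passing epoch.info lets MNE prepare the covariance (filling in 'eig')
w, ch_names = mne.cov.compute_whitener(noise_covs, epoch.info, picks='meg', rank='info')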

I kind of gave up on this function and used _cov from sklearn instead; here is my code:

import numpy as np
import scipy.linalg
from sklearn.discriminant_analysis import _cov  # private sklearn shrinkage covariance helper

# average the shrinkage covariance across time, take its inverse matrix square root,
# and whiten each epoch's (time x channels) slice
noise_cov = np.mean([_cov(epo_data[:, :, t], shrinkage='auto') for t in range(n_time)], axis=0)
noise_cov_inv = scipy.linalg.fractional_matrix_power(noise_cov, -0.5)
epo_data = (epo_data.swapaxes(1, 2) @ noise_cov_inv).swapaxes(1, 2)

It works fine now.

Thanks a lot to you guys! :heart: