Hi Alex,

As in my other question posted here (Multivariate noise normalization in MNE - #3 by YuZhou), I want to normalize the MEG patterns/data by means of a (co)variance matrix to improve decoding performance. According to this paper https://www.sciencedirect.com/science/article/pii/S1053811918301411, appropriate multivariate noise normalization substantially improved decoding accuracies and the reliability of dissimilarity measures, which is why I want to find a way to do this in MNE.
I followed the suggestion you posted in that question (Multivariate noise normalization in MNE - #3 by YuZhou) and used compute_covariance(), since you said this function serves for spatial whitening in MNE. Sadly, I don't know how to apply the resulting noise covariance object to the epoched data.
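Here is roughly what I have in mind (just a minimal sketch, so please correct me: `epochs` is a hypothetical mne.Epochs object, and estimating the covariance from the baseline window with the 'shrunk' estimator is only my assumption):

```python
import numpy as np
import mne

# Assumption: estimate the noise covariance from the pre-stimulus baseline
noise_cov = mne.compute_covariance(epochs, tmax=0.0, method='shrunk')

# compute_whitener returns a (n_channels x n_channels) matrix W such that
# W @ data approximately whitens the sensor noise
W, ch_names = mne.cov.compute_whitener(noise_cov, epochs.info)

data = epochs.get_data(picks=ch_names)          # (n_epochs, n_channels, n_times)
data_whitened = np.einsum('ij,ejt->eit', W, data)  # apply W at every time point
```

Is multiplying the epoched data by the whitener returned by mne.cov.compute_whitener() the right way to apply the noise covariance, or is there a more direct method in MNE?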
For reference, the code the paper used looks like this:
```python
# Code from the paper's scripts; `root`, `sessions`, the size variables
# (n_sessions, n_perm, n_conditions, n_time, n_sensors, n_pseudo), the CV
# helper, `_cov`, and the classifiers (svm, gnb, weird, lda) are defined
# earlier in those scripts.
import os
import numpy as np
import scipy.linalg

preload_result = False  # for recomputing the decoding analyses, set to False
if preload_result:
    result = np.load(os.path.join(root, 'result_decoding.npy'))
else:
    result = np.full((n_sessions, n_perm, n_conditions, n_conditions, n_time), np.nan,
                     dtype={'names': ['svm', 'gnb', 'weird', 'lda'], 'formats': 4 * ['f8']})
    for s, session in enumerate(sessions):
        print('Session %g / %g' % (s + 1, n_sessions))
        X = session['data']
        y = session['labels']
        cv = CV(y, n_iter=n_perm, n_pseudo=n_pseudo)
        for f, (train_indices, test_indices) in enumerate(cv.split(X)):
            print('\tPermutation %g / %g' % (f + 1, n_perm))
            # 1. Compute pseudo-trials for training and test
            Xpseudo_train = np.full((len(train_indices), n_sensors, n_time), np.nan)
            Xpseudo_test = np.full((len(test_indices), n_sensors, n_time), np.nan)
            for i, ind in enumerate(train_indices):
                Xpseudo_train[i, :, :] = np.mean(X[ind, :, :], axis=0)
            for i, ind in enumerate(test_indices):
                Xpseudo_test[i, :, :] = np.mean(X[ind, :, :], axis=0)
            # 2. Whitening using the Epoch method
            sigma_conditions = cv.labels_pseudo_train[0, :, n_pseudo - 1:].flatten()
            sigma_ = np.empty((n_conditions, n_sensors, n_sensors))
            for c in range(n_conditions):
                # compute sigma for each time point, then average across time
                sigma_[c] = np.mean([_cov(Xpseudo_train[sigma_conditions == c, :, t], shrinkage='auto')
                                     for t in range(n_time)], axis=0)
            sigma = sigma_.mean(axis=0)  # average across conditions
            sigma_inv = scipy.linalg.fractional_matrix_power(sigma, -0.5)
            Xpseudo_train = (Xpseudo_train.swapaxes(1, 2) @ sigma_inv).swapaxes(1, 2)
            Xpseudo_test = (Xpseudo_test.swapaxes(1, 2) @ sigma_inv).swapaxes(1, 2)
            for t in range(n_time):
                for c1 in range(n_conditions - 1):
                    for c2 in range(min(c1 + 1, n_conditions - 1), n_conditions):
                        # 3. Fit the classifier using training data
                        data_train = Xpseudo_train[cv.ind_pseudo_train[c1, c2], :, t]
                        svm.fit(data_train, cv.labels_pseudo_train[c1, c2])
                        gnb.fit(data_train, cv.labels_pseudo_train[c1, c2])
                        weird.fit(data_train, cv.labels_pseudo_train[c1, c2])
                        # lda.fit(data_train, cv.labels_pseudo_train[c1, c2])
                        # 4. Compute and store classification accuracies
                        data_test = Xpseudo_test[cv.ind_pseudo_test[c1, c2], :, t]
                        result['svm'][s, f, c1, c2, t] = np.mean(svm.predict(data_test) == cv.labels_pseudo_test[c1, c2]) - 0.5
                        result['gnb'][s, f, c1, c2, t] = np.mean(gnb.predict(data_test) == cv.labels_pseudo_test[c1, c2]) - 0.5
                        result['weird'][s, f, c1, c2, t] = np.mean(weird.predict(data_test) == cv.labels_pseudo_test[c1, c2]) - 0.5
```
This works fine for within-stage decoding, for example training the classifier on data from the perception stage and testing it on the perception stage as well. But for cross-stage decoding, such as training on the perception stage and testing on the recall stage, this code doesn't fit the scenario well, so I decided to look for a way to do the noise normalization in MNE instead.
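What I would like to end up with is something like the rough sketch below, where the whitening matrix is estimated from the training (perception) stage only and then applied to both stages before running the time-resolved classifiers. `epochs_perception` and `epochs_recall` are hypothetical mne.Epochs objects, and the covariance settings are only my assumptions:

```python
import numpy as np
import mne

# Estimate the noise covariance / whitener from the training stage only
noise_cov = mne.compute_covariance(epochs_perception, tmax=0.0, method='shrunk')
W, ch_names = mne.cov.compute_whitener(noise_cov, epochs_perception.info)

# Apply the same whitening matrix to both stages
X_train = np.einsum('ij,ejt->eit', W, epochs_perception.get_data(picks=ch_names))
X_test = np.einsum('ij,ejt->eit', W, epochs_recall.get_data(picks=ch_names))
y_train = epochs_perception.events[:, 2]
y_test = epochs_recall.events[:, 2]
```

Would this be a reasonable way to do the multivariate noise normalization in MNE for the cross-stage case, or is there a more built-in approach I am missing?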