What type of contrast (difference) does MVPA source estimate show?

Hi,

I know that an MVPA source estimate (stc object) shows the difference (contrast) between two conditions across time, but it is not clear to me which condition this contrast (difference) belongs to.

For instance, if there are two lists:
a = [1, 2, 3, 4, 5]
b = [1, 2, 3, 4, 5, 6, 7]

The difference between these two lists would be [6, 7] (an asymmetric difference), and it is clear that this difference belongs to list ‘b’ and not ‘a’. With an MVPA source estimate, when the difference between two conditions is projected into source space, there is (to my knowledge) no information about which condition the difference belongs to. Is it just one condition? Or is it more like a symmetric set difference, where the elements could come from either condition (example below)?

a = {0, 2, 4, 6, 8}
b = {1, 2, 3, 4, 5}
Symmetric difference: {0, 1, 3, 5, 6, 8} (could be from either set)
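In Python terms, here are the two kinds of difference I mean:

```python
a = [1, 2, 3, 4, 5]
b = [1, 2, 3, 4, 5, 6, 7]

# Asymmetric difference: elements of b that are not in a
asym = set(b) - set(a)  # {6, 7} -- clearly "belongs to" b

c = {0, 2, 4, 6, 8}
d = {1, 2, 3, 4, 5}

# Symmetric difference: elements in exactly one of the two sets
sym = c ^ d  # {0, 1, 3, 5, 6, 8} -- could come from either set
```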

Thank you!

Best,
Aqil

Hello @aqil.izadysadr,
could you please clarify what exactly you’re referring to as “MVPA source estimate”?

Is the concept based on this section of our decoding tutorial? Or on this example? Can you provide the code you’re using?

Hi @richard,

Sure! I am referring to the stc plot in this section https://mne.tools/stable/auto_tutorials/machine-learning/50_decoding.html#projecting-sensor-space-patterns-to-source-space

specifically the stc image shown in that section of the tutorial.
Here is a snippet of my code that I use to plot my stc objects:

    import numpy as np
    import mne
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from mne.decoding import (SlidingEstimator, GeneralizingEstimator,
                              LinearModel, get_coef, cross_val_multiscore)

    event_id = {'shape': 2, 'semantic': 3}
    read_raw = mne.io.read_raw_fif(some_raw_file, preload=True)
    read_raw.filter(1, 40)

    read_ica = mne.preprocessing.read_ica(some_ica_file, verbose=None)
    read_ica.apply(read_raw)

    events = mne.find_events(read_raw)

    read_fwd = mne.read_forward_solution(some_fwd_file)
    anat = 'anat'

    tmin = 0
    tmax = 1.0
    epochs = mne.Epochs(read_raw, events, event_id, tmin=tmin, tmax=tmax,
                        proj=True, picks='grad', baseline=(tmin, 0.),
                        preload=True, decim=10)
    epochs.pick_types(meg=True)

    X = epochs.get_data()
    y = epochs.events[:, 2]  # target: task 2 vs task 3

    clf = make_pipeline(StandardScaler(),
                        LinearModel(LogisticRegression(solver='lbfgs')))

    time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc',
                                  verbose=True)
    time_decod.fit(X, y)

    coef = get_coef(time_decod, 'patterns_', inverse_transform=True)
    evoked_time_gen = mne.EvokedArray(coef, epochs.info, tmin=epochs.times[0])

    time_gen = GeneralizingEstimator(clf, n_jobs=1, scoring='roc_auc',
                                     verbose=True)
    scores = cross_val_multiscore(time_gen, X, y, cv=5, n_jobs=1)
    scores = np.mean(scores, axis=0)

    cov = mne.compute_covariance(epochs, tmax=None)
    inv = mne.minimum_norm.make_inverse_operator(evoked_time_gen.info,
                                                 read_fwd, cov, loose=0.8)
    stc = mne.minimum_norm.apply_inverse(evoked_time_gen, inv, 1. / 9., 'dSPM')

    brain = stc.plot(subject=anat, hemi='split', views=('lat', 'med'),
                     initial_time=0, subjects_dir=sub_dir)
    brain.add_text(0.1, 0.9,
                   "Projecting src est. grand average to src space "
                   "for events {}".format(event_id),
                   'title', font_size=7)

As you can see, the plotted stc object just shows the contrast (difference) between two conditions, but it is not clear to me whether that difference is asymmetric, symmetric, or something else.

Thank you!

Best,
Aqil

Hi @aqil.izadysadr, in the example you are referring to, the *pattern* of a classifier is projected to source space. In simplified terms, a pattern is the reverse of a classifier’s filter: it tells you how to get back from the signal extracted by the classifier to the EEG/MEG data (the classifier extracts its signal by applying its coefficients, the “filter”, to the data). See this example for a short description of filters vs. patterns, and this paper for a more detailed treatment.

BTW, the examples you give in your question are set operations, while classifiers operate on data distributions, trying to separate them in a multidimensional space. In the MNE example you refer to, a logistic regression is used (among other classifiers, but only the logistic regression pattern is projected to source space) to differentiate between left and right auditory stimuli. To better understand which category the classifier treats as “positive”, take a look at how the classes are coded in the y variable.
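To make the filter/pattern relationship concrete, here is a toy NumPy sketch (simulated data, not MNE’s exact implementation) of the transformation described in the Haufe et al. paper: for a linear model, the activation pattern is the data covariance applied to the filter, scaled by the variance of the extracted signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sensor data: 100 samples x 5 channels
X = rng.standard_normal((100, 5))
w = rng.standard_normal(5)  # a classifier "filter" (weight vector)

s = X @ w  # the signal the classifier extracts from the data

# Haufe et al. (2014): the corresponding activation pattern is the
# data covariance applied to the filter, normalized by the signal variance.
# It answers: "how does each channel covary with the extracted signal?"
pattern = np.cov(X, rowvar=False) @ w / np.var(s, ddof=1)
```

Equivalently, each entry of `pattern` is the regression coefficient of that channel on the extracted signal, which is why patterns (not filters) are the interpretable quantity to project to source space.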
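A small scikit-learn sketch (with toy data, labels chosen to match the event codes in your snippet) showing how to check which class the classifier treats as “positive”:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy 1-feature data; labels mimic event codes 'shape': 2, 'semantic': 3
X = np.array([[0.0], [0.1], [0.9], [1.0]])
y = np.array([2, 2, 3, 3])

clf = LogisticRegression(solver='lbfgs').fit(X, y)

# scikit-learn sorts the labels; the last entry of classes_ is the
# "positive" class -- the one a positive coefficient points toward.
print(clf.classes_)  # [2 3] -> class 3 ('semantic') is "positive"
print(clf.coef_)     # positive coefficient: larger X predicts class 3
```

So the sign of the coefficients (and hence of the projected pattern) is relative to this class ordering, not an asymmetric set difference.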


Hi @mmagnuski,

Thank you for your response! It did help clarify things for me. I had thought the classifier was simply indicating the contrast (difference) between the two conditions rather than separating them in a multidimensional space, but it was quite the contrary. That is why I was looking for the side of the contrast that had the stronger effect on the difference in the projected source space.

Thank you!

Best,

Aqil