Does the blue in MVPA source localization show that the activation is less than the computed mean? How about the red (activation above the computed mean)?
In the following link: Decoding (MVPA) — MNE 0.23.0 documentation
You will see that the blues do not correspond to negative values in the temporal generalization matrix, but they do in the source-modeled MVPA. Why is that?
Also, what exactly does this temporal generalization matrix show in the link above?
Don’t get confused here – these figures show very different things, so similar colors don’t have similar meanings. Generalization across time (GAT) trains the classifier at one time point and tests it at all other time points. The resulting classification performance is shown in the GAT heatmap – blue meaning worse, red meaning better performance. In the example above, all of this is done entirely in sensor space.
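To make the “train at one time point, test at all others” idea concrete, here is a minimal, self-contained sketch using a toy nearest-class-mean classifier on synthetic data. This is only an illustration of the GAT logic, not MNE’s actual implementation (in MNE you would use mne.decoding.GeneralizingEstimator); all names and numbers here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "epochs": (n_trials, n_channels, n_times). The two classes
# differ only during a middle time window, mimicking an evoked effect.
n_trials, n_channels, n_times = 100, 8, 30
y = rng.integers(0, 2, n_trials)
X = rng.normal(size=(n_trials, n_channels, n_times))
X[y == 1, 0, 10:20] += 2.0  # class difference only at samples 10..19

def gat_matrix(X, y):
    """Train a nearest-class-mean classifier at each time point and
    test it at every other time point (generalization across time)."""
    n_times = X.shape[2]
    scores = np.empty((n_times, n_times))
    train_idx = np.arange(0, len(y), 2)  # crude split: even trials train,
    test_idx = np.arange(1, len(y), 2)   # odd trials test
    for t_train in range(n_times):
        m0 = X[train_idx][y[train_idx] == 0, :, t_train].mean(axis=0)
        m1 = X[train_idx][y[train_idx] == 1, :, t_train].mean(axis=0)
        for t_test in range(n_times):
            Z = X[test_idx][:, :, t_test]
            d0 = np.linalg.norm(Z - m0, axis=1)
            d1 = np.linalg.norm(Z - m1, axis=1)
            pred = (d1 < d0).astype(int)
            scores[t_train, t_test] = (pred == y[test_idx]).mean()
    return scores

scores = gat_matrix(X, y)
# Accuracy on the diagonal inside the effect window is above chance;
# entries at time points without any class difference hover near 0.5.
```

Each entry of this matrix is an accuracy score, which is exactly what the red (good) and blue (bad) colors in the GAT heatmap encode – accuracies never go negative, they just dip below chance.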
If you have more questions about GAT, I’d suggest you open a separate topic.
In the plot, blue areas indicate current flowing inwards and red areas indicate current flowing outwards.
Do inward or outward flowing currents show a different type of activation in the brain? (I am asking this to understand why we need to see them in different colors, i.e. blue and red, if they just refer to some general activation in the brain).
What does that GFP, or sometimes RMS (the dotted line), which usually starts from 0, show in the MVPA source space legend? (see pictures below)
Yes and no. If by activation you mean whether a particular region of the brain “is doing something”, then the answer is: it doesn’t matter whether it’s blue or red; all that matters is the magnitude of the activation.
If, on the other hand, you mean whether different processes could be happening in a given region when it changes color, then the answer is yes, as the overall current flow is reversed. But keep in mind that thousands to millions of neurons might be involved here, so I don’t think we can make any clear statements as to what exactly has changed. (But please, anyone, correct me if I’m mistaken!)
It really depends on your research question and analysis approach. Keeping the signs might give you more statistical power; however, using a fixed dipole orientation (i.e. strictly orthogonal to the cortex surface) comes with drawbacks too, as described in the tutorial I shared above.
If you just want to know which brain area was active and when, I would suggest starting with the magnitudes of activation only and ignoring dipole orientations, just to keep complexity low. I think it will also compute faster that way, but I might be mistaken.
RMS is the root mean square of the signal in sensor space and is essentially a measure of the “agreement” across channels: if, for example, after a stimulus presentation the frontal channels pick up a signal that’s different from what the occipital channels pick up, the RMS value will be larger than when all channels pick up similar signals. Peaks in the RMS trace are often used to find time points of interest while analyzing the data. MNE’s plot_joint() function, for example, finds those peaks and plots the corresponding topomaps.
GFP is global field power and is calculated as the population standard deviation across all channels. It’s basically the same as RMS, but with an average reference applied to the signal first.
Because average-referencing doesn’t make sense in MEG, GFP is only plotted for EEG data, while RMS is plotted for MEG.
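The relationship between the two can be shown in a few lines of NumPy. This is a toy sketch on random data (the array shapes and the offset are made up for illustration): GFP is exactly the RMS of the average-referenced signal, so a common offset shared by all channels inflates RMS but is invisible to GFP.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sensor data, (n_channels, n_times), with a common offset shared by
# all channels (e.g. a reference-related shift).
data = rng.normal(size=(32, 200)) + 3.0

# RMS across channels at each time point (what MNE plots for MEG data).
rms = np.sqrt((data ** 2).mean(axis=0))

# GFP: population standard deviation across channels, which equals the
# RMS of the signal after applying an average reference.
avg_ref = data - data.mean(axis=0, keepdims=True)
gfp = data.std(axis=0)

# GFP == RMS of the average-referenced data; the offset only shows in RMS.
assert np.allclose(gfp, np.sqrt((avg_ref ** 2).mean(axis=0)))
assert rms.mean() > gfp.mean()
```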
We currently don’t do this consistently everywhere: there might still be places where we claim to plot “GFP” but actually show RMS; if you come across anything like this, please let us know.
Thank you for your great response. It was very much helpful as always!
Speaking of reporting issues with MNE (though I am not quite sure if I should report them here), I have just noticed that when you change the time_unit parameter for the source estimate object to ‘ms’ instead of the default ‘s’, the resulting source model animation does not play. The timer keeps counting, but the time bar does not move inside the Activation (AU) box.
Another thing that came to mind is about generating videos from source estimate objects. When the source estimate object is plotted, the “generate a video” option does not save any video.
This should work, and if it doesn’t, it is a bug. Please open a new bug report about this, including a complete code sample that loads the data and plots it in a way that shows the error. Use one of the built-in datasets if possible. Please also include the output of `mne sys_info` (from the terminal command line) or `import mne; mne.sys_info()` (from within a Python console).
This also should work, and if it doesn’t it is a bug. Please open a separate bug report about this.