I used the dSPM inverse operator with my MEG data and am visualizing the source estimate time course with stc.plot. When looking at the brain "movies" on each subject's FreeSurfer surface, I am wondering how the scale bars pick the thresholds (i.e., red vs. grey vs. blue, etc.). Can anyone explain how these scales are partitioned into different colors? Thanks in advance.
The colorbar limits are controlled via the `clim` parameter of `SourceEstimate.plot()`:
> Colorbar properties specification. If `'auto'`, set clim automatically based on data percentiles. If dict, should contain:
>
> - `kind` (`'value'` | `'percent'`): Flag to specify type of limits.
> - `lims` (list | np.ndarray | tuple of float, 3 elements): Lower, middle, and upper bounds for colormap.
> - `pos_lims` (list | np.ndarray | tuple of float, 3 elements): Lower, middle, and upper bound for colormap. Positive values will be mirrored directly across zero during colormap construction to obtain negative control points.
>
> **Note:** Only one of `lims` or `pos_lims` should be provided. Only sequential colormaps should be used with `lims`, and only divergent colormaps should be used with `pos_lims`.
Unfortunately, the documentation doesn't mention which values `'auto'` (the default) actually uses.
Best wishes,
Richard
When `clim='auto'`, the bounds are set at the [96, 97.5, 99.95] percentiles of the data. In other words, vertices in the top 0.05% get the maximum color, vertices in the bottom 96% get the minimum color (usually transparent), and so on.
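For illustration, here is a minimal sketch of how those percentile bounds can be computed and turned into an explicit `clim` dict. It uses plain NumPy with synthetic data standing in for a real `stc.data` array; the resulting dict is the kind of thing you could pass as `clim=` to `stc.plot()`:

```python
import numpy as np

# Synthetic stand-in for stc.data, shape (n_vertices, n_times).
# dSPM values are non-negative, so we take absolute values.
rng = np.random.default_rng(0)
data = np.abs(rng.normal(size=(5000, 100)))

# clim='auto' corresponds to these percentiles, computed over
# all vertices and all time points at once:
lims = np.percentile(data, [96, 97.5, 99.95])

# An equivalent explicit specification you could pass to stc.plot():
clim = dict(kind="value", lims=lims)
print(clim)
```

Computing the bounds yourself like this is also a convenient way to lock the color scale across conditions or subjects, instead of letting each plot pick its own limits.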
Thanks for your response. What part of the data are you using to get the boundaries? Are you using the whole epoch, or just a certain time window within the epoch?
I think it’s based on the entire time course of the SourceEstimate object, across all vertices. (i.e., it’s not based on some baseline period).
Thank you very much @drammock.
One other unrelated question: I am curious as to how the "smoothing_steps" argument in stc.plot works. I understand that you are smoothing the signal based on the specified integer (a higher integer meaning more smoothing), but is that integer related to the number of time points being smoothed over in the signal time series, or is it some sort of smoothing coefficient, per se? Thanks so much for your insight!
`smoothing_steps` affects spatial smoothing, not time points. It is useful when your SourceEstimate object has less-than-full-resolution spatial sampling. The options for the number of vertices in the source space are described here: The typical M/EEG workflow — MNE 1.4.0.dev148+g425e3b344 documentation. If your SourceEstimate was based on a SourceSpace with fewer vertices, using `smoothing_steps`
will "fill in" the unused vertices that lie in between the ones where you have actual data.