I can't seem to find a way to plot grand-averaged ERPs together with shaded confidence intervals. The closest thing I could find is here, though I don't want the image map.
Can anyone give me a hint? Thanks!
You can pass a dict of lists or a list of lists to plot_compare_evokeds to produce (parametric) confidence bands.
Could you give an example?
# Load example data
import os
import mne

sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                                    'sample_audvis_raw.fif')

raw = (mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
       .pick_types(meg=False, eeg=True, stim=True)
       .load_data()
       .filter(l_freq=None, h_freq=40))
raw.set_eeg_reference('average')

events = mne.find_events(raw)
event_id = {'auditory/left': 1,
            'auditory/right': 2,
            'visual/left': 3,
            'visual/right': 4}
epochs = mne.Epochs(raw, events, event_id=event_id, preload=True)

orig_evoked_audio = epochs['auditory'].average()
orig_evoked_visual = epochs['visual'].average()

# Simulate 5 participants
all_evoked_audio = []
all_evoked_visual = []

cov = mne.make_ad_hoc_cov(raw.info)
iir_filter = [0.2, -0.2, 0.04]

for participant in range(5):
    evoked_audio = orig_evoked_audio.copy()
    evoked_visual = orig_evoked_visual.copy()

    mne.simulation.add_noise(evoked_audio, cov=cov,
                             iir_filter=iir_filter)
    mne.simulation.add_noise(evoked_visual, cov=cov,
                             iir_filter=iir_filter)

    all_evoked_audio.append(evoked_audio)
    all_evoked_visual.append(evoked_visual)

# Plot evokeds with 95% CIs

# list of lists
mne.viz.plot_compare_evokeds([all_evoked_audio,
                              all_evoked_visual],
                             ci=0.95)

# dict of lists
mne.viz.plot_compare_evokeds(dict(audio=all_evoked_audio,
                                  visual=all_evoked_visual),
                             ci=0.95)
Thanks, @richard!
I did a similar thing with my data, but I still don't get shaded plotting of the CIs… do I also need to tweak the color and/or cmap arguments of the function?
No, it should work out of the box. You can try to request a different CI, e.g. 99%, by passing ci=0.99. This should make the confidence bands wider. If you still don't see anything, then something is wrong with your data; maybe all evokeds are the same?
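For instance, continuing from the simulated example above (reusing the all_evoked_audio and all_evoked_visual lists), something like this should produce visibly wider bands:

# Request 99% instead of 95% confidence bands
mne.viz.plot_compare_evokeds(dict(audio=all_evoked_audio,
                                  visual=all_evoked_visual),
                             ci=0.99)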
Oops, I mistakenly plotted the grand averages and not the evokeds. Thank you!!
Quick question: how can I plot the grand average and its CIs of a single condition?
Right now, I have all by-subject evokeds of one condition collected in a single list, and I would like to plot the grand average of all evokeds for that condition. The function mne.viz.plot_compare_evokeds()
seems to treat each subject as a different condition, but what I want is for the evokeds of all subjects to be averaged together.
Thanks!
Pass a list of your lists of evokeds, or a dict of your lists of evokeds, as the first parameter of plot_compare_evokeds().
Quoting the documentation:
… If a [dict/list] of lists, the unweighted mean is plotted as a time series and the parametric confidence interval is plotted as a shaded area. All instances must have the same shape - channel numbers, time points etc. If dict, keys must be of type str.
So you could do something like:
mne.viz.plot_compare_evokeds({'Mean': list_of_evokeds}, ...)
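Or, a slightly fuller sketch (assuming your per-subject evokeds for the condition are already collected in a list named list_of_evokeds; the picks and combine values are just illustrative, not required):

# Average the list across subjects (unweighted mean) and draw a shaded 95% CI band.
# combine='mean' averages across the picked channels; omit it to get the default GFP.
mne.viz.plot_compare_evokeds({'Mean': list_of_evokeds},
                             picks='eeg', combine='mean', ci=0.95)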
Thanks!
Two follow-up questions, if I may:
Is there a way (within or outside mne.viz.plot_compare_evokeds()
) to force the tick labels of either axis? E.g., in my case the tick labels of the y-axis are printed in multiples of 2 (0, 2, 4, 6, etc.), but I would like to have all integers (0, 1, 2, 3, 4, etc.).
I am having trouble understanding how to plot multiple channels as subplots. The documentation says:
Axes object to plot into. If plotting multiple channel types (or multiple channels when combine=None), axes should be a list of appropriate length containing Axes objects. If 'topo', a new Figure is created with one axis for each channel, in a topographical layout. If None, a new Figure is created for each channel type. Defaults to None.
So, I tried to do the following:
picks = ['E24', 'E36', 'E44', 'E69', 'E87']
axes = [plt.axes() for pick in picks]
mne.viz.plot_compare_evokeds({'mean': allEvokeds}, ylim=dict(eeg=[-5, 5]), picks=picks,
                             title=None, axes=axes, ci=0.95, show_sensors=False,
                             truncate_yaxis=False, truncate_xaxis=False, legend=False,
                             colors={'mean': '#808080'}, combine=None)
But I must be doing something wrong, as no plot appears.
Thank you in advance!
For your second question, this part of the docstring is relevant:
If combine is None, channels are combined by computing GFP, unless picks is a single channel (not channel type) or axes='topo', in which cases no combining is performed.
In other words, you can't pass in 5 picks and a list of 5 axes and expect to get each channel on a separate subplot (it's a reasonable thing to want to do, but it's just not how the function is written).
Something like this ought to work:
picks = ['E24', 'E36', 'E44', 'E69', 'E87']
fig, axes = plt.subplots(5, 1)
for pick, ax in zip(picks, axes):
    mne.viz.plot_compare_evokeds(evoked, combine=None, picks=pick, axes=ax)
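Applied to your earlier snippet, the loop might look something like this (allEvokeds and the channel names are your variables from above; figsize and show=False are just guesses to keep the subplots tidy):

import matplotlib.pyplot as plt
import mne

picks = ['E24', 'E36', 'E44', 'E69', 'E87']
fig, axes = plt.subplots(len(picks), 1, sharex=True, figsize=(6, 12))
for pick, ax in zip(picks, axes):
    # One channel per subplot; with a single pick, no combining is performed
    mne.viz.plot_compare_evokeds({'mean': allEvokeds}, picks=pick, axes=ax,
                                 ci=0.95, show_sensors=False, legend=False,
                                 colors={'mean': '#808080'}, combine=None,
                                 show=False)
plt.show()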