Hello all,
I am trying to apply the 30_epochs_metadata.py tutorial to my eye-tracking data. It is not the intended application, but I thought it could be useful, and I stumbled across a problem. Maybe it is of interest.
MNE version: 1.5.1
Operating system: macOS 13
I can get an Epochs instance and assign the metadata to it; however, the plotting doesn't work:
evokeds = dict()
query = "x == {}"
for x in epochs.metadata["x"].unique():
    evokeds[str(x)] = epochs[query.format(x)].average()
# check channel names with epochs.ch_names; picks takes a single channel here
mne.viz.plot_compare_evokeds(evokeds, cmap=("x in [°]", "viridis"), picks="xpos_left")
Python returns the following error:
AssertionError                            Traceback (most recent call last)
Cell In[32], line 6
      3 for x in epochs.metadata['x'].unique():
      4     evokeds[str(x)] = epochs[query.format(x)].average()
----> 6 mne.viz.plot_compare_evokeds(evokeds, cmap=("x in [°]", "viridis"), picks='xpos_left')
...
File ~/opt/anaconda3/envs/orientation_collapse_analysis/lib/python3.11/site-packages/mne/viz/evoked.py:3002
   3001 # From now on there is only 1 channel type
-> 3002 assert len(ch_types) == 1
   3003 ch_type = ch_types[0]
I assume the cause is that neither 'eyegaze' nor 'pupil' is an accepted channel type for plot_compare_evokeds().
The potential solutions I thought of were to
a) change plot_compare_evokeds() to accept the eye-tracking channel types, or
b) pretend the channels are EEG channels (using .set_channel_types() to map them to 'eeg'; sketched below), or
c) find a dedicated function that does the same as plot_compare_evokeds() for eye-tracking data (if there is one).
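For reference, a minimal sketch of what workaround b) could look like (the channel names are assumptions based on my EyeLink recording); as the replies below point out, this is discouraged:

# NOT recommended (see replies below): relabel eye-tracking channels as EEG
# so that plot_compare_evokeds accepts them
epochs.set_channel_types(
    {"xpos_left": "eeg", "ypos_left": "eeg", "pupil_left": "eeg"}
)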
plot_compare_evokeds() only keeps picks whose channel type is in its list of supported data channel types, and 'eyegaze' and 'pupil' are not in that list. Thus, len(ch_types) is equal to 0 on your dataset.
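You can confirm which channel types are present with get_channel_types(); for an EyeLink recording it typically looks like this:

# inspect the unique channel types present in the epochs
print(set(epochs.get_channel_types()))
# e.g. {'eyegaze', 'pupil', 'stim'}: none of these count as data channels here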
The real question is, why are you trying to compare evokeds on eye-tracking channels? What is the use-case behind it and does it make sense?
If so, it’s trivial to add those 2 channel types to the list of allowed channels.
@scott-huberty What do you think? @larsoner @drammock Do you think we should consider eyegaze and pupil channels as data channels entirely?
EDIT: MWE
from mne import make_fixed_length_epochs
from mne.datasets import testing
from mne.io import read_raw_eyelink
from mne.viz import plot_compare_evokeds
# download a sample eyelink file
fname = testing.data_path() / "eyetrack" / "test_eyelink.asc"
raw = read_raw_eyelink(fname)
epochs = make_fixed_length_epochs(raw, duration=1.0, preload=True)
evoked1 = epochs[:10].average()
evoked2 = epochs[10:20].average()
plot_compare_evokeds([evoked1, evoked2])
I would advise against this! Distinctions between channel types are important for MNE internals, and this kind of hacking can lead to strange and hard-to-debug results.
Off the top of my head, I think we could allow these channel types. I can see a use case for comparing evoked pupil size across conditions. For eyegaze I'm not quite as sure, and people need to remember not to baseline-correct eyegaze channels when epoching them (I don't think baseline correction really makes sense there, and it makes the coordinate values non-intuitive).
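For example, a minimal sketch of epoching eye-tracking data without baseline correction (the events array and the time window are placeholders):

import mne

# baseline=None disables baseline correction entirely, keeping eyegaze
# channels in their original (absolute) screen coordinates
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=1.0, baseline=None, preload=True)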
From what I remember when we first integrated eye-tracking channels, the MNE glossary states that data channels should be brain data (e.g. EEG, MEG, fNIRS), which is why we explicitly didn't include eye-tracking channels as data channels. And IIRC, trying to include them as data channels caused a whole lot of other tests to fail. But I'm happy to hear what others think!
One use case is comparing average pupil dilation time courses for two experimental conditions; pupil dilation is often interpreted as a proxy measure for arousal/effort/cognitive load. As @scott-huberty said, it's a little less clear for eyegaze, but I can imagine an experiment where how far/fast the eye moves up/down differs between conditions; there one could compare evokeds of the Y component of gaze.
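If these channel types were allowed, such a comparison might look like this sketch (the condition names and the 'ypos_left' channel name are assumptions):

import mne

# compare the average vertical gaze trace between two hypothetical conditions
evokeds = {cond: epochs[cond].average() for cond in ("up", "down")}
mne.viz.plot_compare_evokeds(evokeds, picks="ypos_left")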
Exactly, pupil dilation can serve as a marker of arousal etc.
I thought the gaze data could be plotted with that function to check whether gaze is focused on the correct position or orientation across conditions and time. It could be a nice sanity check in that regard, too.
I cannot say, however, if that is a functionality many would need or appreciate.
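As a sketch of such a sanity check: if your MNE version is recent enough, mne.viz.eyetracking.plot_gaze draws a heatmap of gaze positions for a set of epochs (the condition name and the screen size in pixels are assumptions about the recording setup):

from mne.viz.eyetracking import plot_gaze

# heatmap of gaze positions for one condition; width and height are the
# screen resolution in pixels and depend on the recording setup
plot_gaze(epochs["condition_A"], width=1920, height=1080)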