mne.viz.plot_compare_evokeds for eyetracking

Hello all,
I am trying to apply the 30_epochs_metadata.py tutorial to my eye-tracking data. That is not its intended application, but I thought it could be useful, and I stumbled across a problem. Maybe it is of interest.

  • MNE version: 1.5.1
  • operating system: macOS 13

I can create an Epochs instance and also assign the metadata to it;
however, the plotting doesn’t work:

evokeds = dict()
query = "x == {}"
for x in epochs.metadata["x"].unique():
    evokeds[str(x)] = epochs[query.format(x)].average()

# check channel names with epochs.ch_names; picks only takes one channel here
mne.viz.plot_compare_evokeds(evokeds, cmap=("x in [°]", "viridis"), picks="xpos_left")

Python returns the following error:

AssertionError                            Traceback (most recent call last)
Cell In[32], line 6
      3 for x in epochs.metadata['x'].unique():
      4     evokeds[str(x)] = epochs[query.format(x)].average()
----> 6 mne.viz.plot_compare_evokeds(evokeds, cmap=("x in [°]", "viridis"), picks="xpos_left")
...
File ~/opt/anaconda3/envs/orientation_collapse_analysis/lib/python3.11/site-packages/mne/viz/evoked.py:3002
   3001 # From now on there is only 1 channel type
-> 3002 assert len(ch_types) == 1
   3003 ch_type = ch_types[0]

If I ask for

evokeds[str(x)].get_channel_types()

I get the following output:

  ['eyegaze', 'eyegaze', 'pupil', 'eyegaze', 'eyegaze', 'pupil']

I assume the cause is that neither ‘eyegaze’ nor ‘pupil’ is an accepted channel type for plot_compare_evokeds().
The potential solutions I thought of were to
a) change plot_compare_evokeds() to accept the eye-tracking channel types, or
b) pretend they are EEG channels :slight_smile: (by relabeling them to ‘eeg’ with .set_channel_types(); see the sketch below), or
c) find a dedicated function that does the same as plot_compare_evokeds() for eye-tracking data (if one exists).
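For illustration, a minimal sketch of workaround (b), assuming the evokeds dict from the snippet above (note that a reply further down advises against this kind of relabeling):

# Hypothetical workaround (b): relabel all eye-tracking channels as EEG so
# that plot_compare_evokeds() treats them as data channels. Note that
# set_channel_types() will warn about the unit change.
evokeds_eeg = {
    key: ev.copy().set_channel_types({ch: "eeg" for ch in ev.ch_names})
    for key, ev in evokeds.items()
}
mne.viz.plot_compare_evokeds(evokeds_eeg, picks="xpos_left")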

Thank you.
Best,
Dilara

Hello,

So the issue is that eye-tracking channels are not considered data channels.
They are filtered out here: https://github.com/mne-tools/mne-python/blob/fd53fc44915ee3bea2f18c468eece4ed84476e1d/mne/viz/evoked.py#L2898-L2905

Thus, len(ch_types) is equal to 0 on your dataset.
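You can verify this directly with get_channel_types(only_data_chs=True), e.g. on one of the evokeds from your snippet:

# None of the eye-tracking types ('eyegaze', 'pupil') count as data channel
# types, so restricting to data channels returns an empty list.
evoked = next(iter(evokeds.values()))
print(evoked.get_channel_types(only_data_chs=True))  # -> []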
The real question is, why are you trying to compare evokeds on eye-tracking channels? What is the use-case behind it and does it make sense?
If so, it’s trivial to add those 2 channel types to the list of allowed channels.

@scott-huberty What do you think?
@larsoner @drammock Do you think we should consider eyegaze and pupil channels as data channels entirely?

EDIT: MWE

from mne import make_fixed_length_epochs
from mne.datasets import testing
from mne.io import read_raw_eyelink
from mne.viz import plot_compare_evokeds


# download a sample eyelink file
fname = testing.data_path() / "eyetrack" / "test_eyelink.asc"
raw = read_raw_eyelink(fname)
epochs = make_fixed_length_epochs(raw, duration=1.0, preload=True)
evoked1 = epochs[:10].average()
evoked2 = epochs[10:20].average()
plot_compare_evokeds([evoked1, evoked2])

Mathieu

Sorry @mscheltienne, you beat me to it. Just posting the full response I drafted below!

I think you are right about this: we would need to explicitly add the eyetrack channels to the code you linked.

I’ve opened a pull request for discussion: FIX: Allow eyetrack channels to be used with plot_compare_evoked by scott-huberty · Pull Request #12190 · mne-tools/mne-python · GitHub

I would advise against this! Distinctions between channel types are important for MNE internals, and this type of hacking can lead to strange and hard-to-debug results.


Off the top of my head, I think, yeah, we could allow these channel types. I can see a use case for comparing evoked pupil size across conditions. For eyegaze I’m not quite as sure, and people need to remember not to baseline-correct eyegaze channels when epoching them (I don’t think baseline correction really makes sense there, and it makes the coordinate values non-intuitive).
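For example, a minimal sketch of epoching without baseline correction (raw and events are assumed to exist from earlier steps):

import mne

# baseline=None skips baseline correction, so the gaze coordinates keep
# their original screen-position values instead of being shifted by the
# pre-stimulus mean.
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=1.0, baseline=None, preload=True)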

From what I remember when we first integrated eyetrack channels, the MNE glossary states that data channels should be brain data (e.g., EEG, MEG, fNIRS), which is why we explicitly didn’t include eyetrack channels as data channels. And IIRC, trying to include eyetrack channels as data channels caused a whole lot of other tests to fail. But I’m happy to hear what others think!

OK, that makes sense; then eye-tracking channels should be added to the internal functions on a case-by-case basis.

One use case is comparing average pupil dilation time courses for two experimental conditions; pupil dilation is often interpreted as a proxy measure for arousal/effort/cognitive load. As @scott-huberty said, it’s a little less clear for eyegaze, but I can imagine an experiment where how far/fast the eye moves up/down differs between conditions; there you could compare evokeds of the Y component of gaze.
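For instance, a hypothetical sketch building on the MWE above, once eyetrack channels are allowed ("ypos_left" mirrors the Eyelink channel naming used earlier in this thread):

from mne.viz import plot_compare_evokeds

# Compare the vertical (Y) gaze component across two illustrative
# "conditions" (evoked1/evoked2 from the MWE above).
plot_compare_evokeds(dict(cond_a=evoked1, cond_b=evoked2), picks="ypos_left")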


Thank you for your replies.

Exactly, pupil dilation can serve as a marker of arousal etc.
I thought the gaze data could be plotted with that function to check whether the gaze is focused on the correct position or orientation across conditions and time. It could be a nice sanity check in that regard.
I cannot say, however, whether that is functionality many would need or appreciate.
