visualizing sEEG electrode contacts (already in MNI space) on 'fsaverage' surface

Hi,

I am trying to visualize sEEG electrodes on the ‘fsaverage’ brain surface (following the “Working with sEEG data” tutorial).

I am stuck figuring out which coordinate frame transforms (if any) I need to apply to the montage, given that my electrode locations are already in MNI space (which I believe is different from the tutorial).

Specifically, for each participant I have a .csv file containing the MNI coordinates (x, y, z) of each electrode contact on the sEEG shafts (Neuralynx sEEG data). I do not have access to the individual subjects’ recon-all output (electrode localization was done in a different lab). Example coordinate data looks like this (I am assuming these values are in millimeters):

print(coords)  # just a dataframe loaded via pd.read_csv
>>>
    Channels      MNI_1     MNI_2      MNI_3
0       LAMu -22.425952 -4.686374 -16.528827
1       LAM1 -25.003134 -4.285802 -19.544462
2       LAM2 -28.077597 -3.557858 -21.753622
3       LAM3 -33.196102 -1.944738 -21.954092
4       LAM4 -38.982658 -1.034873 -24.233653
..       ...        ...       ...        ...
103   RPINS4  32.841654 -3.428255  18.764148
104   RPINS5  31.769347 -1.041556  26.172012
105   RPINS6  31.037502  0.839024  33.411429
106   RPINS7  29.981158  3.350836  42.209800
107   RPINS8  29.024138  5.230155  49.399291

[108 rows x 4 columns]
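To double-check the millimeter assumption, the coordinate magnitudes can be inspected; a quick sketch (the path is a placeholder, and the file is tab-separated as in the full snippet below):

import numpy as np
import pandas as pd

# sanity check: are the coordinate magnitudes consistent with millimeters?
coords = pd.read_csv("coords_mni.csv", sep="\t")  # placeholder path
xyz = coords[["MNI_1", "MNI_2", "MNI_3"]].to_numpy()
print(np.abs(xyz).max())  # tens of units -> millimeters; values around 0.0x would instead suggest meters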

Conceptually, given that my coordinates are already in MNI space, I thought that the part of the tutorial that transforms the montage through the “head” → “mri” → “mni” coordinate frames would not be needed in my case.
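For reference, my (possibly wrong) understanding of that part of the tutorial is a chain roughly like the sketch below, which needs the subject’s recon-all output that I don’t have ("sample_seeg" is the tutorial’s example subject; montage and subjects_dir are assumed to exist, with the montage in "head" coordinates):

# rough sketch of the head -> mri -> mni chain I believe I can skip
head_mri_t = mne.coreg.estimate_head_mri_t("sample_seeg", subjects_dir)  # head -> mri
mri_mni_t = mne.read_talxfm("sample_seeg", subjects_dir)  # mri -> mni
montage.apply_trans(head_mri_t)
montage.apply_trans(mri_mni_t)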

So I thought I could create a montage directly from the values in the csv file (montage = mne.channels.make_dig_montage(ch_pos, coord_frame="mni_tal"); epochs.set_montage(montage, on_missing='warn')) and plot it on ‘fsaverage’ (full code below). But I am clearly missing something, as the locations are misaligned and possibly also not correctly mapped left/right:

Plotting the montage itself (montage.plot()) looks more meaningful (at least the three shafts whose electrode labels start with L* are indeed on the left side):

The above suggests that my handling of the electrode coordinate frames is off. Any pointers on how to start troubleshooting this would be much appreciated! I don’t think I can share an MWE due to the nature of the data, but I am happy to provide more details if anything is unclear.
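In the meantime, one check I can run is whether set_montage() actually moves the points and which coordinate frame they end up in. A minimal sketch, reusing montage and epochs from the full snippet below (the frames in the comments are what I expect, not what I have verified):

import numpy as np

# sketch: compare positions before and after set_montage()
pos_before = montage.get_positions()
print(pos_before["coord_frame"])  # expecting 'mni_tal' here
epochs.set_montage(montage, on_missing="warn")
pos_after = epochs.get_montage().get_positions()
print(pos_after["coord_frame"])  # expecting 'head' here, i.e. a transform was applied
ch = "LAM1"  # any channel present in both the recording and the csv
print(np.linalg.norm(pos_before["ch_pos"][ch] - pos_after["ch_pos"][ch]))  # shift in meters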

Full code snippet:

import mne
from mne.io import read_raw_fif
from mne.datasets import fetch_fsaverage
import pandas as pd

# input data
subjects_dir = "*/mne_data/MNE-fsaverage-data"
anat_csv = "/path/to/csv/file/with/mni/coordinates/coords_mni.csv"
raw = read_raw_fif("/path/to/seeg/sub-009_preproc_ieeg.fif")  # preprocessed seeg data

# epoch at start of audio playback
epochs = mne.Epochs(raw, event_id="wav_playback", detrend=1, baseline=None)

# load the coordinate values (the file is tab-separated despite the .csv extension)
coords = pd.read_csv(anat_csv, sep="\t")
# only retain labels and x,y,z values
coords = coords[["Channels", "MNI_1", "MNI_2", "MNI_3"]] 

# convert channel coordinate values from millimeters to meters
coords[["MNI_1", "MNI_2", "MNI_3"]] /= 1000
# drop micro channels (labels contain `u`, e.g. LAMu), not interested in these
coords = coords.loc[~coords["Channels"].str.contains("u")]

# create a dict for mne.channels.make_dig_montage()
ch_pos = dict(
    zip(
        coords["Channels"],
        coords[["MNI_1", "MNI_2", "MNI_3"]].to_numpy(),
    )
)

# create montage
montage = mne.channels.make_dig_montage(ch_pos=ch_pos, coord_frame="mni_tal")

epochs.set_montage(montage, on_missing="warn")

brain = mne.viz.Brain(
    "fsaverage",
    subjects_dir=subjects_dir,
    background="white",
    units="m",
)
brain.add_sensors(epochs.info, trans="fsaverage")
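In case it helps with troubleshooting, the same sensor placement can presumably also be rendered with mne.viz.plot_alignment; a sketch, assuming epochs and subjects_dir from the snippet above:

# sketch: show the sensors on fsaverage with plot_alignment for comparison
fig = mne.viz.plot_alignment(
    epochs.info,
    trans="fsaverage",  # built-in fsaverage head<->MRI transform
    subject="fsaverage",
    subjects_dir=subjects_dir,
    surfaces=["pial"],
    coord_frame="mri",
)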

mne.sys_info()

Platform             Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
Python               3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0]
Executable           /home/kriarm/miniconda3/envs/core/bin/python
CPU                  x86_64 (20 cores)
Memory               31.2 GB

Core
├☒ mne               1.7.0.dev158+g925f52282 (outdated, release 1.7.0 is available!)
├☑ numpy             1.26.4 (MKL 2022.1-Product with 10 threads)
├☑ scipy             1.12.0
└☑ matplotlib        3.8.3 (backend=module://matplotlib_inline.backend_inline)

Numerical (optional)
├☑ sklearn           1.4.1.post1
├☑ numba             0.59.0
├☑ nibabel           5.2.0
├☑ nilearn           0.10.2
├☑ dipy              1.9.0
├☑ openmeeg          2.5.7
├☑ pandas            2.2.1
├☑ h5io              0.1.9
├☑ h5py              3.10.0
└☐ unavailable       cupy

Visualization (optional)
├☑ pyvista           0.42.3 (OpenGL 4.5 (Core Profile) Mesa 20.2.6 via llvmpipe (LLVM 11.0.0, 256 bits))
├☑ pyvistaqt         0.11.0
├☑ vtk               9.2.6
├☑ qtpy              2.4.1 (PyQt5=5.15.8)
├☑ ipympl            0.9.3
├☑ pyqtgraph         0.13.3
├☑ mne-qt-browser    0.6.1
├☑ ipywidgets        8.1.1
├☑ trame_client      2.12.6
├☑ trame_server      2.12.1
├☑ trame_vtk         2.6.1
└☑ trame_vuetify     2.3.1

Ecosystem (optional)
├☑ mne-bids          0.14
├☑ neo               0.13.0
├☑ eeglabio          0.0.2-4
├☑ edfio             0.4.0
├☑ mffpy             0.8.0
├☑ pybv              0.7.5
└☐ unavailable       mne-nirs, mne-features, mne-connectivity, mne-icalabel, mne-bids-pipeline

To update to the latest supported release version to get bugfixes and improvements, visit https://mne.tools/stable/install/updating.html

For posterity: plotting my MNI electrode locations directly on the 'fsaverage' surface via brain.add_foci(), instead of going through brain.add_sensors(), worked for me in the end (code and output below).

Given the tutorial, brain.add_sensors() does seem to be the preferred MNE way, but I couldn’t quite wrap my head around all the coordinate frame transforms. Still happy to receive pointers, though.

Full code and output

  SUBJECTS_DIR = "*/mne_data/MNE-fsaverage-data"
  
  # select left/right
  # (see `coords` data frame in first message)
  left_chans = coords.Channels.str.startswith("L")
  right_chans = coords.Channels.str.startswith("R")

  # convert to (n_elecs, 3) arrays
  xyz_left = coords.loc[left_chans, ["MNI_1", "MNI_2", "MNI_3"]].to_numpy()
  xyz_right = coords.loc[right_chans, ["MNI_1", "MNI_2", "MNI_3"]].to_numpy()

  brain = mne.viz.Brain(
      subject="fsaverage",
      cortex="low_contrast",
      hemi='split',
      alpha=0.25,
      background="white",
      subjects_dir=SUBJECTS_DIR,
      size=(1600, 800),
      show=True,
  )

  brain.add_foci(
      xyz_left,
      hemi="lh",
      color="yellow",
      scale_factor=0.3
  )

  brain.add_foci(
      xyz_right,
      hemi="rh",
      color="yellow",
      scale_factor=0.3
  )
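
To keep a record of the result, something like the following should save the rendered views (the filename is just an example):

  # optionally save the rendered figure to disk
  brain.save_image("mni_contacts_on_fsaverage.png")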
