Question re sEEG source localization using MNE

Hello,
I would like to replicate the tutorial on source localization using sEEG data (https://mne.tools/stable/auto_tutorials/clinical/20_seeg.html) with our data.

Below is what we have,

  1. We have a CT image that has the sEEG electrodes, and a T1 MRI registered to the CT space
  2. We then extracted sEEG electrode coordinates (in CT space)
  3. We also have FreeSurfer outputs of the T1 MRI (in CT space)
  4. We identified a seizure segment in the sEEG data that we want to source localize

I am very familiar with EEG processing using MNE, but I am unsure what other inputs are needed to make this work, particularly which space the MRI and the sEEG coordinates should be in.

Could you please kindly advise us on how to replicate your analysis using our data?

Thanks in advance

Yoga

Hi Yoga,

If you have the electrode contact coordinates in CT space, you really just need to transform them to MR space, since you want to source localize to the brain. To do that, you apply the CT affine to the coordinates, then the CT-MR alignment affine, and then the inverse of the T1 affine, as described in the manual alignment section of Locating intracranial electrode contacts — MNE 1.0.dev0 documentation. A rough sketch with placeholder values is below.
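For concreteness, here is a minimal sketch of that chain of transforms using nibabel and numpy. The file names, the contact positions, and reg_affine are placeholders for your own data, and the sketch assumes reg_affine maps CT scanner RAS to MR scanner RAS (use its inverse if your alignment matrix goes the other way):

import numpy as np
import nibabel as nib

ct = nib.load('CT.nii.gz')  # placeholder file names
t1 = nib.load('T1.mgz')     # FreeSurfer T1, e.g. <subjects_dir>/<subject>/mri/T1.mgz

# placeholder: your 4x4 CT-to-MR alignment affine (from a volume registration
# or a manual alignment), assumed to take CT scanner RAS to MR scanner RAS
reg_affine = np.eye(4)

ct_vox = np.array([[10., 20., 30.]])  # placeholder contact positions in CT voxels

ct_ras = nib.affines.apply_affine(ct.affine, ct_vox)                  # CT voxels -> CT scanner RAS (mm)
mr_ras = nib.affines.apply_affine(reg_affine, ct_ras)                 # CT scanner RAS -> MR scanner RAS (mm)
t1_vox = nib.affines.apply_affine(np.linalg.inv(t1.affine), mr_ras)   # MR scanner RAS -> T1 voxels

# T1 voxels -> FreeSurfer surface RAS (mm); get_vox2ras_tkr works for .mgz headers
surf_ras_mm = nib.affines.apply_affine(t1.header.get_vox2ras_tkr(), t1_vox)
surf_ras_m = surf_ras_mm / 1000.  # MNE montages expect meters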

At that point, your electrode contact coordinates would be in MRI space (this is FreeSurfer surface RAS space), and then you can follow the steps in the tutorial, such as making a source space using the source vertices closest to each contact.

I think you’re really close, you just need your contacts in MRI coordinates. Let me know if that helps or you run into any more issues.

Best,

Alex

Thanks, Alex. Let me give this a try and get back to you…

Yoga

This worked, thanks. I was able to align the CT image to the MRI and get the electrode coordinates in MR space. However, I still have a question: how can I compute the head coordinates from this? It seems that the raw EEG object is supposed to have the electrode positions in the head coordinate system, but I do not have the localizer information from the EDF file.
Can I bypass the head coordinate conversion and directly perform source localization?

Hi, if you have the FreeSurfer reconstruction results, this can easily be done. I have code for this.

import mne


def get_montage(ch_pos, subject, subjects_dir):
    """Get a head-coordinate montage from surface RAS (aka "mri") coordinates.

    Parameters
    ----------
    ch_pos : dict
        Dictionary of channel positions. Keys are channel names and values
        are 3D coordinates - arrays of shape (3,) - in FreeSurfer surface RAS
        (MNE "mri") coordinates, in meters.
    subject : str
        The name of the subject in FreeSurfer.
    subjects_dir : str
        The path to your FreeSurfer subjects directory.

    Returns
    -------
    montage_mri : mne.channels.DigMontage
        The montage in surface RAS ("mri") coordinates.
    montage : mne.channels.DigMontage
        The montage in head coordinates.
    """
    # estimate the head->mri transform from the FreeSurfer recon, then invert it
    subj_trans = mne.coreg.estimate_head_mri_t(subject, subjects_dir)
    mri_to_head_trans = mne.transforms.invert_transform(subj_trans)
    print('Start transforming mri to head')
    print(mri_to_head_trans)

    # build the montage in surface RAS, add estimated fiducials,
    # then move it into head coordinates
    montage_mri = mne.channels.make_dig_montage(ch_pos, coord_frame='mri')
    montage = montage_mri.copy()
    montage.add_estimated_fiducials(subject, subjects_dir)
    montage.apply_trans(mri_to_head_trans)
    return montage_mri, montage

If your FreeSurfer subjects directory is /home/yoga/subjects and the subject's name is somebody, then you just need to do

_, montage = get_montage(ch_pos, 'somebody', '/home/yoga/subjects')
raw.set_montage(montage)

ch_pos is a dict; its keys are the channel names and its values are the surface RAS coordinates (in meters), like
ch_pos = {'A1': np.array([1, 2, 3])}


However, if you used the locating GUI from MNE, the head coordinates should already be stored with the iEEG data.
You can get the coordinates from the raw object using

import numpy as np

ch_names, chs = raw.info['ch_names'], raw.info['chs']
pos = np.asarray([ch['loc'][:3] for ch in chs])

Hi,
Thanks for the code. I tried it, but I still cannot get it to work. Here is my code for the whole pipeline. The plot of the montage does not look right, and I am not sure what is wrong.

import numpy as np
import pandas as pd
import nibabel as nib
import mne

subject = 'somebody'                  # FreeSurfer subject name
subjects_dir = '/home/yoga/subjects'  # FreeSurfer subjects directory

# Load CT and T1 images
CT_orig = nib.load('CT.nii.gz')
T1 = nib.load('T1.nii.gz')

# volume registration
reg_affine, _ = mne.transforms.compute_volume_registration(
     CT_orig, T1, pipeline='all')

# create a CT to MR transformation
ct_to_mri_transform = mne.transforms.Transform('unknown', 'mri', trans=reg_affine)

# load CSV with electrode coordinates in CT space (in voxels)
coord_csv = pd.read_csv('electrode_coords.csv')
ct_pos = coord_csv[['X','Y','Z']].values

# apply CT affine to get channel positions in scanner coordinates
ct_pos_sc = nib.affines.apply_affine(CT_orig.affine, ct_pos)

# create ch_pos in CT coordinates (in m)
ch_pos = dict(zip(coord_csv['Name'].values, ct_pos_sc/1000.0)) # divide by 1000 to convert to meters

# create montage in CT space
montage_ct = mne.channels.make_dig_montage(ch_pos, coord_frame='unknown')

# apply CT to MR transform
montage_mri = montage_ct.copy()
montage_mri.apply_trans(ct_to_mri_transform)

# add fiducials
montage = montage_mri.copy() 
montage.add_estimated_fiducials(subject, subjects_dir)

# estimate head to MR transform and its inverse
subj_trans = mne.coreg.estimate_head_mri_t(subject, subjects_dir)
mri_to_head_trans = mne.transforms.invert_transform(subj_trans)

# apply MR to head transform
montage.apply_trans(mri_to_head_trans)
montage.plot()

Please use

reg_affine, _ = mne.transforms.compute_volume_registration(
     CT_orig, T1, pipeline='rigid')

'rigid' registration does not align well, so I tried 'all'. I understand 'all' includes SDR, and maybe that is why the CT to MRI transformation using just the reg_affine is not correct. I also tried the 'affines' pipeline, but that did not work either.
Is 'rigid' the only correct approach?

If the CT and MRI belong to the same subject, the skulls in the two images should have the same shape. Rigid registration is the right choice for aligning two images in this situation, because it only translates and rotates the moving image.
In fact, when locating iEEG contacts you should only use rigid registration; otherwise the absolute distances between the contacts will be changed, which is very bad.
Affine and SDR registration are typically used when you want to align brain tissue: because the tissue is soft, its shape differs even within the same subject.
The reason for your failure is the bad initial alignment of the MRI and CT. The solution has already been proposed by Alex above; see the Note in Locating intracranial electrode contacts — MNE 1.0.dev0 documentation. In short, do a manual alignment in Freeview, take the registration matrix from that manual alignment as the starting point, and then run the rigid registration with DIPY. A sketch of that workflow is shown below.
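Here is a minimal, hedged sketch of that workflow. The file names are placeholders, manual_reg.txt stands in for wherever you saved the 4x4 matrix of your manual Freeview alignment, and the starting_affine keyword is an assumption about recent versions of mne.transforms.compute_volume_registration, so check the documentation of the version you have installed:

import numpy as np
import nibabel as nib
import mne

CT_orig = nib.load('CT.nii.gz')
T1 = nib.load('T1.nii.gz')

# manual pre-alignment from Freeview, saved as a plain-text 4x4 matrix
# (placeholder; adapt to however you exported the registration)
manual_affine = np.loadtxt('manual_reg.txt')

# rigid-only registration, initialized with the manual alignment
# NOTE: starting_affine is an assumption about recent MNE versions
reg_affine, _ = mne.transforms.compute_volume_registration(
    CT_orig, T1, pipeline='rigid', starting_affine=manual_affine)

# resample the CT into T1 space to visually check the alignment
CT_aligned = mne.transforms.apply_volume_registration(CT_orig, T1, reg_affine)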

Thank you. I was finally able to figure this out. It turns out the MRI coordinates should be FreeSurfer surface RAS coordinates; I was using native scanner coordinates, and that was the issue.
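For anyone who hits the same issue, here is a minimal sketch of that last conversion, assuming the coordinates are in the T1's native scanner RAS (in mm) and the T1 is the FreeSurfer T1.mgz (file name and positions are placeholders):

import numpy as np
import nibabel as nib

# FreeSurfer T1, e.g. <subjects_dir>/<subject>/mri/T1.mgz (placeholder path)
t1 = nib.load('T1.mgz')

vox2ras = t1.header.get_vox2ras()          # voxels -> scanner RAS (mm)
vox2ras_tkr = t1.header.get_vox2ras_tkr()  # voxels -> surface RAS / tkrRAS (mm)

# scanner RAS -> voxels -> surface RAS
scanner_to_surface = vox2ras_tkr @ np.linalg.inv(vox2ras)

scanner_ras_mm = np.array([[10.0, -20.0, 30.0]])  # placeholder contact positions
surf_ras_mm = nib.affines.apply_affine(scanner_to_surface, scanner_ras_mm)
surf_ras_m = surf_ras_mm / 1000.0  # MNE montages expect meters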
