Some confusion about mne.warp_montage_volume

Hi, I'm following the tutorial on locating iEEG electrodes, and at the end we need to call mne.warp_montage_volume. In the doc, it says it needs a base_image, which is:

Path to a volumetric scan (e.g. CT) of the subject.  
Local extrema (max or min) should be nearby montage channel locations.

I'm not quite sure about this. Is any volumetric scan generated by FreeSurfer, like T1.mgz or brain.mgz, good to use too?

Tagging @alexrockhill

@BarryLiu97 Please add appropriate tags to your postings – they're used by some forum members to automatically receive notifications when new posts in their area of expertise pop up. Thanks! I've added the ieeg-and-ecog tag here for you.

Sorry! I'll keep that in mind.


Hi @BarryLiu97, the function uses the image to make an elec_image of all the voxels where each contact is. If the contacts are areas of high intensity (or low), then it will work. Some sites use a post-operative MR, which will have low intensity where the contacts are. From your question, I get the sense you're talking about a pre-operative MR, which will not work because there are no electrodes in it.

What if I just want the MNI coordinates, without the elec_image? I didn't see a parameter to choose this, so it's not supported right now, right?

It's not supported, correct. Dipy has a PR to take points to do the symmetric diffeomorphic registration, but until that merges and is released, it can only morph images, so you need the elec_image.

Hi, I read the code of warp_montage_volume briefly, so here is my understanding.

  1. Change the channel coordinates to RAS, aka the FreeSurfer voxel space.
  2. Then find each channel's related voxels, or, like the doc says, "transform them into a volume". Through this step, for example, the voxels related to channel 1 would be set to 1. The voxels are based on the aligned CT, called image_from.
  3. Register this image_from to fsaverage's brain.mgz. Through this step, the channels' locations are registered to the MNI space, aka the fsaverage space.
  4. Get the center of mass to make each channel a point.
  5. Lastly, use fsaverage's vox2ras_tkr to convert back to surface RAS coordinates. Since image_from has already been registered to fsaverage, this step changes the channels' coordinates to surface coordinates, aka the mri frame.
    If I've misunderstood something somewhere, please correct me.
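For intuition, steps 4 and 5 above can be sketched with NumPy/SciPy on a toy volume (the shape, the channel blob, and the affine below are all made up for illustration; they are not the real elec_image or fsaverage affine):

```python
import numpy as np
from scipy import ndimage

# Hypothetical 3D volume standing in for the elec_image: voxels
# belonging to channel 1 are labeled 1.
elec_image = np.zeros((10, 10, 10))
elec_image[4:6, 4:6, 4:6] = 1  # a 2x2x2 blob for channel 1

# Step 4: collapse the (warped) blob back to a single point per channel
# via its center of mass, in voxel coordinates.
center_vox = ndimage.center_of_mass(elec_image == 1)

# Step 5: convert that voxel coordinate to surface RAS with a toy
# vox2ras_tkr affine (identity scaling, origin shifted to the center).
vox2ras_tkr = np.array([[1., 0., 0., -5.],
                        [0., 1., 0., -5.],
                        [0., 0., 1., -5.],
                        [0., 0., 0., 1.]])
center_ras = vox2ras_tkr @ np.array([*center_vox, 1.])
print(center_vox)       # (4.5, 4.5, 4.5)
print(center_ras[:3])   # [-0.5 -0.5 -0.5]
```

The real function does this per channel on the diffeomorphically warped volume; the sketch only shows the center-of-mass and affine steps in isolation.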

And I've got a little question. Is fsaverage's brain.mgz in RAS coordinates, so that we need to convert the channels' coordinates from surface RAS to RAS? I know T1.mgz is in surface RAS and orig.mgz is in RAS, but I'm not sure about the other files like brain.mgz.

Also, if we've got the trans_matrix between our T1.mgz and fsaverage's T1.mgz using dipy's registration, why isn't something like

montage.apply_trans(trans_matrix)

enough?

To answer your first question about FreeSurfer images and whether they are in RAS or surface RAS: FreeSurfer uses surface RAS basically exclusively, but it's a bit more complicated than that. An image is just a 3D matrix with an affine, so it has coordinates both in RAS (relative to the center of the 3D matrix) and in surface RAS (using FreeSurfer's slightly different formula). The image itself isn't in any coordinate frame, it's just a bunch of voxels, but points in both RAS and surface RAS are perfectly reasonable to use; you just shouldn't mix them up.
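As a concrete illustration of the two frames: for a FreeSurfer-conformed 256³, 1 mm volume the tkr (surface RAS) affine is a fixed matrix with its origin at the volume center, while scanner RAS additionally carries the center offset stored in the header. The c_ras shift below is made up; only the tkr matrix is the standard one:

```python
import numpy as np

# Fixed "tkr" (surface RAS) affine for a conformed 256^3, 1 mm volume:
# origin at the volume center.
vox2ras_tkr = np.array([[-1.,  0., 0.,  128.],
                        [ 0.,  0., 1., -128.],
                        [ 0., -1., 0.,  128.],
                        [ 0.,  0., 0.,    1.]])

# Scanner RAS uses the real center-of-volume offset from the header;
# this c_ras value is hypothetical.
c_ras = np.array([2.0, -10.0, 5.0])
vox2ras = vox2ras_tkr.copy()
vox2ras[:3, 3] += c_ras

# The very same voxel gets different coordinates in the two frames.
voxel = np.array([128., 128., 128., 1.])   # center voxel
surface_ras = (vox2ras_tkr @ voxel)[:3]    # [0. 0. 0.]
scanner_ras = (vox2ras @ voxel)[:3]        # shifted by c_ras
print(surface_ras, scanner_ras)
```

So the same bunch of voxels yields points in either frame depending on which affine you apply, which is why mixing them up silently shifts everything by c_ras.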

To answer your second question: if you compute an affine registration between an individual's T1 and fsaverage's T1, then you have a linear transformation matrix, which is perfectly reasonable but not very precise. This is because it stretches, shears, rotates and translates the two brains to match, but doesn't map them at a voxel-per-voxel level. The brains will generally be aligned, but individual anatomical differences inside them are unaccounted for (e.g. if I have a larger temporal lobe, part of it will be mapped to the parietal lobe of fsaverage). In fact, you don't even have to use Dipy to do this; it's already done for you in the Talairach transform computed during recon-all. See the Working with sEEG data tutorial in the MNE documentation for more details. The point of the symmetric diffeomorphic transform is to match local anatomy as best as possible after the brains are already affine-aligned.
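To see why a single affine can't capture local anatomy, here is a toy example (the matrix and points are made up) showing that one global rule moves every point the same way, no matter where it sits in the brain:

```python
import numpy as np

# One affine = one global rule (scale/rotate/shear/translate) applied
# identically to every point; this matrix is purely illustrative.
affine = np.array([[1.1, 0.,  0.,   2.0],   # slight scale + shift in x
                   [0.,  1.0, 0.,   0.0],
                   [0.,  0.,  0.9, -1.0],
                   [0.,  0.,  0.,   1.0]])

pts = np.array([[10., 20., 30.],
                [-5.,  0., 15.]])
pts_h = np.c_[pts, np.ones(len(pts))]       # homogeneous coordinates
warped = (affine @ pts_h.T).T[:, :3]
print(warped)
# Every point obeys the same matrix, so two brains end up globally
# aligned, but voxel-level anatomical differences remain uncorrected;
# a diffeomorphic warp instead assigns a displacement per voxel.
```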

Thank you so much!
And is my understanding of the warp_montage_volume function right?

Yes, that's exactly what is going on in the code, you've described it spot on.

Well, actually, let me make these changes:

  • Change the channel coordinates from surface RAS to voxels of the CT and MR image (they are aligned)
  • Then find each channel's related voxels, or, like the doc says, "transform them into a volume". Through this step, for example, the voxels related to channel 1 would be set to 1. The voxels are based on the aligned CT, called image_from.
  • Register this image_from to fsaverage's brain.mgz using the symmetric diffeomorphic registration. Through this step, the channels' locations are registered to the MNI space, aka the fsaverage space.
  • Get the center of mass to make each channel a point.
  • Lastly, use fsaverage's vox2ras_tkr to convert back to surface RAS coordinates. Since image_from has already been registered to fsaverage, this step changes the channels' coordinates to surface coordinates, aka the mri frame.

The only changes were that we first have to go to voxels from surface RAS, not to RAS, and to say explicitly that we're using the symmetric diffeomorphic registration.

Thanks a lot for your patience!!!
It seems that I had misunderstood voxels, RAS and surface RAS.
Like a T1.mgz's dataobj: it's just a 256 x 256 x 256 matrix with values. Using this matrix we could plot it with a package like matplotlib with a gray cmap.
If we apply_trans(vox2ras, index_of_a_voxel_in_the_matrix), we get the RAS coordinate of that voxel.
The point is to convert the channels' coordinates to indices in the 256 x 256 x 256 matrix.
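That last conversion can be sketched with plain NumPy, assuming a conformed 256³, 1 mm volume (the vox2ras_tkr matrix is the standard FreeSurfer one for such a volume; the channel coordinate is made up):

```python
import numpy as np

# Standard vox2ras_tkr for a conformed 256^3, 1 mm FreeSurfer volume.
vox2ras_tkr = np.array([[-1.,  0., 0.,  128.],
                        [ 0.,  0., 1., -128.],
                        [ 0., -1., 0.,  128.],
                        [ 0.,  0., 0.,    1.]])

# Going from a surface RAS point back to a matrix index means applying
# the inverse affine, then rounding to the nearest voxel.
ras2vox = np.linalg.inv(vox2ras_tkr)
ch_surface_ras = np.array([10., -20., 35., 1.])  # mm, homogeneous
vox = (ras2vox @ ch_surface_ras)[:3]
idx = np.round(vox).astype(int)  # index into the 256^3 array
print(idx)   # [118  93 108]
```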