@BarryLiu97 Please add appropriate tags to your postings – they’re used by some forum members to automatically receive notifications when new posts in their area of expertise pop up. Thanks! I’ve added the ieeg-and-ecog tag here for you.
Hi @BarryLiu97, the function uses the image to make an elec_image of all the voxels where each contact is. If the contacts are areas of high intensity (or low), then it will work. Some sites use a post-operative MR, which will have low intensity where the contacts are. From your question, I get the sense you’re talking about a pre-operative MR, which will not work because there are no electrodes in it.
It’s not supported, correct. Dipy has a PR to apply the symmetric diffeomorphic registration to points, but until that merges and is released it can only morph images, so you need the elec_image.
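For context, here is a rough sketch of how these pieces fit together in current MNE. The paths, the subject name, the placeholder montage, and keyword values like thresh are assumptions for illustration, not your data; check the sEEG tutorial for the exact recommended call:

```python
import mne
import nibabel as nib

subjects_dir = "/path/to/subjects_dir"   # placeholder: your recon-all output dir
subject = "sub-01"                       # placeholder subject name

# CT aligned to the subject's T1: the contacts show up as bright voxels, which
# is what warp_montage_volume uses to build the elec_image
CT_aligned = nib.load("/path/to/CT_aligned.mgz")

# Individual and template brains used for the registration
subject_brain = nib.load(f"{subjects_dir}/{subject}/mri/brain.mgz")
template_brain = nib.load(f"{subjects_dir}/fsaverage/mri/brain.mgz")

# Affine + symmetric diffeomorphic (SDR) registration between the two brains
reg_affine, sdr_morph = mne.transforms.compute_volume_registration(
    subject_brain, template_brain, pipeline="all")

# Placeholder montage: contact positions in the subject's surface RAS ('mri')
# frame, in meters
montage = mne.channels.make_dig_montage(
    ch_pos={"LENT 1": [0.025, -0.010, 0.040]}, coord_frame="mri")

# Warp the montage: contacts -> elec_image in CT voxels -> morphed to fsaverage
montage_warped, elec_image, warped_elec_image = mne.warp_montage_volume(
    montage, CT_aligned, reg_affine, sdr_morph,
    subject_from=subject, subjects_dir_from=subjects_dir,
    subject_to="fsaverage", subjects_dir_to=subjects_dir, thresh=0.25)
```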
Hi, I briefly read the code of warp_montage_volume, so here is my understanding.
Change the channel coordinates to RAS, aka the freesurfer voxel space.
Then find each channel’s related voxels, or as the doc says, “transform them into a volume”. In this step, for example, the voxels related to channel 1 would be set to the value 1. The voxels are based on the aligned CT, called image_from.
Register this image_from to fsaverage’s brain.mgz. Through this step, the channels’ locations are registered to MNI space, aka the fsaverage space.
Get the center of mass to make each channel a point.
Lastly, use fsaverage’s vox2ras_tkr to convert back to surface RAS coordinates. Since the image_from has already been registered to fsaverage, this step changes the channels’ coordinates to surface coordinates, aka the mri frame.
If I have misunderstood something somewhere, please correct me.
And I’ve got a little question. Is fsaverage’s brain.mgz in RAS coordinates, so that we need to convert the channels’ coordinates from surface RAS to RAS? I know T1.mgz is in surface RAS and orig.mgz is in RAS, but I’m not sure about the other files like brain.mgz.
And also, if we’ve got the trans_matrix between our T1.mgz and fsaverage’s T1.mgz using dipy’s registration, why something like
To answer your first question about FreeSurfer images and whether they are in RAS or surface RAS: FreeSurfer uses surface RAS basically exclusively, but it’s a bit more complicated than that. An image is just a 3D matrix with an affine, so it has coordinates both in RAS (relative to the center of the 3D matrix) and in surface RAS (using FreeSurfer’s slightly different formula). So the image itself isn’t in any coordinate frame, it’s just a bunch of voxels, but points in both RAS and surface RAS are perfectly reasonable to use; you just shouldn’t mix them up.
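To make that concrete, here is a small sketch (the path is a placeholder) showing that the same voxel index of an image like brain.mgz can be expressed either in RAS via the image’s vox2ras affine or in surface RAS via its vox2ras_tkr affine:

```python
import numpy as np
import nibabel as nib

# Load any FreeSurfer volume, e.g. fsaverage's brain.mgz (path is illustrative)
img = nib.load("/path/to/subjects_dir/fsaverage/mri/brain.mgz")

vox2ras = img.header.get_vox2ras()          # voxel index -> RAS
vox2ras_tkr = img.header.get_vox2ras_tkr()  # voxel index -> surface RAS

ijk = np.array([128, 128, 128, 1.0])        # a voxel index, homogeneous coords

ras = vox2ras @ ijk              # that voxel's position in RAS (mm)
surface_ras = vox2ras_tkr @ ijk  # the same voxel in surface RAS (mm)

print(ras[:3], surface_ras[:3])  # same voxel, two different coordinate frames
```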
To answer your second question: if you compute an affine registration between an individual’s T1 and the fsaverage T1, you get a linear transformation matrix, which is perfectly reasonable but not very precise. That is because it stretches, shears, rotates and translates the two brains to match, but doesn’t map them at a voxel-by-voxel level, so the brains will generally be aligned while individual differences inside them remain unaccounted for (e.g. if I have a larger temporal lobe, part of it will be mapped to the parietal lobe of fsaverage). In fact, you don’t even have to use Dipy to do this; it’s already done for you by the Talairach transform computed during recon-all. See Working with sEEG data — MNE 1.0.dev0 documentation for more details. The point of the symmetric diffeomorphic transform is to match local anatomy as well as possible after the brains have already been affine-aligned.
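As a hedged sketch of that affine-only route (the subject name, path, and example position are placeholders): the Talairach transform from recon-all can be read with mne.read_talxfm and applied to a point in the subject’s surface RAS frame, keeping in mind that MNE transforms expect positions in meters:

```python
import mne
from mne.transforms import apply_trans

subjects_dir = "/path/to/subjects_dir"  # placeholder path

# Affine Talairach transform computed by recon-all (surface RAS -> MNI),
# so no separate Dipy affine registration is needed for this step
mri_mni_t = mne.read_talxfm("sub-01", subjects_dir=subjects_dir)

# A contact position in the subject's surface RAS frame, in meters
pos_surface_ras = [0.032, -0.015, 0.041]

pos_mni = apply_trans(mri_mni_t, pos_surface_ras)
print(pos_mni)  # approximate MNI position from the affine transform alone
```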
Change the channel coordinates from surface RAS to voxels of the CT and MR image (they are aligned)
Then find each channel’s related voxels, or as the doc says, “transform them into a volume”. In this step, for example, the voxels related to channel 1 would be set to the value 1. The voxels are based on the aligned CT, called image_from.
Register this image_from to fsaverage’s brain.mgz using the symmetric diffeomorphic registration. Through this step, the channels’ locations are registered to MNI space, aka the fsaverage space.
Get the center of mass to make each channel a point.
Lastly, use fsaverage’s vox2ras_tkr to convert back to surface RAS coordinates. Since the image_from has already been registered to fsaverage, this step changes the channels’ coordinates to surface coordinates, aka the mri frame.
The only changes were that we first have to go from surface RAS to voxels, not to RAS, and to say explicitly that we’re using the symmetric diffeomorphic registration.
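To illustrate the coordinate bookkeeping around those steps, here is a rough sketch. The paths and the example position are placeholders, and the middle step is only a stand-in for the actual SDR morph that warp_montage_volume applies internally:

```python
import numpy as np
import nibabel as nib
from mne.transforms import apply_trans

subjects_dir = "/path/to/subjects_dir"  # placeholder

# The subject image the electrodes are localized in (CT aligned to the T1)
image_from = nib.load(f"{subjects_dir}/sub-01/CT/CT_aligned.mgz")
fsaverage_brain = nib.load(f"{subjects_dir}/fsaverage/mri/brain.mgz")

# 1) surface RAS (mm) -> voxel indices of image_from
ras_tkr2vox = np.linalg.inv(image_from.header.get_vox2ras_tkr())
ch_pos_mm = np.array([25.0, -10.0, 40.0])   # example contact, surface RAS in mm
ch_vox = apply_trans(ras_tkr2vox, ch_pos_mm)

# 2)-4) build elec_image from these voxels, morph it onto fsaverage's voxel
#        grid with the SDR transform, and take the center of mass of each
#        contact's warped voxels (this is what warp_montage_volume does)
warped_vox = ch_vox  # placeholder for the morphed center of mass

# 5) fsaverage voxel indices -> fsaverage surface RAS (mm)
vox2ras_tkr = fsaverage_brain.header.get_vox2ras_tkr()
ch_pos_fsaverage = apply_trans(vox2ras_tkr, warped_vox)
print(ch_pos_fsaverage)
```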
Thanks a lot for your patience!!!
It seems that I had misunderstood voxels, RAS and surface RAS.
Take T1.mgz’s dataobj, for example: it’s just a 256 x 256 x 256 matrix of values. Using this matrix we could plot it with a package like matplotlib and a gray cmap.
If we call apply_trans(vox2ras, index_of_a_voxel_in_the_matrix), we will get the RAS coordinate of this voxel.
The point is to convert the channels’ coordinates to indices in the 256 x 256 x 256 matrix.
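For example, a minimal sketch of that idea (the path is a placeholder):

```python
import numpy as np
import nibabel as nib
import matplotlib.pyplot as plt
from mne.transforms import apply_trans

# T1.mgz is just a 256 x 256 x 256 array plus an affine
t1 = nib.load("/path/to/subjects_dir/sub-01/mri/T1.mgz")
data = np.asarray(t1.dataobj)

# Show one slice with a gray colormap
plt.imshow(data[:, :, 128].T, cmap="gray", origin="lower")
plt.show()

# voxel index -> RAS coordinate of that voxel
vox2ras = t1.header.get_vox2ras()
ras = apply_trans(vox2ras, [128, 128, 128])

# and the inverse takes a channel's RAS position back to an index in the array
vox = apply_trans(np.linalg.inv(vox2ras), ras)
print(ras, np.round(vox).astype(int))
```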