MNE coregistration misaligned

Hello,

I need help with importing my digitized fNIRS sensor positions, adjusting them, and ensuring proper coregistration. The final positions I obtain do not align with the head template when I build the montage with make_dig_montage(). I’ve found very few resources on this topic and would greatly appreciate guidance on what I might be doing wrong.

Issue:

After following the process, my sensor positions appear misaligned, with some sensors placed outside the head when plotting the montage. I’m unsure where the issue lies, whether it’s in the initial import, the spherical corrections, or the transformation step.

Environment:

  • MNE Version: 1.3.dev0
  • Operating System: Windows 10

Code Snippet:

# Import and read digitized Cartesian positions (from a Polhemus digitizer,
# recorded on a spherical head model)
import numpy as np
import mne
from mne.io import read_raw_fif
# NB: these helpers are private in mne.transforms (leading underscore)
from mne.transforms import (_cart_to_sph, _sph_to_cart, apply_trans,
                            _fit_matched_points as fit_matched_points,
                            _quat_to_affine as quat_to_affine)

fname = 'digit_electrodes.fif'
raw = read_raw_fif(fname, preload=False)
montage = raw.get_montage()
digs = montage.dig
ch_pos = montage.get_positions()['ch_pos']
nasion_pos = montage.get_positions()['nasion']
lpa_pos = montage.get_positions()['lpa']
rpa_pos = montage.get_positions()['rpa']

group_names = list(ch_pos.keys())

# Stack the channel positions, then append the 'nasion', 'lpa', and 'rpa' fiducials
cartesian_positions = np.vstack([ch_pos[ch_name] for ch_name in group_names] +
                                [nasion_pos, lpa_pos, rpa_pos])

# Convert to spherical coordinates in order to correct the positions by giving
# all sensors the same average radius and centering the centroid at 0
sphere_positions = _cart_to_sph(cartesian_positions)
cartesian_pos_centroid = np.average(cartesian_positions, axis=0)
sphere_pos_centroid = _cart_to_sph(cartesian_pos_centroid)
# average the radius and overwrite it
avg_radius = np.average(sphere_positions, axis=0)[0]
sphere_pos_centroid[0, 0] = avg_radius
# convert back to cartesian
pos_centroid = _sph_to_cart(sphere_pos_centroid)[0, :]
sphere_positions[:, 0] = avg_radius
all_source_points = _sph_to_cart(sphere_positions)
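For context, the radius-normalization step above can be reproduced with plain NumPy, avoiding MNE's private `_cart_to_sph`/`_sph_to_cart` helpers. This is only a sketch of the same math (it shows the radius part, not the centroid recentering):

```python
import numpy as np

def normalize_radius(points):
    """Project points onto a sphere whose radius is the mean sensor radius.

    Keeps each point's direction from the origin and replaces its distance
    with the average distance of all points -- the same idea as overwriting
    the rho column after _cart_to_sph and converting back to Cartesian.
    """
    radii = np.linalg.norm(points, axis=1)      # current distance of each point
    avg_radius = radii.mean()                   # common radius for all sensors
    directions = points / radii[:, np.newaxis]  # unit vectors (assumes no point at the origin)
    return directions * avg_radius

# Toy example: three points at different distances from the origin
pts = np.array([[0.08, 0.00, 0.00],
                [0.00, 0.10, 0.00],
                [0.00, 0.00, 0.12]])
out = normalize_radius(pts)
print(np.linalg.norm(out, axis=1))  # all three radii become 0.10
```

Note that this assumes the positions are already expressed relative to the sphere's center; if not, the centroid should be subtracted first, as the code above does via `pos_centroid`.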

Then, for some source points located at EEG positions (Fp1 and Fp2), I tried to match these source points (fNIRS positions from the digitizer) to the target points (template EEG positions):

# Sample source and target points
source_points = np.array([[-3.13486046e-02,  9.30543837e-02,  2.78678212e-02],
                          [ 1.99626270e-02,  9.54158519e-02,  3.02618162e-02]])
target_points = np.array([[-8.355e-02, -2.192e-02, -5.039e-02],
                          [-8.355e-02,  2.192e-02,  5.039e-02]])

# Estimate the quaternion and scale factor
quat, scale_factor = fit_matched_points(source_points, target_points)

# Convert quaternion to transformation matrix
transformation_matrix = quat_to_affine(quat)
print("Transformation Matrix:\n", transformation_matrix)

transi = mne.transforms.Transform(fro='unknown', to='unknown',
                                  trans=transformation_matrix)

And I applied this transformation matrix to all my source points:
transformed_points = apply_trans(transi, all_source_points)
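As a sanity check on the fit-and-apply steps above, the same rotation + uniform scale + translation can be estimated with a plain-NumPy Kabsch/Umeyama fit. `fit_similarity` is a hypothetical helper written for illustration, not MNE's `fit_matched_points` implementation:

```python
import numpy as np

def fit_similarity(src, tgt):
    """Least-squares similarity transform (rotation, uniform scale,
    translation) mapping src points onto tgt points (Kabsch/Umeyama)."""
    src_c = src - src.mean(axis=0)            # center both point sets
    tgt_c = tgt - tgt.mean(axis=0)
    H = src_c.T @ tgt_c                       # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                        # optimal rotation
    scale = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = tgt.mean(axis=0) - scale * R @ src.mean(axis=0)
    affine = np.eye(4)                        # assemble a 4x4 affine
    affine[:3, :3] = scale * R
    affine[:3, 3] = t
    return affine

src = np.array([[-3.13486046e-02, 9.30543837e-02, 2.78678212e-02],
                [ 1.99626270e-02, 9.54158519e-02, 3.02618162e-02]])
tgt = np.array([[-8.355e-02, -2.192e-02, -5.039e-02],
                [-8.355e-02,  2.192e-02,  5.039e-02]])
affine = fit_similarity(src, tgt)
mapped = (affine[:3, :3] @ src.T).T + affine[:3, 3]
print(np.abs(mapped - tgt).max())  # effectively zero: two pairs can always be matched
```

One caveat worth noting: with only two point correspondences, the rotation about the axis joining them is mathematically undetermined, so any solver is free to pick an arbitrary roll around that axis. At least three non-collinear correspondences are needed to pin down the full rotation.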

However, I obtained suspicious results with sensors placed outside the head when plotting the montage:


# Create and display the montage based on the skull landmarks and optode positions
# (make_dig_montage expects ch_pos as a dict mapping channel names to positions)
ch_pos_dict = dict(zip(group_names, transformed_points[:len(group_names)]))
dig_montage = mne.channels.make_dig_montage(ch_pos=ch_pos_dict, nasion=nasion_pos,
                                            lpa=lpa_pos, rpa=rpa_pos,
                                            coord_frame='head')
fig = dig_montage.plot()
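Before plotting, a cheap NumPy-only sanity check is to look at each transformed sensor's distance from the origin: in MNE's head coordinate frame (metres), scalp points typically fall somewhere around 0.05–0.15 m, so values far outside that range often point to a unit mix-up (mm vs. m) or a wrong coordinate frame. The bounds below are rough assumptions, adjust them per subject:

```python
import numpy as np

def summarize_radii(points, lo=0.05, hi=0.15):
    """Report distances from the origin and flag suspicious points.

    lo/hi are rough scalp-distance bounds in metres (illustrative values).
    """
    radii = np.linalg.norm(points, axis=1)
    suspicious = (radii < lo) | (radii > hi)
    print(f"radius range: {radii.min():.3f}-{radii.max():.3f} m, "
          f"{suspicious.sum()} suspicious point(s)")
    return radii, suspicious

# Toy example: two plausible scalp points and one clearly wrong one
pts = np.array([[0.00, 0.09, 0.02],
                [0.07, 0.00, 0.05],
                [9.00, 0.00, 2.00]])   # ~9 m from the origin: almost certainly wrong units
radii, bad = summarize_radii(pts)
```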

Can anyone help me identify what might be going wrong here? Is there something I’m missing in the adjustment or transformation steps? Any insights would be highly appreciated.

Thank you very much for your time and help!

Best regards,
Sélima