Defining a montage for MEG data from scratch

  • MNE version: 1.0.2
  • Operating system: Windows 10

I am a new user of both Python and MNE, and I am working with a public dataset: https://drum.lib.umd.edu/handle/1903/21184

The dataset is not in a standard format, and I am struggling to define a montage. After reading through many documents, I think I should use this function:

mne.channels.make_dig_montage(ch_pos=None, nasion=None, lpa=None, rpa=None, hsp=None, hpi=None, coord_frame='unknown')

However, the .mat data has 192 columns: 157 are MEG data and 8 seem to contain location information, but there is no further documentation. How should I map these digitized numbers to the required arguments for defining a montage, such as ch_pos, nasion, lpa, and rpa?

The only hint is in a reference paper, which mentions that "At the beginning of each recording session, each participant’s head shape was digitized with a 3-Space Fastrak system (Polhemus), including 3 fiducial points and 5 marker positions" (Neuro-current response functions: A unified approach to MEG source analysis under the continuous stimuli paradigm, PubMed).

I would be so grateful if you could help me with that.

Hi, great question and welcome to the forum! Is there any way to get the raw data directly from the device instead of the modified .mat format? Ideally that’s where you’d want to start: you don’t have control over how the data were processed between recording and the .mat export, and there might have been mistakes or preprocessing choices you would not have made. It also saves the extra work of defining the montage yourself, since MNE has great file readers for most (maybe all) of the major MEG manufacturers that will do it for you. Just checking that there isn’t an easy solution before starting a more difficult process!

Thank you so much for your guidance. Unfortunately, I do not have the data from the device; I only have the .mat files from the link I shared, with no information beyond the sampling frequency. I found that of the 192 columns, 157 are MEG data and maybe 8 of them are digitized positions. With this information, I read through many documents but still could not find a standard way to define a montage for this problem!

To my knowledge, the locations for MEG are stored in raw.info['chs'][idx]['loc'][:3] where idx is the index of the channel. See this code snippet for example:

import os.path as op
import mne

data_path = mne.datasets.sample.data_path()
raw = mne.io.read_raw(op.join(
    data_path, 'MEG', 'sample', 'sample_audvis_filt-0-40_raw.fif'))
print(raw.info['chs'][10])

Each info['chs'][idx] entry also has a ch_name, so you should match on that. You might have to use:

with raw.info._unlock():
    raw.info['chs'][idx]['loc'][:3] = [x, y, z]

where x, y and z are the locations for each sensor in meters, in device coordinates.
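
For example, if you manage to recover the sensor coordinates from the .mat file, a minimal sketch of filling them in could look like this (sensor_positions is a hypothetical dict you would have to build yourself; the names and values are placeholders, not part of the dataset):

# A minimal sketch, assuming the sensor positions have already been
# extracted from the .mat file into a dict mapping channel name -> (x, y, z)
# in meters, in device coordinates. 'sensor_positions' is a hypothetical
# variable; the values below are placeholders.
sensor_positions = {'MEG001': (0.01, 0.05, 0.10),
                    'MEG002': (0.02, 0.04, 0.11)}  # ... one entry per channel

with raw.info._unlock():
    for idx, ch in enumerate(raw.info['chs']):
        if ch['ch_name'] in sensor_positions:
            raw.info['chs'][idx]['loc'][:3] = sensor_positions[ch['ch_name']]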

You’ll also want to add the fiducials (right and left pre-auricular points and nasion) so that you can define a device->head transformation as well. You can do that with mne.channels.make_dig_montage, putting in the fiducials and any extra head points and EEG channels, and then use mne.channels.compute_dev_head_t.
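
A rough sketch of that second step, assuming the 8 extra columns really are the 3 fiducials plus the 5 marker (HPI) positions mentioned in the paper (all coordinate values below are placeholders you would replace with the digitized positions, in meters):

import numpy as np
import mne

# Placeholder fiducials in the head coordinate frame (meters).
nasion_pos = [0.0, 0.10, 0.0]
lpa_pos = [-0.08, 0.0, 0.0]
rpa_pos = [0.08, 0.0, 0.0]

# Placeholder marker (HPI) positions; you need them both as digitized in the
# head frame and as measured by the MEG device in the device frame.
hpi_head = np.array([[0.05, 0.06, 0.04],
                     [-0.05, 0.06, 0.04],
                     [0.07, -0.02, 0.03],
                     [-0.07, -0.02, 0.03],
                     [0.0, 0.09, 0.06]])
hpi_dev = hpi_head.copy()  # in reality these differ from the head-frame values

montage_head = mne.channels.make_dig_montage(
    nasion=nasion_pos, lpa=lpa_pos, rpa=rpa_pos, hpi=hpi_head,
    coord_frame='head')
montage_dev = mne.channels.make_dig_montage(hpi=hpi_dev, coord_frame='meg')

# Combining the two montages lets MNE estimate the device->head transform.
dev_head_t = mne.channels.compute_dev_head_t(montage_head + montage_dev)
print(dev_head_t)

The resulting transform can then be stored in raw.info['dev_head_t'] using the same _unlock() pattern as above.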

Thank you so much again. I will read through the documents. The problem is that there was no info object for the data. I defined one, but it does not contain the information needed. Is there a way to find a standard info object, montage, device coordinates, etc., for MEG data with 157 channels that works in MNE?
I defined an info object like this:

n_channels = 157
sampling_freq = 1000
ch_names = [f'MEG{n:03}' for n in range(1, 158)]
ch_types = ['grad'] * 157
info = mne.create_info(ch_names, ch_types=ch_types, sfreq=sampling_freq)
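
and I would then wrap the data in a Raw object, something along these lines (random numbers stand in here for the real .mat columns, which I would load with scipy.io.loadmat):

import numpy as np
import mne

sampling_freq = 1000
ch_names = [f'MEG{n:03}' for n in range(1, 158)]
info = mne.create_info(ch_names, ch_types=['grad'] * 157, sfreq=sampling_freq)

# The real data would be the 157 MEG columns of the .mat file, transposed so
# the shape is (n_channels, n_samples) as MNE expects. Random data here.
data = np.random.randn(157, 10 * sampling_freq)
raw = mne.io.RawArray(data, info)
print(raw)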

To my knowledge it’s not like EEG, where there are standard sensor arrangements like 10-20. There are far fewer MEG manufacturers and, as far as I know, each has its own helmet configuration. You should be able to get the positions given the manufacturer, but I don’t think there is any way to find standard positions without that information. You could look through the sample datasets and see which systems have that many sensors, but that’s not an assumption I would necessarily make.