I’m getting confused about sensor location. Take this example:
from mne.datasets import testing
from mne.io import read_raw_fif
directory = testing.data_path() / "MEG" / "sample"
fname = directory / "sample_audvis_trunc_raw.fif"
raw = read_raw_fif(fname, preload=False)
raw.pick_types(meg=True, eeg=True, eog=True) # 365 channels
The montage set on this dataset, raw.info.get_montage(), reads as <DigMontage | 78 extras (headshape), 4 HPIs, 3 fiducials, 59 channels>. Where is the information about the MEG sensor location stored if it’s not in a montage?
Since I can plot topographic maps from this data for the grad or mag sensors (which results in a skirt outline), information about the sensor locations must be present somewhere, right?
EDIT: I’m guessing raw.info["chs"][...]["loc"] but why hide it there instead of exposing it as part of the montage API?
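For concreteness, this is roughly how I'm poking around (picking the first MEG channel arbitrarily; reading the 12-element array as 4 rows of 3 is only my guess):

import numpy as np
from mne import pick_types

meg_picks = pick_types(raw.info, meg=True)    # indices of the MEG channels
loc = raw.info["chs"][meg_picks[0]]["loc"]    # 12 values per channel
print(np.asarray(loc).reshape(4, 3))          # position plus three more rows?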
There is no make_meg_layout() counterpart to make_eeg_layout() because sensor locations are fixed in an MEG system (unlike in EEG, where sensor caps deform to fit snugly on a specific head).
I suspect (but am not certain) that MNE automatically loads the appropriate built-in layout for a given MEG manufacturer via the read_layout() function upon loading MEG data.
Edit: Layouts are for 2D representations of sensors only. So I don’t think my response actually answers your question, which was about 3D coordinates. Again, I assume that these are just built-in values, or are directly stored in the MEG files by the recording system, from which MNE extracts the values upon reading.
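If you want to see which 2D layout gets picked for your data, I think something along these lines works (find_layout() infers the layout from the channel info, so the exact layout it returns is just my expectation here):

from mne.channels import find_layout

layout = find_layout(raw.info, ch_type="grad")  # should infer the Neuromag Vectorview layout for this data
print(layout.names[:5])                         # sensor names in the layout
layout.plot()                                   # 2D sensor positions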
Montages contain sensor positions in 3D (x, y, z in meters), which can be assigned to existing EEG/MEG data.
So a montage contains the location of a sensor as (x, y, z) coordinates. And MEG is mentioned here…
But actually, the channel location is stored in raw.info["chs"][idx]["loc"] (at least that’s what it looks like to me). The description in the docstring of mne.Info gives some information:
Channel location. For MEG this is the position plus the normal given by a 3x3 rotation matrix. For EEG this is the position followed by reference position (with 6 unused). The values are specified in device coordinates for MEG and in head coordinates for EEG channels, respectively.
That’s more or less the information I was looking for initially.
But I get that sensor positions are fixed for an MEG device, so you don't really need an API that exposes those locations to users the way montages do for EEG.
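For anyone else landing here, this is how I now read those 12 values; the slicing below follows my interpretation of the docstring quoted above, so treat it as an assumption rather than official API:

from mne import pick_types

meg_idx = pick_types(raw.info, meg=True)[0]
eeg_idx = pick_types(raw.info, meg=False, eeg=True)[0]

meg_loc = raw.info["chs"][meg_idx]["loc"]
print(meg_loc[:3])                  # MEG sensor position (device coordinates)
print(meg_loc[3:].reshape(3, 3))    # coil orientation as a 3x3 rotation matrix

eeg_loc = raw.info["chs"][eeg_idx]["loc"]
print(eeg_loc[:3])                  # electrode position (head coordinates)
print(eeg_loc[3:6])                 # reference electrode position (rest unused)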
An MEG sensor is not a point sensor (it requires defining integration points); this is determined by the coil type.
As the helmet is rigid, the projection to 2D depends only on the device, not on the data. The concept of a layout (taken from FieldTrip) provides 2D sensor locations, plus a box in which to draw a line plot or a time-frequency plane.
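If it helps to see that box concretely, each row of a Layout's pos array is, as far as I can tell, (x, y, width, height) in normalized figure coordinates; for example, with a built-in Neuromag layout:

from mne.channels import read_layout

layout = read_layout("Vectorview-grad")   # built-in 2D layout for Neuromag gradiometers
print(layout.pos.shape)                   # (n_sensors, 4): x, y, width, height
print(layout.pos[0])                      # box for the first sensor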