MEG data analysis at the group level

  • MNE version: 1.5.1
  • operating system: macOS Sonoma 14.3

Dear all,

I am currently doing data analysis in MNE-Python. While following the tutorial “Spatiotemporal permutation F-test on full sensor data”, I ran into an issue with group-level analysis.

In my script, I save all subjects’ epochs in a list in order to do the group-level analysis. But when I try to concatenate the epochs with the mne.concatenate_epochs function, Python raises an error: “ValueError: epochs[1].info[‘dev_head_t’] differs. The instances probably come from different runs, and are therefore associated with different head positions. Manually change info[‘dev_head_t’] to avoid this message but beware that this means the MEG sensors will not be properly spatially aligned. See mne.preprocessing.maxwell_filter to realign the runs to a common head position”.

Following a ChatGPT suggestion, I set dev_head_t to an identity matrix. But in the subsequent analysis, I failed to get a significant result. I am therefore wondering whether my result is influenced by this setting, since the head position information is changed and no longer accurate after it. But I don’t know how to analyze the group-level data without doing this. The tutorial only covers single-subject analysis, not group-level analysis.

Does anyone know of another template for group-level analysis? I am quite confused about this.

I would really appreciate your help.

Sincerely,

Could you tell us more about the experiment, the data that you have, and the type of analysis you wish to do exactly?

The tutorial only covers single-subject analysis, not group-level analysis.

Quote from the tutorial:

Here, the unit of observation is epochs from a specific study subject. However, the same logic applies when the unit of observation is a number of study subjects, each of whom contributes their own averaged data (i.e., an average of their epochs). This would then be considered an analysis at the “2nd level”.

So perhaps the confusion is that you should be averaging the epochs from each subject, not concatenating them, so in the end you have a list of Evoked objects.

Of course, the issue of head position remains. The way to deal with that is to use maxwell_filter to virtually move the head position of each subject to a common position. See the destination parameter of the function. You could, for example, move the virtual head position of each subject to match that of the first subject.


Dear Marijn,

Thanks for your reply! My experiment is a within-subject design. Each subject viewed four categories of pictures and made a judgement. I hope to compare subjects’ responses to these four categories. I have now collected data from 12 subjects and want to see whether there is a significant difference among the four categories at the group level. I wish to run cluster-based permutation tests on the ERF time courses from all sensors. I am not sure whether I have followed the correct tutorial, because I am really a beginner in Python and data analysis :smiling_face_with_tear:.

Thanks a lot for your help, and I look forward to your reply.

Sincerely,
Xindong