Channel adjacency for EEG data

I was running spatio_temporal_cluster_1samp_test on my EEG data and noticed differences in results with and without adjacency.

Initially I used this code to compute the adjacency, since my montage isn’t in the get_builtin_ch_adjacencies() list (I work with the actiCAP 10-20 64-electrode system):

adjacency, ch_names = find_ch_adjacency(eeg_info.info, ch_type="eeg")
adjacency = combine_adjacency(adjacency, n_times)

Then I used this function:

F_obs, clusters, cluster_pv, H0 = spatio_temporal_cluster_1samp_test(
    X, n_permutations=1000, adjacency=adjacency, threshold=None, tail=0,
    out_type="mask", n_jobs=1)

After plotting the results following this tutorial, I got this picture. As you can see, the majority of electrodes are grouped into a single cluster, which is rather odd.

After changing the adjacency (without changing the plotting function) to

adjacency = None

I got a different picture. The number of electrodes in the cluster decreased noticeably. And honestly, this result seems more realistic than almost the whole head being significant.

Now I wonder whether the adjacency should be computed explicitly here, or set to None or even False. As I understand it, this parameter is more common in MEG studies, so the approach might differ in my case.

P.S. The data itself is fine; this situation recurs across several of my EEG projects.

Hi Alexandra,

Welcome to the forum.

I assume your X has shape N_patients x N_timepoints x N_channels, that you added the correct channel montage for your EEG data, and that you visualised the sensor layout to make sure it is correct.

If you want to define clusters over time and space in your cluster-permutation test, you only need to set the spatial adjacency explicitly:

adjacency_spatial, _ = find_ch_adjacency(data.info, ch_type="eeg")

F_obs, clusters, cluster_pv, H0 = spatio_temporal_cluster_1samp_test(
    X, n_permutations=1000, adjacency=adjacency_spatial, threshold=None, tail=0,
    out_type="mask", n_jobs=1)

The adjacency over timepoints is added automatically (there is a Note in this tutorial that explains the adjacency defaults for 2D and 3D data).
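To make the automatic expansion over time more concrete, here is a minimal sketch (using plain scipy rather than MNE, with a made-up 3-channel adjacency) of how a spatial adjacency combines with a chain adjacency over timepoints into one big spatio-temporal adjacency:

```python
import numpy as np
from scipy.sparse import coo_matrix, eye, kron

# Toy spatial adjacency for 3 channels: ch0-ch1 and ch1-ch2 are neighbours
# (in practice this matrix comes from find_ch_adjacency).
rows = np.array([0, 1, 1, 2])
cols = np.array([1, 0, 2, 1])
adj_spatial = coo_matrix((np.ones(4), (rows, cols)), shape=(3, 3))

# Chain adjacency over timepoints: t and t+1 are neighbours.
n_times = 4
idx = np.arange(n_times - 1)
upper = coo_matrix((np.ones(n_times - 1), (idx, idx + 1)),
                   shape=(n_times, n_times))
adj_time = upper + upper.T

# Full spatio-temporal adjacency: the same channel at neighbouring
# timepoints, plus neighbouring channels at the same timepoint
# (time-major ordering, matching X of shape (n_obs, n_times, n_channels)).
adj_full = kron(adj_time, eye(3)) + kron(eye(n_times), adj_spatial)
```

This is only an illustration of the principle; in MNE you simply pass the channel adjacency and the time dimension is handled for you.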

If you set adjacency to None, a regular lattice adjacency is used instead: a simple grid that connects neighbouring timepoints and neighbouring channel indices. Note that for channels this means neighbours in list order, not necessarily sensors that physically sit next to each other on the head.

If you have the channel layout for your EEG data, I recommend setting up adjacency based on this information.
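For intuition about what a layout-based adjacency is, here is a small sketch that derives neighbours from 2-D sensor positions via Delaunay triangulation. This is similar in spirit to what find_ch_adjacency computes from a montage, so you normally don’t need to do this by hand; the positions below are made up:

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.spatial import Delaunay

def adjacency_from_positions(pos):
    """Binary adjacency: two sensors count as neighbours if they share
    an edge in the Delaunay triangulation of their 2-D positions."""
    tri = Delaunay(pos)
    rows, cols = [], []
    for simplex in tri.simplices:  # each simplex is one triangle
        for i in range(3):
            for j in range(3):
                if i != j:
                    rows.append(simplex[i])
                    cols.append(simplex[j])
    n = len(pos)
    adj = coo_matrix((np.ones(len(rows)), (rows, cols)),
                     shape=(n, n)).tocsr()
    adj.data[:] = 1.0  # collapse duplicate edges shared by two triangles
    return adj

# Four made-up 2-D sensor positions forming a convex quadrilateral
pos = np.array([[0.0, 0.0], [1.0, 0.0], [1.1, 1.0], [-0.1, 0.9]])
adj = adjacency_from_positions(pos)
```

The result is a symmetric sparse matrix of the kind the cluster functions expect for the channel dimension.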

Setting the right adjacency for cluster-permutation tests is very important, independent of the data type, as it defines which points in your data form a cluster.

The tutorial on statistics has a section (admittedly very short) on adjacency in cluster-permutation tests.

Let me know if that helped.

Cheers,

Carina

Hi Carina!

Thank you so much for your answer!

You assumed correctly about the shape of X, it is indeed N_patients x N_timepoints x N_channels. I used this function for the montage:

 mne.channels.make_standard_montage("standard_1020")

My sensors’ layout looks like this:

I changed the adjacency as you advised:

adjacency_spatial, _ = find_ch_adjacency(analysis_info, ch_type="eeg")

The results were basically the same in terms of the number of significant electrodes. I am not sure whether this is normal.

Probably I’d be better off computing the adjacency based on the electrode layout, but I lack an understanding of this process and sadly can’t find a tutorial for it. I would be extremely grateful if you could guide me or share a tutorial I can use.

Sincerely,

Alexandra

Hi Alex,

Does your EEG sensor layout match the plotted layout?

It’s difficult to help with the stats if we don’t know what the experiment was about, meaning what we are expecting to see.

Could you share more details on the experimental paradigm and what you are testing, so we can provide better feedback?

Maybe also a quick summary of the preprocessing you have done would be helpful.

Cheers,

Carina

Hi Carina,

Yes, the electrode layout matches the plotted one.

So, the experiment was a classic flanker task, but we also measured some behavioural metrics that potentially influence the ERP in this task. For preprocessing, we filtered the data with a band-pass filter (1-40 Hz) and a 50 Hz notch filter. We ran ICA (excluding eye-related and muscle artefacts), interpolated bad channels, and then dropped noisy epochs.

The original ERP waves looked like this (the 1st topomap is for the congruent condition, the 2nd is for the incongruent condition, and the 3rd is for the difference between them):

As we expect the behavioural metrics to influence the ERP wave, we ran ERP regression models using Unfold.jl (one model for each participant as a first-level analysis). Then we gathered all the beta estimates for all timepoints, electrodes, conditions and predictors and subjected them to permutations (as a second-level analysis).

Here are the plots for the beta estimates for the incongruent condition (congruent was the reference in the model):

As you can see, the topomap for the beta estimates mirrors the difference topomap. And based on all these pictures, I did not expect to see the clusters at ~500 ms and near 700 ms (you can see them in my previous posts) to have almost all electrodes being significant.

Though I do understand it is only an impression of uniform significance across the cluster, when in fact different sensors contributed different portions of the effect, it is still not what I expected, and I am trying to understand whether it is due to a mistake on my part.

Sincerely,

Alexandra

Hi Alex,

Thank you for the detailed description. The data looks good, and the fact that the timings of the cluster match the difference we can see in the ERPs is reassuring.

The threshold you set in the cluster-permutation function determines the extent of the clusters. You set the threshold to None, which means it is determined automatically from your N (number of patients), assuming a t-statistic. I have never worked with regression beta values in a cluster-permutation test, so I am not sure whether this is the right way to do it.

Here is a Note on how the threshold can be manually calculated and adjusted.
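As a rough sketch of what that Note describes, the default cluster-forming threshold corresponds to a two-tailed t cutoff at alpha = 0.05 with df = N - 1; you can compute a stricter one yourself (n_subjects and alpha below are hypothetical, substitute your own):

```python
import scipy.stats

n_subjects = 25   # hypothetical N; substitute your own
alpha = 0.001     # stricter than the 0.05 that threshold=None uses
df = n_subjects - 1

# Two-tailed cluster-forming threshold on the t-statistic
# (this mirrors what threshold=None computes, just with a custom alpha)
t_threshold = scipy.stats.t.ppf(1 - alpha / 2, df=df)
```

Passing this value as threshold= to spatio_temporal_cluster_1samp_test raises the bar for entering a cluster, which typically yields smaller, more focal clusters.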

I suggest first thinking about what would be a sensible threshold for your data/beta estimates.

Later on, you could try threshold-free cluster enhancement (TFCE) if you don’t want to set a manual threshold, but be aware that this is computationally much heavier.
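In MNE, TFCE is requested by passing a dict (rather than a scalar) as the threshold; the start/step values below are only illustrative, and the commented call reuses variable names from the earlier snippets:

```python
# TFCE is requested by passing a dict instead of a scalar threshold;
# start/step values here are illustrative (a smaller step is more
# accurate but slower).
tfce_threshold = dict(start=0, step=0.2)

# Hypothetical call, reusing variables from the earlier snippets:
# F_obs, clusters, cluster_pv, H0 = spatio_temporal_cluster_1samp_test(
#     X, n_permutations=1000, adjacency=adjacency_spatial,
#     threshold=tfce_threshold, tail=0, n_jobs=1)
```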

I hope this helps,

Carina