ValueError: connectivity must be of the correct size

Hello MNE list I hope you are well,

I have been using the spatio-temporal cluster analysis of MEG sources found on the MNE-Python examples page, but have had no success in getting significant clusters for within- or between-groups differences for any of my conditions -- even with a p_threshold of .05. I have tried almost everything I can think of to get significant clusters, but won't bore you with the details; it was getting pretty depressing.

From looking at each of my subjects individually, I think it might have something to do with inter-subject variability of the already diffuse source distributions. My MEG experiments involved visual stimulation that spanned quite widely across the visual field (surround-masking), and now I think I'm paying for that through multiple comparisons overkill!

Anyway, I felt hope anew when I recently encountered the hack Denis Engemann created to do the same type of cluster analysis, only within labels. This should reduce my multiple comparisons problem substantially, yeah?

In running this analysis, I have successfully managed to read in each of my subjects' labels, attach them to each subject's source/stc file, and then morph them to the average brain I made in MNE-C. While it reads in all my data fine, things go awry, however, once I run the permutations...

This is the line of code that my label based cluster analysis hates:

T_obs, clusters, cluster_p_values, H0 = clu = \
    spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=2,
                                 threshold=f_threshold,
                                 n_permutations=n_permutations, tail=1)

After a minute or 2 of running the permutations without hassle, it spits out the following complaint:

    707     if connectivity is not None:
--> 708         connectivity = _setup_connectivity(connectivity, n_tests, n_times)
    709
    710     if (exclude is not None) and not exclude.size == n_tests:

/mne/stats/cluster_level.pyc in _setup_connectivity(connectivity, n_vertices, n_times)
    520     else:  # use temporal adjacency algorithm
    521         if not round(n_vertices / float(connectivity.shape[0])) == n_times:
--> 522             raise ValueError('connectivity must be of the correct size')
    523         # we claim to only use upper triangular part... not true here
    524         connectivity = (connectivity + connectivity.transpose()).tocsr()

ValueError: connectivity must be of the correct size

From my limited understanding, I figure that one or more of my labels are broken somehow. I don't understand what is meant by 'connectivity must be of the correct size', or what I need to do to my subjects' labels in order to run the permutations successfully.

I really wouldn't know where to begin with overcoming this issue, and would be very grateful for any clarification on what is meant by 'connectivity must be of the correct size' and what I need to do to fix it.

Thank you in advance,

Nikki

Sorry for getting back to you so late, Nikki,

I've been traveling lately and this one somehow slipped through my
attention networks unnoticed.

> Hello MNE list I hope you are well,
>
> I have been using the spatio-temporal cluster analysis of MEG sources
> found on the MNE-Python examples page, but have had no success in getting
> significant clusters for within- or between-groups differences for any of
> my conditions -- even with a p_threshold of .05. I have tried almost
> everything I can think of to get significant clusters, but won't bore you
> with the details; it was getting pretty depressing.

Did you, in the first place, look at the uncorrected T/F (whatever)
statistical map? Can you see something that looks like signal? Have you
tried sensor-space analyses?
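
For instance, something along these lines lets you eyeball the uncorrected
F map before any clustering (just a sketch; it assumes X1 and X2 are your
two groups as (n_subjects, n_times, n_vertices) arrays morphed to a
grade-5, fsaverage-style source space, and that tstep and subjects_dir come
from your own setup):

import numpy as np
import mne

# Uncorrected F value at every (time, vertex) point, comparing the two groups.
F = mne.stats.f_oneway(X1, X2)            # shape: (n_times, n_vertices)

# Wrap it in a SourceEstimate so it can be viewed on the average brain
# (grade 5 -> 10242 vertices per hemisphere, 20484 in total).
vertices = [np.arange(10242), np.arange(10242)]
stc_F = mne.SourceEstimate(F.T, vertices=vertices, tmin=0.0, tstep=tstep,
                           subject='fsaverage')
stc_F.plot(hemi='split', subjects_dir=subjects_dir)  # anything that looks like signal?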

> From looking at each of my subjects individually, I think it might have
> something to do with inter-subject variability of the already diffuse
> source distributions. My MEG experiments involved visual stimulation that
> spanned quite widely across the visual field (surround-masking), and now
> I think I'm paying for that through multiple comparisons overkill!

Did you consider analyzing the subjects individually to see what kind of
effects you get there?
Did you consider a multivariate pattern analysis (e.g. decoding)? It can
help neutralize between-subject variability.
Btw, which kind of inverse solution do you use, and what does your general
pipeline look like?
Did you check the intermediate steps of your analysis -- correctness of
coregistration, artefact rejection, etc.? Have you looked at the whitening
of the covariance, e.g. evoked.plot_white(cov), to detect scaling issues?
Are you sure you found all bad channels? Etc., etc.
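
To make the whitening check concrete, this is roughly all it takes (the
file names here are made up; read_evokeds, read_cov and plot_white are the
actual MNE calls):

import mne

evoked = mne.read_evokeds('subj01-ave.fif', condition=0)  # hypothetical file name
noise_cov = mne.read_cov('subj01-cov.fif')                 # hypothetical file name

# The whitened global field power should sit around 1 in the baseline;
# big deviations usually mean scaling, covariance or bad-channel problems.
evoked.plot_white(noise_cov)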

> Anyway, I felt hope anew when I recently encountered the hack Denis
> Engemann created to do the same type of cluster analysis, only within
> labels. This should reduce my multiple comparisons problem substantially,
> yeah?

I am not sure you have a multiple comparisons problem (MCP). The
clustering permutation test already reduces your MCP, as you have
cluster-wise hypotheses, not voxel/vertex-wise ones. It sounds like you
have a problem with SNR and between-subjects variability. Maybe some of
your data processing is broken. MEG is not so forgiving of early errors in
your processing chain ...

plot_cluster_stats_spatio_temporal_tris.py · GitHub
<https://gist.github.com/dengemann/ea482183be869568412c>

> In running this analysis, I have successfully managed to read in each of
> my subjects' labels, attach them to each subject's source/stc file, and
> then morph them to the average brain I made in MNE-C. While it reads in
> all my data fine, things go awry, however, once I run the permutations...

> This is the line of code that my label based cluster analysis hates:
>
> T_obs, clusters, cluster_p_values, H0 = clu = \
>     spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=2,
>                                  threshold=f_threshold,
>                                  n_permutations=n_permutations, tail=1)

> After a minute or 2 of running the permutations without hassle, it spits
> out the following complaint:
>
>     707     if connectivity is not None:
> --> 708         connectivity = _setup_connectivity(connectivity, n_tests, n_times)
>     709
>     710     if (exclude is not None) and not exclude.size == n_tests:
>
> /mne/stats/cluster_level.pyc in _setup_connectivity(connectivity, n_vertices, n_times)
>     520     else:  # use temporal adjacency algorithm
>     521         if not round(n_vertices / float(connectivity.shape[0])) == n_times:
> --> 522             raise ValueError('connectivity must be of the correct size')
>     523         # we claim to only use upper triangular part... not true here
>     524         connectivity = (connectivity + connectivity.transpose()).tocsr()
>
> ValueError: connectivity must be of the correct size

This tells you that somewhere your data and the adjacency matrix don't
match: the number of nodes in the connectivity is not the number of
spatial features in your data.
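
For a quick sanity check, something like this (using whatever X and
connectivity you currently pass to spatio_temporal_cluster_test):

# X is a list with one array per group, each (n_observations, n_times, n_vertices).
n_times, n_vertices = X[0].shape[1], X[0].shape[2]
print(connectivity.shape)  # must be (n_vertices, n_vertices)
assert connectivity.shape[0] == n_vertices, \
    'spatial dimension of X and connectivity do not match'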

> From my limited understanding, I figure that one or more of my labels are
> broken somehow. I don't understand what is meant by 'connectivity must be
> of the correct size', or what I need to do to my subjects' labels in
> order to run the permutations successfully.

If you have, let's say, 100 vertices and 100 time points, your
connectivity would be 100 x 100 with the spatial connectivity trick (used
in the spatio_temporal_XXX functions).
What's the shape of your X?
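
If the goal is to run the test only within a label, one way to keep those
sizes consistent (just a sketch, not necessarily what the gist does; it
assumes X still holds the full 20484-vertex data, stc is one subject's
morphed source estimate, and label is the label on the average brain) is
to keep the full-brain connectivity and exclude the out-of-label vertices
via spatial_exclude:

import numpy as np
import mne
from mne.stats import spatio_temporal_cluster_test

# Full-brain spatial connectivity (20484 x 20484) for grade-5 data, as in the
# MNE example; mne.spatial_src_connectivity(src) on your average brain's source
# space is the alternative (newer MNE versions call these *_adjacency).
connectivity = mne.spatial_tris_connectivity(mne.grade_to_tris(5))

# Work out which rows of X fall inside the label. Rows follow the vertex
# order of the morphed STCs: left hemisphere first, then right.
lh_verts, rh_verts = stc.vertices          # stc = any one subject's morphed STC
if label.hemi == 'lh':
    in_label = np.nonzero(np.in1d(lh_verts, label.vertices))[0]
else:
    in_label = np.nonzero(np.in1d(rh_verts, label.vertices))[0] + len(lh_verts)
exclude = np.setdiff1d(np.arange(len(lh_verts) + len(rh_verts)), in_label)

# Shapes now stay consistent: X keeps all spatial features, connectivity
# matches them, and clustering simply ignores everything outside the label.
T_obs, clusters, cluster_p_values, H0 = clu = \
    spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=2,
                                 threshold=f_threshold, tail=1,
                                 n_permutations=n_permutations,
                                 spatial_exclude=exclude)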

> I really wouldn't know where to begin with overcoming this issue, and
> would be very grateful for any clarification on what is meant by
> 'connectivity must be of the correct size' and what I need to do to fix
> it.

Could you share some code? It would make it easier to help you. Send it
privately if you feel more comfortable like that. It would also be good to
learn more about your protocol / data, e.g. number of conditions and
trials, etc.

Cheers,
Denis

> Thank you in advance,
>
> Nikki
> ______________________________________________________
> Nicola Jastrzebski
>
> PhD candidate
> Brain and Psychological Sciences Research Centre (BPsyC)
>
> Swinburne University of Technology - Hawthorn campus
> ______________________________________________________


Hi Denis,

Thank you kindly for your responses to my questions; they have been very clarifying for me :)

I have to confess that I haven't yet analysed my data in sensor space, and admit now that I probably made an
unrealistically big leap in going straight to source analysis, given my lack of experience with signal processing methods and, above all, my incomplete understanding of source modelling. I'm still really new at this...

What I would like to do, however, is give this within-labels cluster analysis one more go before moving to sensor space -- I really like the idea of it. In terms of SNR issues, I tried to ensure (to the best of my ability at least) that my MEG/MRI co-registrations were as well aligned as possible, and I standardised/baseline-corrected the evokeds of each of my subjects so that they were all scaled in the same way.

Please find the code I have been using to run the source analysis below. It is a 2-samples permutation test (between groups), with the shape of X1 (group 1) being (20, 401, 20484) and X2 (group 2) being (13, 401, 20484), respectively:

X1 = None
X2 = None

n_subjects1 = len(list_of_list_of_stc_fnames[0])
n_subjects2 = len(list_of_list_of_stc_fnames[1])

for group in [0,1]:
        print
        group_names = list_of_list_of_stc_fnames[group] # Read in stc files
        for subject_num in range(len(group_names)):
                stc_fname = group_names[subject_num]
                stc = mne.read_source_estimate(data_root+stc_fname)
                stc.crop(0, None)
                tstep = stc.tstep
                print stc_fname
                subj_fname = stc_fname[:5]+'Fs'

                if os.path.isfile(subj_fname): # Read in labels
                    label = mne.read_labels_from_annot(subj_fname, parc='aparc',
                                                       subjects_dir=subjects_dir,
                                                       regexp=aparc_label_name)[0]

                label.values.fill(1.0)

                if os.path.isfile(subj_fname): # Morph labels to average brain
                    label = label.morph(subj_fname, subject_to=subject_to,
                                        smooth=10, grade=5, subjects_dir=subjects_dir)
                stc.in_label(label=label)

                if X1 is None: # Make empty array for data to live in

                    X1 = np.zeros( (stc.data.shape[0], stc.data.shape[1], n_subjects1) )
                    X2 = np.zeros( (stc.data.shape[0], stc.data.shape[1], n_subjects2) )

                if group == 0:
                    X1[:,:,subject_num] = stc.data
                else:
                    X2[:,:,subject_num] = stc.data

X1 = np.abs(X1) # only magnitude
X2 = np.abs(X2)