Identifying adjacent sources

Hello,

Is there a straightforward method to figure out, for any cortical source or
vertex, what the adjacent cortical sources/vertices are?

More specifically, I have extracted .amp timecourse files for each
participant and condition that look something like this:
    100779 9.76119e-13 9.8006e-13 1.01097e-12 1.0584e-12
    102810 5.77402e-13 4.74985e-13 6.13183e-13 8.53066e-13
    103338 7.69642e-13 1.13359e-12 1.48948e-12 1.78157e-12
    103374 6.44688e-13 8.72736e-13 1.09166e-12 1.24282e-12
where the first number in each row is the vertex that timecourse comes from
(and the following columns are the current estimates at each timepoint).
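A file in that format is plain whitespace-separated text, so it could be read with something like the following sketch (the rows are the four example lines above, read from a string here just so the snippet is self-contained; in practice you would pass the .amp file path to `np.loadtxt`):

```python
import io
import numpy as np

# The four example rows above, as they appear in an .amp-style file:
# first column = vertex number, remaining columns = current estimates
# at each timepoint.
amp_text = """\
100779 9.76119e-13 9.8006e-13 1.01097e-12 1.0584e-12
102810 5.77402e-13 4.74985e-13 6.13183e-13 8.53066e-13
103338 7.69642e-13 1.13359e-12 1.48948e-12 1.78157e-12
103374 6.44688e-13 8.72736e-13 1.09166e-12 1.24282e-12
"""

data = np.loadtxt(io.StringIO(amp_text))
vertices = data[:, 0].astype(int)  # vertex numbers, one per row
timecourses = data[:, 1:]          # shape (n_sources, n_timepoints)
```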
I'm interested in doing a permutation-based spatiotemporal clustering test,
so what I'd like to be able to do is, for each of those vertices, to get a
list of which sources are adjacent to it. (Not necessarily which vertices
are adjacent to it, since my understanding is that the cortical sources
represent just a subset of the vertices present in the original
tessellation.) I assume that after triangulation there must be some record
of this, or some way to find it out, but I can't figure out where to find
it. By the way, my structural images were segmented in Freesurfer, so the
triangulations are in .surf files rather than .tri files.

Any advice is greatly appreciated! Thank you,
Steve Politzer-Ahles

Hi Steve,

I have a function for this: it accepts a .surf mesh, a centre vertex
number, and a 'distance' d, and it returns a list of vertices d steps
away from the centre vertex (so distance 1 returns a list of every
vertex adjacent to the centre vertex, distance 2 returns the vertices that
form a ring around those adjacent vertices, etc.).

I can't remember if it works on the full mesh or lets you search through a
downsampled mesh (i.e. just the sources you have estimated).

I'll check and send it to the list on Monday.

Andy

hi Stephen,

if you use an ico or oct source space you have the triangles for the source
space from which you can extract the adjacency matrix.

See:

http://martinos.org/mne/auto_examples/plot_read_forward.html#example-plot-read-forward-py

for how to access use_tris in Python (it should be the same in Matlab),

and the mesh_edges function:

https://github.com/mne-tools/mne-python/blob/master/mne/source_estimate.py#L1419
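A rough sketch of the idea behind mesh_edges (building a sparse vertex-adjacency matrix from a triangle array) is below. The function name `mesh_adjacency` and the toy triangle array are illustrative, standing in for the real `use_tris` you would pull out of the forward solution as in the linked example:

```python
import numpy as np
from scipy.sparse import coo_matrix

def mesh_adjacency(tris, n_vertices):
    """Sparse vertex-adjacency matrix from an (n_tris, 3) triangle array.

    Roughly what mne's mesh_edges does: each triangle contributes its
    three edges, and the matrix is symmetrized."""
    a = tris[:, [0, 1, 2]].ravel()
    b = tris[:, [1, 2, 0]].ravel()
    ones = np.ones(len(a))
    adj = coo_matrix((ones, (a, b)), shape=(n_vertices, n_vertices))
    adj = adj + adj.T      # add the reverse direction of every edge
    adj.data[:] = 1        # binarize: shared edges would otherwise count twice
    return adj.tocsr()

# Toy mesh: two triangles sharing the edge (1, 2)
tris = np.array([[0, 1, 2], [1, 3, 2]])
adj = mesh_adjacency(tris, 4)
neighbours_of_1 = adj[1].nonzero()[1]  # column indices of nonzero entries
```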

HTH
Alex

Hi Alexandre,

Thanks for the information. I think I've figured out how to get the source
space with triangulation now (after playing around, I found that in
addition to using the --ico option when running *mne_setup_forward_model*,
when running *mne_do_forward_solution* I also had to use the --src option
to pick out the "<subject>-ico-4-src.fif" source space rather than whatever
it was picking out by default). But now I have two related questions about
my workflow:

Firstly, I am actually computing my source estimates (.stc files) morphed
to one brain. So does that mean that, when figuring out the vertex info
using these functions, I should be using the forward solution for the brain
to which everyone else's was morphed, rather than using each individual
subject's forward solution?

And a related question: so far I have mostly been doing source estimates
morphed to an average brain (using *make_average_subject*, then
*mne_make_morph_maps*, then using the --morph option with *mne_make_movie*).
But I never set up a forward model for the averaged brain (i.e. I didn't
run mne_setup_forward_model or mne_do_forward_solution on that brain); is
that the correct way to do it? Obviously if I didn't set up a forward model
for the averaged brain then I can't use mne.read_forward_solution to get
out the triangles in Python...but I suppose I could just re-do the analysis
and morph the data to some random subject who *does* have a forward model
set up, rather than the average brain, and then I'd be able to do that.

Thanks for your help!
Best,
Steve

hi Steve,

> Thanks for the information. I think I've figured out how to get the source
> space with triangulation now (after playing around, I found that in addition
> to using the --ico option when running mne_setup_forward_model, when running
> mne_do_forward_solution I also had to use the --src option to pick out the
> "<subject>-ico-4-src.fif" source space rather than whatever it was picking
> out by default). But now I have two related questions about my workflow:

> Firstly, I am actually computing my source estimates (.stc files) morphed to
> one brain. So does that mean that, when figuring out the vertex info using
> these functions, I should be using the forward solution for the brain to
> which everyone else's was morphed, rather than using each individual
> subject's forward solution?

Yes, exactly.

> And a related question: so far I have mostly been doing source estimates
> morphed to an average brain (using make_average_subject, then
> mne_make_morph_maps, then using the --morph option with mne_make_movie). But
> I never set up a forward model for the averaged brain (i.e. I didn't run
> mne_setup_forward_model or mne_do_forward_solution on that brain); is that
> the correct way to do it? Obviously if I didn't set up a forward model for
> the averaged brain then I can't use mne.read_forward_solution to get out the
> triangles in Python...

Very good questions. I usually use fsaverage, but I guess the answer is yes,
you should set up a source space for the average brain.

HTH
Alex

Hi Steve,

I wrote the attached scripts a couple of years ago for a colleague, and looking at them today I can't work out what I was doing. But I've tried them just now and they seem to work; still, it's probably best to check them thoroughly yourself if you want to use them.

The script 'SetupGetAdjacent.m' makes a hashtable containing all the adjacent vertices from a specified surf file (normally in your subjects directory).

At the bottom of this script a function is called ('getadjacent.m'). This function accepts the central vertex label (as a number), an integer i (a distance/'radius' from the central vertex, measured in vertices), and the hashtable you have just created. The function gives out a) a list of the vertices ringing the centre vertex at a radius i and b) a list of the vertices *inside* this ring.

So if you ask for getadjacent('4756', 1, 'myfile.surf') you get out:

adjacents = the 6 neighbouring vertices to the central vertex

passed = the value of the central vertex itself

if you ask for getadjacent('4756', 2, 'myfile.surf') you get:

adjacents = the 12 vertices forming a ring around the central vertex with a radius of 2

passed = all the vertices inside this ring.

etc.
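This is not the attached Matlab script itself, but a minimal Python sketch of the ring/'passed' behaviour described above, doing a breadth-first search on an adjacency list built directly from the triangle list rather than from a precomputed hashtable (the function name and toy mesh are illustrative):

```python
from collections import defaultdict

def get_adjacent(tris, centre, radius):
    """Return (ring, passed): vertices exactly `radius` steps from
    `centre` on the mesh, and all vertices strictly inside that ring."""
    # adjacency list from the triangle list
    neighbours = defaultdict(set)
    for a, b, c in tris:
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    # breadth-first search, growing one ring at a time
    seen = {centre}
    ring = {centre}
    for _ in range(radius):
        nxt = set()
        for v in ring:
            nxt.update(neighbours[v])
        ring = nxt - seen
        seen |= ring
    return sorted(ring), sorted(seen - ring)

# Toy mesh: a fan of four triangles around vertex 0
tris = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1)]
ring, passed = get_adjacent(tris, centre=0, radius=1)
# ring  -> the neighbouring vertices of 0
# passed -> just the central vertex itself
```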

I should be clear that, unfortunately, this is not done on a downsampled, 'morphgraded' mesh (i.e. 642, 2562, or 10242 vertices if you choose '--morphgrade 3, 4 or 5', etc.), which is what you were asking for. However, due to the way the surf vertices are labelled, one can get an 'adjacents' list or a 'passed' list that *only* contains vertices in the 'morphgraded' mesh by simply deleting any vertices from 'adjacents' or 'passed' whose label is higher than 642 (or 2562, 10242, or whatever the size of your morphgraded source mesh is). So if you used --morphgrade 5 for your estimation and selected a radius of 10 for getadjacent(), then deleting any vertices in the resulting 'passed' list that are greater than 10242 will leave you with only 'passed' vertices you have estimation values for.
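The deletion trick described above is one line with numpy. The vertex numbers below are just illustrative, and whether the comparison should be strict or not depends on whether your vertex labels are 0- or 1-based (the wording above, "greater than 10242", suggests 1-based labels):

```python
import numpy as np

# Vertices returned by a neighbourhood search on the full surf mesh
# (illustrative values only)
passed = np.array([512, 10241, 10242, 87345, 120003])

# Keep only vertices present in a --morphgrade 5 source mesh
# (10242 vertices per hemisphere); use < instead of <= for 0-based labels
n_src = 10242
passed_in_src = passed[passed <= n_src]
```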

Hope this is helpful,

a