Causal spectral connectivity measures (per #8691)


Implement Granger-causal connectivity measures such as DTF and (g)PDC in the mne.connectivity.spectral_connectivity submodule.

See original ticket here:

Causal spectral connectivity measures, e.g. DTF, (g)PDC


For (g)PDC, getting close to the reference implementation would require implementing multivariate autoregressive (MVAR) modeling in mne, as one of the mode options in the spectral_connectivity submodule.
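
To make the MVAR-to-PDC path concrete, here is a minimal numpy sketch (not MNE or SCoT code, and the function name `pdc` is my own) that evaluates partial directed coherence directly from a set of fitted VAR lag matrices:

```python
import numpy as np

def pdc(coefs, n_freqs=64):
    """Partial directed coherence from VAR coefficients.

    coefs : array, shape (p, n, n) with lag matrices A_1..A_p.
    Returns shape (n, n, n_freqs); entry [i, j, f] is the PDC from
    signal j to signal i at normalized frequency f.
    """
    p, n, _ = coefs.shape
    freqs = np.linspace(0, 0.5, n_freqs)
    lags = np.arange(1, p + 1)
    # A(f) = I - sum_k A_k * exp(-2*pi*i*f*k)
    expo = np.exp(-2j * np.pi * freqs[:, None] * lags[None, :])   # (F, p)
    Af = np.eye(n)[None] - np.einsum('fk,kij->fij', expo, coefs)  # (F, n, n)
    # PDC normalizes each entry by the norm of its column of A(f)
    denom = np.sqrt((np.abs(Af) ** 2).sum(axis=1, keepdims=True))
    return np.moveaxis(np.abs(Af) / denom, 0, -1)                 # (n, n, F)

# toy VAR(1): signal 0 drives signal 1, not vice versa
A = np.array([[[0.5, 0.0],
               [0.4, 0.5]]])
P = pdc(A)
assert P[1, 0].max() > 0.1        # flow 0 -> 1 detected
assert np.allclose(P[0, 1], 0.0)  # no flow 1 -> 0
```

The asymmetry of the result (0 drives 1, but not the reverse) is exactly what distinguishes these measures from the non-directional coherence measures mne currently provides.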

There are three existing python packages (AFAIK) that implement these causal connectivity measures:

  1. Source Connectivity Toolbox (SCoT):
  2. Connectivipy:
  3. spectral_connectivity:

SCoT is probably the best contender for integration, given that it is both close to the original implementation (MVAR-based) and consistent with mne-python's data shape. However, connectivipy and spectral_connectivity both have desirable features that should be considered as well, if the authors of these packages are open to the idea. Per its project page, spectral_connectivity relies on the multitaper Fourier transform, separating the time-frequency transformation from the connectivity measures themselves. This differs from the reference PDC implementation, but a non-parametric connectivity measure could be very useful in certain situations.

Another consideration would be projecting fitted MVAR model coefficients into source space, prior to computing the causality measure. There are some advantages to doing this, largely related to computational load. I’m no expert in this field, but expect that there would be some debate as to whether this step is necessary. Given mne-python's existing features, it seems like a good fit, however.

Lastly, there’s the question of what to do with the causality measures once computed (e.g. visualization, statistical analysis), and whether such applications should also be integrated into mne. From my limited use, mne.viz connectivity plots are currently tailored to non-causal measures, i.e. where the direction of the connectivity is not a factor. The python packages listed above all have their own visualization submodules, but none implements any 3D visualization techniques. All-to-all connectivity measures, even in just one direction (i.e. the lower triangle of the connectivity matrix), are rather unwieldy. Below is a possible scenario where a causal connectivity measure is computed and the results visualized:

Example use case

  1. M/EEG recordings are made (task or resting state) and a ROI picked.
  2. An MVAR model is fitted to the sensor data or its PCA projection, and then projected into source space.
  3. All-to-all PDC is computed for each estimated source.
  4. To visualize, the user specifies a source dipole, direction (inflow/outflow), number of traces, and the type of surface to display (pial, white, etc.).
  5. The resulting 3D visualization (volume) renders the desired connectivity measure between the ROI/source dipole and other dipoles in source space as color-coded tubes (similar to this example, but in source space).

Since mne-python is the de facto M/EEG analysis package in python, it makes more sense to me to try to implement this functionality according to your standards than to just get a working implementation and be done with it. Again, I’m no expert (I come from an ML background), but I intend to learn, and would appreciate any help/advice that y’all can offer.


References:

  1. Partial directed coherence: a new concept in neural structure determination
  2. Investigating causality between interacting brain areas with multivariate autoregressive models of MEG sensor data

I’m open to either making SCoT compatible with MNE (which includes adding several example scripts that demonstrate typical workflows) or integrating causal measures into MNE. The latter option will require much more effort, and I’m not sure if MNE devs want this integration. Therefore, adapting SCoT so that it plays nice with MNE is probably the more realistic option for now.

However, that’s not to say that adapting SCoT is trivial. It has been quite some time since I last looked at the source, and I’m fairly certain that a lot of things need to be refactored. I don’t have the time to do this myself, but I’d be happy to support the process.

I assume that making SCoT compatible with MNE will primarily mean:

  1. Pass Raw or Epochs objects as data inputs to a scot.Workspace object.
  2. Pass a mne.Info object to a workspace instead of separate arguments for locations, fs and the like.
  3. Have the ws.get_connectivity(...) method return a connectivity array similar to those returned by other mne.connectivity methods, except that both the lower and upper triangles of each frequency (or frequency-band) slice are populated, for inflow and outflow.
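
As a rough illustration of point 3, here is a hedged sketch of what such a directional connectivity array could look like; the shapes and the `con[i, j]` convention are assumptions for illustration, not an existing SCoT or MNE API:

```python
import numpy as np

# Assumed return shape, mirroring mne.connectivity conventions:
# (n_signals, n_signals, n_freqs), but with BOTH triangles populated
# because causal measures are directional: con[i, j] = influence of j on i.
rng = np.random.default_rng(0)
n_sig, n_freq = 4, 16
con = rng.random((n_sig, n_sig, n_freq))
idx = np.arange(n_sig)
con[idx, idx, :] = 0.0            # no self-connections on the diagonal

seed = 2
inflow = con[seed, :, :]          # everything flowing into `seed`
outflow = con[:, seed, :]         # everything flowing out of `seed`
assert not np.allclose(con, con.transpose(1, 0, 2))  # direction matters
```

For non-causal measures the array would be symmetric in its first two axes, so existing code that reads only one triangle would silently discard half the information here.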

I agree that integrating SCoT into mne.connectivity.spectral_connectivity, mne.viz and mne.stats would be an entirely different beast, and probably not a good short-term goal. Long-term, maybe, assuming there’s demand for it.

Regarding the 2nd reference I mentioned - projecting MVAR coefficients into source space before calculating connectivity - I would like to implement something at least functionally similar to your do_mvarica method in SCoT. If I’m reading the paper correctly, this would require the following inputs:

  1. Sensor data in PCA space
  2. Coefficient matrix for MVAR model(s) fitted on PCA data
  3. Forward operator (lead field)
  4. Inverse operator
  5. PCA components array & inverse

The reference paper does not describe projection of MVAR coefficients fitted on sensor data into source space, but I imagine it would be trivial compared to the PCA version.
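
For the sensor-data case, the projection can be sketched with plain linear algebra. This is my own reading, not something taken from the paper, and it assumes the inverse operator is applied as a plain matrix K: substituting x ≈ pinv(K) s into the VAR equations gives source-space coefficients K A_k pinv(K):

```python
import numpy as np

def project_var_coefs(coefs, K):
    """Project VAR lag matrices through a linear map s = K @ x.

    Sketch under the assumption that the inverse operator acts as a plain
    matrix K of shape (n_src, n_sens): substituting x ~= pinv(K) @ s into
    x_t = sum_k A_k x_{t-k} + e_t gives A_k_src = K @ A_k @ pinv(K).
    """
    Kp = np.linalg.pinv(K)
    return np.einsum('sm,pmn,nt->pst', K, coefs, Kp)

# sanity check: an orthonormal map is exactly invertible, so projecting
# there and back recovers the original coefficients
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = 0.1 * rng.standard_normal((2, 3, 3))
A_src = project_var_coefs(A, Q)
assert np.allclose(project_var_coefs(A_src, Q.T), A)
```

In the realistic case K is tall (many more sources than sensors), so pinv(K) only approximately inverts the map and the projected model is a low-rank approximation of the source dynamics.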

I’m using Freesurfer + mne-python to compute forward & inverse operators. If someone who understands the forward/inverse operator data structures in MNE would like to comment, I’ll probably need some help adapting them for use in SCoT.

This sounds like a good plan; it should be possible to come up with a working prototype fairly quickly. I’d also go with MVARICA, because it is already implemented and is better than naively decomposing the channel data with ICA (actually, both approaches plus CSPVARICA are implemented, so there could be an option to let people choose).

Maybe @mbillingr (the original author, who wrote most of the code) can chime in, especially on the question of how to project the sources obtained by MVARICA onto a cortex model?

I’d maybe rethink the workspace-based object-oriented approach; I think functional APIs are simpler and that’s also what MNE uses. You can use the underlying functions directly instead of going through the workspace.

Agreed on using a functional API instead of the workspace OO API; it’s more ‘pythonic’ and closer to MNE.

I hadn’t thought about doing MVAR -> ICA -> source space instead of just MVAR -> source space. I suspect my PI will want me to leave out ICA and stay as close to the reference paper as possible. I might end up doing both; it would be a good exercise after getting a working prototype.

On a related note, do you know if Dr. Michalareas (first author in paper #2) is involved in the mne-python community? I saw his name mentioned in the MNE-HCP project, but haven’t found much else. His input would be much appreciated too if he has the time.


I’ve been testing out SCoT and trying to think of ways to adapt the MVARICA results to paper #2 referenced above, specifically equations 11 and 14, to project the model coefficients into source space (in my case, a volume). I don’t think it’s as simple as including the mixing/unmixing matrices from ICA as part of the projection - that’s getting into unknown territory, although computationally it’s easy to do. Might be worth pursuing later.

In the short term, I need a scot.varica.mvar method that provides the MVAR coefficient and noise covariance matrices in sensor space (or PCA-space if that step was done), without doing the ICA step and correcting the coefficients. This should be trivial, but I’ll need some help verifying that it was done correctly. Then I can move on to native support for mne data objects in SCoT.
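
For reference, here is a minimal OLS sketch of what such a method could return, i.e. the lag coefficients plus the residual (noise) covariance. The function `fit_var` is my own plain least-squares illustration, not SCoT's actual implementation (which also supports regularized fitters):

```python
import numpy as np

def fit_var(x, p):
    """OLS fit of a VAR(p) model; returns (coefs, noise_cov).

    x : (n_samples, n_channels). Sketch of the quantities a
    scot.varica.mvar-style helper would need to expose: the MVAR
    coefficient matrices and the residual/noise covariance.
    """
    n_samples, n = x.shape
    # design matrix of lagged observations: [x_{t-1}, ..., x_{t-p}]
    X = np.hstack([x[p - k:n_samples - k] for k in range(1, p + 1)])
    Y = x[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)          # (p*n, n)
    coefs = B.T.reshape(n, p, n).transpose(1, 0, 2)    # (p, n, n)
    resid = Y - X @ B
    noise_cov = resid.T @ resid / (len(Y) - p * n)     # residual covariance
    return coefs, noise_cov

# recover a known, stable VAR(1) process from simulated data
rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.3],
                   [0.0, 0.4]])
x = np.zeros((2000, 2))
for t in range(1, 2000):
    x[t] = A_true @ x[t - 1] + 0.1 * rng.standard_normal(2)
coefs, noise_cov = fit_var(x, p=1)
assert np.allclose(coefs[0], A_true, atol=0.1)
```

Verifying the noise covariance against a simulation with known innovation variance, as above, is one way to build the unit tests mentioned later in this thread.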

Sounds good! I’m happy to help, but like I said I also need some time to re-familiarize myself with the code.

Thank you. I’ll mostly need help with the theoretical side. For starters, if you would be able to look at the scot.varica.mvar function I added under my fork of SCoT, and perhaps share it with your colleague so they can check, I’d appreciate it. The primary concern is whether the noise covariance matrix is accurate.

Sure, I’ll do that next week.


Hi there!

Quite a while ago I worked on integrating SCoT into MNE. Although this was never followed up you may find some useful pieces in the pull request: MVAR connectivity and example by mbillingr · Pull Request #10 · mne-tools/mne-sandbox · GitHub.

Adapting SCoT to be fully compatible with MNE is also the path I would choose. It is the easier option now, and it reduces friction later if integration into MNE becomes desirable.

I am not an expert on source projections either, so my view may be too simplistic.
The lead field is the projection of sources into sensor space, right?
PCA/ICA provide transformation matrices from sensor space into component space and back.
If the lead field and inverse are available as matrices, they can be combined into component<->source transformations.

However, I don’t think ICA is required if you already have a cortex model. These steps should be sufficient:

  1. Sensor data to source space
  2. PCA in source space (assuming source space is high dimensional)
  3. Fit MVAR model in PCA space
  4. Transform MVAR model to source space using PCA’s inverse

Hi, I’ve recently been learning connectivity too. For iEEG data, it would be nice to visualize the connections on the MNI brain in 3D.

Slight update:

Updated my fork of SCoT to include MVAR fitting methods: it currently supports lstsq (default) and vm (Vieira-Morf). I got permission from connectivipy’s author to do this; they seem okay with me adapting their code and offered to help.

Also added opt_einsum as a hard requirement for connectivity measures that previously used numpy.einsum operations. I haven’t finished quantifying how much faster this is, but qualitatively it’s pretty darn fast. Some quick & dirty testing suggests there’s no significant difference in the results, but will need some unit testing to be sure. I like the idea of being able to use Dask for distributed computing in the future, especially when working with large datasets.

Sorry I went off on a tangent, but lab needs necessitated these changes. I’ll get back on track eventually…