Use MNE's (instead of PySurfer's) brain class to plot conjunction results on cortical surface

Dear all,

I would like to plot the results from a conjunction analysis on a brain surface as is done here. This figure has been created with PySurfer.

Unfortunately, the code that was used to create that figure produces a severe plotting artifact instead of the intended result: The overlay looks speckled as shown here.

Now, Eric Larson has suggested on PySurfer's GitHub that one might want to try MNE's Brain class instead of PySurfer's, since it apparently has a similar API but uses PyVista as its backend.

This does remove the plotting artifact, and the surface overlay now looks nice and smooth as long as only a single overlay is plotted, as in this PySurfer example. But since the MNE Brain class does not yet support multiple overlays (after Brain = mne.viz.get_brain_class(), see the documentation of Brain.add_data and its as yet unsupported remove_existing argument), this defeats the purpose of the whole endeavour: I want to plot two overlays from two contrasts and their conjunction. :slightly_smiling_face:
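For reference, this is the check I mean; nothing beyond standard Python introspection is involved:

import mne

# MNE exposes its PyVista-based Brain class via this helper
Brain = mne.viz.get_brain_class()

# Inspect the add_data docstring, e.g. for the as yet unsupported
# remove_existing argument mentioned above
print(Brain.add_data.__doc__)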

  • How can I achieve the same result with MNE’s Brain class?
  • If you are familiar with PySurfer, how can I get rid of the plotting artifact shown above?

Thanks & best,
Michael

System info:

  • MNE version: 0.23.0
  • operating system: CentOS Linux release 7.9.2009 (Core)

To reproduce the plotting artifact (but this is actually a PySurfer problem, not an MNE one), create a conda environment with this environment.yml file:

channels:
  - conda-forge
dependencies:
  - python=3.9
  - ipython
  - pip
  - pip:
    - pysurfer

Then simply run the PySurfer example code hyperlinked above.

I find it a bit misleading that the documentation of MNE's Brain.add_data method claims to provide an interface similar to PySurfer's add_overlay, even though it does not actually overlay data…
But until the support for this is implemented, here is a possible workaround:

The key is the add_overlay method of the mesh instances used by MNE's Brain class, more specifically the LayeredMesh instances stored in brain._layered_meshes. This allows blending several layers of data, each with its own colormap. You can do something like:

brain.add_label('BA44_exvivo', color="C3", scalar_thresh=.4, alpha=0.) # dummy call for the overlay below to work

brain._layered_meshes['lh'].add_overlay(
    brain._data['lh']['smooth_mat'].dot(data),
    colormap='Spectral', opacity=0.3, name='Overlay1', rng=[0, 0.5])

Note that in the code above I manually interpolate the source-estimated data (in data) onto all vertices of the mesh using the smoothing matrix stored in the brain instance.
This add_overlay method has no default values, so you need to pass a value for every argument; rng is the range of values onto which the colormap is mapped.

Finally, it turned out that I had to call the add_label method to get this to work. I am adding a fully transparent label so that it does not show anything. I haven't looked into why this is necessary; curious to hear whether it works in your case.
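Putting this together for your case of two contrasts plus their conjunction, something along these lines should work. Treat this as an untested sketch: it relies on the private _layered_meshes attribute (which may change between MNE versions), the colormap names, opacities and ranges are only examples, and I reuse the sig1 / sig2 / conjunct names from the PySurfer example; the maps are assumed to already have one value per mesh vertex, otherwise apply the smooth_mat multiplication shown above first.

brain.add_label('BA44_exvivo', color="C3", scalar_thresh=.4,
                alpha=0.)  # dummy call so the overlays below render

# Blend the two contrasts and their conjunction as separate overlays
mesh = brain._layered_meshes['lh']
mesh.add_overlay(sig1, colormap='Reds', opacity=0.5,
                 name='contrast1', rng=[4, 30])
mesh.add_overlay(sig2, colormap='Blues', opacity=0.5,
                 name='contrast2', rng=[4, 30])
mesh.add_overlay(conjunct, colormap='Purples', opacity=0.5,
                 name='conjunction', rng=[4, 30])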


Dear Hugo,

Thanks for your reply. Unfortunately, your solution did not work for me. I implemented it in the code in question as follows (this is basically what the example code looks like):

import os.path as op
import numpy as np
from surfer import io
# from surfer import Brain
import mne

print(__doc__)

Brain = mne.viz.get_brain_class()

"""
Initialize the visualization.
"""
brain = Brain("fsaverage", "lh", "inflated", background="white")

"""
Read both of the activation maps in using
surfer's io functions.
"""
sig1 = io.read_scalar_data(op.join('example_data', "lh.sig.nii.gz"))
sig2 = io.read_scalar_data(op.join('example_data', "lh.alt_sig.nii.gz"))

"""
Zero out the vertices that do not meet a threshold.
"""
thresh = 4
sig1[sig1 < thresh] = 0
sig2[sig2 < thresh] = 0

"""
A conjunction is defined as the minimum significance
value between the two maps at each vertex.
"""
conjunct = np.min(np.vstack((sig1, sig2)), axis=0)


"""
Now load the numpy array as an overlay.
Use a high saturation point so that the
blob will largely be colored with lower
values from the lookup table.
"""
brain.add_data(sig1, fmin=4, fmax=30, transparent=True, colormap="Reds")

brain.add_label('BA44_exvivo', color="C3", scalar_thresh=.4, alpha=0.) # dummy call for the overlay below to work
brain._layered_meshes['lh'].add_overlay(
    brain._data['lh']['smooth_mat'].dot(data),
    colormap='Spectral', opacity=0.3, name='Overlay1', rng=[0, 0.5])

This returned a KeyError: 'smooth_mat'. So apparently there is no 'smooth_mat' key; the only keys I can see are dict_keys(['glyph_dataset', 'glyph_mapper', 'glyph_actor', 'array', 'vertices']).

My MNE version is 1.3.0 by the way.

I agree that the documentation is very misleading about add_data being similar to PySurfer's add_overlay. That clearly isn't the case.

For those who are interested: I cheekily circumvented the whole matter by postprocessing the resulting images with PIL to blend the overlays. Not very elegant, but at least something that finally works!

import os
import numpy as np
from surfer import io
import mne
from tempfile import NamedTemporaryFile
from PIL import Image
import matplotlib.pyplot as plt

op = os.path

# You'll need to change this based on where your PySurfer examples are stored
# and where your FreeSurfer subjects folder is located
base_dir = op.expanduser('~/experiments/PySurfer/examples')
os.environ['SUBJECTS_DIR'] = op.expanduser(
    '~/group/software/freesurfer/freesurfer60/subjects')

# Use brain class from MNE
Brain = mne.viz.get_brain_class()

"""
Read both of the activation maps in using
surfer's io functions.
"""
sig1 = io.read_scalar_data(op.join(base_dir, 'example_data', "lh.sig.nii.gz"))
sig2 = io.read_scalar_data(
    op.join(base_dir, 'example_data', "lh.alt_sig.nii.gz"))

"""
Zero out the vertices that do not meet a threshold.
"""
thresh = 4
sig1[sig1 < thresh] = 0
sig2[sig2 < thresh] = 0

"""
A conjunction is defined as the minimum significance
value between the two maps at each vertex.
"""
conjunct = np.min(np.vstack((sig1, sig2)), axis=0)

# This plots some surface overlay on a hemisphere, saves it to disk in a
# temporary file, then reads it as a PIL image. The temporary file is deleted
# afterwards.
def plot_overlay_on_brain(overlay: np.ndarray, hemisphere: str,
                          subject_id='fsaverage', surface='inflated',
                          **kwargs):

    with NamedTemporaryFile(suffix='.png') as ntf:

        brain = Brain(subject_id, hemisphere, surface, offscreen=True,
            cortex='low_contrast', background='white')
        brain.add_data(overlay, **kwargs)
        brain.save_image(ntf.name)
        brain.close()
        brain_img = Image.open(ntf.name)
        brain_img.load()  # read the image data before the temporary file is deleted

    return brain_img

# Create image of first overlay on brain
img1 = plot_overlay_on_brain(sig1, 'lh', fmin=4, fmid=5, fmax=30,
    transparent=True, colormap="Reds", colorbar=False, alpha=1)

plt.figure()
plt.imshow(np.asarray(img1))

# Create image of second overlay on brain
img2 = plot_overlay_on_brain(sig2, 'lh', fmin=4, fmid=5, fmax=30,
    transparent=True, colormap='Blues', colorbar=False, alpha=1)

plt.figure()
plt.imshow(np.asarray(img2))


# Combine the two
conj_img = Image.blend(img1, img2, .5)

# Et voilà!
plt.figure()
plt.imshow(np.asarray(conj_img))

plt.show()

Cheers,
Michael

I noticed you’re reading a NIFTI format file for the data you want to add:

sig1 = io.read_scalar_data(op.join('example_data', "lh.sig.nii.gz"))

Sorry, in my context I was working with MEG data, so Brain probably did create a smoothing matrix to interpolate between the smaller number of vertices in the forward model and the vertices of the full mesh used for display.

If your sig1 already has the same number of vertices as the surface mesh, you can simply add it without this matrix multiplication of the data:

brain._layered_meshes['lh'].add_overlay(sig1, colormap='Spectral', opacity=0.3, name='Overlay1', rng=[0, 0.5])
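If you want one snippet that works in both situations, you can check whether a smoothing matrix was actually created before applying it. Here is a small sketch based on the keys you listed; note that brain._data is a private attribute, so this remains a workaround rather than a stable API:

hemi_data = brain._data['lh']

# Interpolate to the full mesh only if Brain created a smoothing matrix
# (e.g. for MEG source estimates with fewer vertices than the display mesh)
if 'smooth_mat' in hemi_data:
    overlay = hemi_data['smooth_mat'].dot(sig1)
else:
    overlay = sig1  # already defined on every mesh vertex

brain._layered_meshes['lh'].add_overlay(
    overlay, colormap='Spectral', opacity=0.3,
    name='Overlay1', rng=[0, 0.5])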

Hope this helps!

Thanks, Hugo. In the meantime, I have opened a feature request for a clean solution. You can find it here. Eric Larson sounded quite confident that proper support for this functionality could be implemented soon.