# mne_connectivity.envelope_correlation

Hello,
I'm a researcher studying brain waves.

I have a question about the envelope_correlation() function of mne_connectivity.
I use this function with `orthogonalize="pairwise", log=True, absolute=True`.

```python
# excerpt from mne_connectivity (abridged); assumes `import numpy as np`
# and that `n_nodes` (the number of signals) was set earlier in the function
def envelope_correlation(data, names=None, orthogonalize="pairwise",
                         log=False, absolute=True, verbose=None):
    for ei, epoch_data in enumerate(data):
        data_mag = np.abs(epoch_data)
        data_conj_scaled = epoch_data.conj()
        data_conj_scaled /= data_mag
        if log:
            data_mag *= data_mag
            np.log(data_mag, out=data_mag)

        # subtract means
        data_mag_nomean = data_mag - np.mean(data_mag, axis=-1, keepdims=True)

        # compute variances using linalg.norm (square, sum, sqrt) since mean=0
        data_mag_std = np.linalg.norm(data_mag_nomean, axis=-1)
        data_mag_std[data_mag_std == 0] = 1
        corr = np.empty((n_nodes, n_nodes))

        # loop over each signal in this specific epoch
        # which is now (n_signals, n_times) and compute envelope
        for li, label_data in enumerate(epoch_data):
            if orthogonalize is False:  # the new code
                label_data_orth = data_mag[li]
                label_data_orth_std = data_mag_std[li]
            else:
                label_data_orth = (label_data * data_conj_scaled).imag
                np.abs(label_data_orth, out=label_data_orth)
                # protect against invalid value -- this will be zero
                # after (log and) mean subtraction
                label_data_orth[li] = 1.
                if log:
                    label_data_orth *= label_data_orth
                    np.log(label_data_orth, out=label_data_orth)
                label_data_orth -= np.mean(label_data_orth, axis=-1,
                                           keepdims=True)
                label_data_orth_std = np.linalg.norm(label_data_orth, axis=-1)
                label_data_orth_std[label_data_orth_std == 0] = 1

            # correlation is dot product divided by variances
            corr[li] = np.sum(label_data_orth * data_mag_nomean, axis=1)
            corr[li] /= data_mag_std
            corr[li] /= label_data_orth_std
        if orthogonalize:
            # Make it symmetric (it isn't at this point)
            if absolute:
                corr = np.abs(corr)
            corr = (corr.T + corr) / 2.
```

What I'm curious about in the code above is the value assigned to `corr[li]` inside the `for li, label_data in enumerate(epoch_data):` loop:

```python
corr[li] = np.sum(label_data_orth * data_mag_nomean, axis=1)
```

For example, let's say the shape of the input data is (30, 19, 5000): (epochs, nodes, times).

Then the `epoch_data` variable has shape (19, 5000),
and `data_mag` and `data_mag_nomean` also have shape (19, 5000), while `data_mag_std` has shape (19,).

```python
for li, label_data in enumerate(epoch_data):
```

And after this code runs,
`label_data_orth` contains 19 orthogonalized signals for `label_data`.
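For concreteness, here is a minimal NumPy sketch (with made-up random complex data, not real MEG signals) that reproduces those shapes, including the broadcasting that produces `label_data_orth` and the fact that `data_mag_std` collapses the time axis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_epochs, n_nodes, n_times = 30, 19, 5000

# made-up complex "analytic" signals, only to illustrate the shapes
data = (rng.standard_normal((n_epochs, n_nodes, n_times))
        + 1j * rng.standard_normal((n_epochs, n_nodes, n_times)))

epoch_data = data[0]                                   # (19, 5000)
data_mag = np.abs(epoch_data)                          # (19, 5000)
data_mag_nomean = data_mag - data_mag.mean(axis=-1, keepdims=True)  # (19, 5000)
data_mag_std = np.linalg.norm(data_mag_nomean, axis=-1)             # (19,)
data_conj_scaled = epoch_data.conj() / data_mag        # (19, 5000)

label_data = epoch_data[0]                             # one channel, (5000,)
# broadcasting: (5000,) * (19, 5000) -> (19, 5000)
label_data_orth = (label_data * data_conj_scaled).imag
assert label_data_orth.shape == (n_nodes, n_times)
```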

What I'm curious about here is that, to calculate the Pearson correlation, the code uses
`np.sum(label_data_orth * data_mag_nomean, axis=1)`.
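(As a quick sanity check, the dot-product-divided-by-norms form used in the function really is Pearson's r once the means have been subtracted; here is a small standalone verification against `np.corrcoef` on made-up random data:)

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(1000)
b = rng.standard_normal(1000)

# subtract means, as the function does
a0 = a - a.mean()
b0 = b - b.mean()

# dot product divided by the norms (norms == sqrt of variance sums, since mean is 0)
r = np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0))

# matches NumPy's Pearson correlation coefficient
assert np.isclose(r, np.corrcoef(a, b)[0, 1])
```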

I wonder why `data_mag_nomean` is used here instead of `data_mag_nomean[li]`.
Shouldn't `corr[li]` contain the correlations between `label_data` and the 19 signals orthogonalized against `label_data`?

If there’s anything I misunderstood, please tell me.

Thank you.

Ping @adam2392. The line in question is here:

```python
corr[li] = np.sum(label_data_orth * data_mag_nomean, axis=1)
corr[li] /= data_mag_std
```

I think this code should be modified as follows:

```python
corr[li] = np.sum(label_data_orth * data_mag_nomean[li], axis=1)
corr[li] /= data_mag_std[li]
```

Pinging @larsoner. I remember you contributing the orthogonalize portion of the code(?). Do you happen to know the right answer? I'm going to re-read this a few times to see if I can spot the issue…

@Junseok726 apologies, I didn't work on the original code; I was only involved in refactoring it. From what I can tell, though, I tried your change and it broke a unit test.

In `tests/test_envelope.py` in the mne-tools/mne-connectivity repository on GitHub, there is a unit test that was copied over from testing against the output of FieldTrip.

The resulting error shows:

```
mne_connectivity/tests/test_envelope.py:85: in test_envelope_correlation
    assert_allclose(
E   AssertionError:
E   Not equal to tolerance rtol=1e-07, atol=0
E
E   Mismatched elements: 6 / 10 (60%)
E   Max absolute difference: 0.6672455
E   Max relative difference: 8.65513432
E    x: array([0.      , 0.796435, 0.744338, 0.774606, 0.      , 0.70164 ,
E          0.734013, 0.      , 0.698528, 0.      ])
E    y: array([0.      , 0.195495, 0.077092, 0.252088, 0.      , 0.098989,
E          0.099146, 0.      , 0.073179, 0.      ])
```

So it seems that applying your proposed fix introduces an issue. If you end up figuring out the source of your confusion, I would love to know!

It seems like, at the very least, we could add some extra in-line comments or documentation to the function to explain what's happening.

Hello,

Thank you for letting me know; I didn't realize I could verify this against the test code.

After reviewing it, I see that I misunderstood.

I thought the `label_data_orth` variable contained a single vector: the `label_data` signal orthogonalized against the other `n_label` channels.

However, `label_data_orth` actually contains one orthogonalized copy of `label_data` per channel: row `k` is `label_data` orthogonalized with respect to channel `k`. That is why each row is correlated with the matching row of the full `data_mag_nomean`, not with `data_mag_nomean[li]`.
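To make that resolution concrete, here is a small pure-NumPy check (random made-up data) that the pairwise orthogonalization `(label_data * data_conj_scaled).imag` really produces one per-channel decomposition of `label_data`: the real part is the component in phase with each channel, the imaginary part the component orthogonal to it, and recombining the two recovers `label_data` exactly for every channel. (The function then takes the absolute value of the orthogonal part to form envelopes.)

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_times = 4, 1000
epoch_data = (rng.standard_normal((n_nodes, n_times))
              + 1j * rng.standard_normal((n_nodes, n_times)))

data_mag = np.abs(epoch_data)
data_conj_scaled = epoch_data.conj() / data_mag

label_data = epoch_data[0]                        # channel 0, shape (n_times,)
along = (label_data * data_conj_scaled).real      # in phase with each channel
orth = (label_data * data_conj_scaled).imag       # orthogonal to each channel

# row k of `orth` is channel 0 orthogonalized w.r.t. channel k:
# recombining both components against channel k recovers channel 0 exactly
recon = (along + 1j * orth) * epoch_data / data_mag
assert np.allclose(recon, label_data)             # broadcasts across all rows
```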

Thank you for noticing my mistake.


Hi @Junseok726, if you feel there is documentation to improve, a PR/issue documenting this would be welcome!