Global field power

Hi list,

I recently noticed that MNE-Python uses three different formulas for calculating global field power (GFP), and I'm wondering why.
They are:

- The spatial standard deviation
line 1492 of /mne/viz/utils.py
gfp = evoked.data.std(axis=0)
This is the original version, as used e.g. in Lehmann & Skrandies (1980), http://dx.doi.org/10.1016/0013-4694(80)90419-8
Note that the FieldTrip folks write about global field power: "The naming implies a squared measure but this is not the case." (see the help text of the FT_GLOBALMEANFIELD function in the FieldTrip toolbox).

- Root mean square
line 2988 of /mne/viz/utils.py
combine_dict['gfp'] = lambda data: np.sqrt((data ** 2).mean(axis=1))
There is no subtraction of the mean across channels as would be the case for standard deviation.

- Again, root mean square
line 466 of /mne/viz/evoked.py
this_gfp = np.sqrt((D * D).mean(axis=0))

- Sum of squares
line 131 of /examples/time_frequency/plot_time_frequency_global_field_power.py
gfp = np.sum(average.data ** 2, axis=0)
Here we're dealing with power values from a time-frequency decomposition, so that's perhaps the reason for the missing mean and sqrt?

The MNE-Python glossary at /doc/glossary.rst describes GFP as "the standard deviation of the sensor values at each time point", consistent with Lehmann & Skrandies. That seems to be correct only for the first formula mentioned here.

Any suggestions as to when to use which version, and educated guesses as to whether these differences matter in practice, are highly welcome.
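
For concreteness, a quick comparison of the variants on made-up numbers standing in for evoked.data (channels x times; the reduction axis depends on the data layout, here channels are axis 0):

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0, 1, (64, 500))  # fake "evoked" data, channels x times

gfp_std = data.std(axis=0)                   # 1. spatial standard deviation
gfp_rms = np.sqrt((data ** 2).mean(axis=0))  # 2./3. root mean square
gfp_ssq = np.sum(data ** 2, axis=0)          # 4. sum of squares

# With numpy's default ddof=0, RMS^2 - SD^2 is exactly the squared mean
# across channels, so the first two agree whenever that mean is zero:
np.allclose(gfp_rms ** 2 - gfp_std ** 2, data.mean(axis=0) ** 2)  # True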

Thank you very much,
Christoph

Hey everybody, hey Christoph,

I believe that in this case the standard deviation (SD) and root mean
square (RMS) approaches should give roughly (if not exactly) the same
result. You are right that the RMS computation does not subtract the mean
across channels as the standard deviation does. However, if the mean is
zero, then the deviation of a value from the mean is just the value itself
(and the mean of the signal evoked.data should be pretty close to zero).
Thus, the two calculations should be equivalent. But I'm open for
discussion if this assumption is wrong.

A quick snippet to test this assumption:

import numpy as np

D = np.random.normal(0, 1, 1000)
D.std()
0.9586524583070871

np.sqrt((D * D).mean())
0.9586667427413401

Roughly the same. The results will differ if the mean is not zero.
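
To make the equivalence exact rather than approximate, subtract the mean first; with numpy's default ddof=0, the RMS of the demeaned values equals the SD:

Dc = D - D.mean()
np.sqrt((Dc * Dc).mean())  # identical to D.std()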

Best,
José

RMS is indeed the same as SD *when* the mean is zero. But I'm not sure
that's always the case in EEG, depending on which `picks` you have and
what the reference is.

There is also one more bit of fine print: the denominator for RMS is
pretty clear, but the denominator for SD may include a degrees-of-freedom
(ddof) correction. So even in the case of zero mean, computing the RMS by
hand vs. calling a library function for SD may yield different results,
depending on the library's default ddof. If I recall correctly,
scipy.stats actually uses a different default than numpy...
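
For the numpy side, at least, the default is ddof=0 (divide by N), which is what matches a hand-computed RMS of demeaned data:

import numpy as np

x = np.arange(10, dtype=float)
x.std()        # ddof=0: divides by N
x.std(ddof=1)  # ddof=1: divides by N - 1, slightly larger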

On a related topic: the R function scale() has a big note on this in its
documentation, because it allows centering (subtracting the mean) and/or
scaling (dividing by the RMS), and the combination of those two flags
yields plain centering, RMS scaling, the SD-based z-score, or the identity
transform.
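
In Python terms, the four combinations look roughly like this (a hypothetical helper mimicking scale()'s two flags, not an existing library function; note that R actually uses an n - 1 denominator in the scaling step, omitted here for simplicity):

import numpy as np

def scale_like(x, center=True, scale=True):
    # center subtracts the mean; scale divides by the RMS of the (possibly
    # centered) values. Both flags: z-score (SD). Only center: centering.
    # Only scale: RMS scaling. Neither: identity.
    x = np.asarray(x, dtype=float)
    if center:
        x = x - x.mean()
    if scale:
        x = x / np.sqrt((x ** 2).mean())
    return x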

Phillip