Global field power

Thanks a lot José and Phillip! I wasn't aware of potential differences in the implementation of the denominator across libraries.

To follow up: It's clear that, if the mean across channels = 0, then RMS and SD are equal. Can we assume that for MEG data?
For EEG, with average reference and 'picking' all channels, that's the case. But if we consider taking GFP as a summary statistic for a subset of channels, e.g. calculating the activation across a cluster of channels that shows up as significant according to some cluster statistics, then there should be a difference between RMS and SD, right?
If you have any further thoughts on that issue, please let me know. Probably, the difference between RMS and SD doesn't matter that much, in particular when taking the same measure throughout your analysis. I guess it should be mentioned in a methods section, though.

Thanks,
Christoph

I don't think I missed any more messages in the thread, but my apologies
if I did.....

Thanks a lot José and Phillip! I wasn't aware of potential differences in the implementation of the denominator across libraries.

To follow up: It's clear that, if the mean across channels = 0, then RMS and SD are equal. Can we assume that for MEG data?

I would say that in the absence of SQUID jumps, then probably.

For EEG, with average reference and 'picking' all channels, that's the case.

Probably. Recall that the entire recording can have a DC offset, but
this is fairly easy to remove (and is usually removed at the channel
level because it's viewed as a measurement artifact and not an inherent
property of the underlying brain signal).
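A minimal sketch of that channel-level offset removal, using plain NumPy on a made-up array rather than any MNE call:

import numpy as np

data = np.random.default_rng(0).normal(0, 1, size=(32, 1000))  # channels x times
data += np.arange(32)[:, None]             # give each channel its own DC offset
data -= data.mean(axis=1, keepdims=True)   # remove each channel's offset over time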

But if we consider taking GFP as a summary statistic for a subset of channels, e.g. calculating the activation across a cluster of channels that shows up as significant according to some cluster statistics, then there should be a difference between RMS and SD, right?

Probably. Almost definitely if something shows up as significant, because
the usual null hypothesis is no difference from zero. But
(1) That's for the difference wave. Individual conditions may not differ
from zero.
(2) Appropriate de-meaning/removal of the DC offset could probably
change this. That said, I've heard some compelling arguments that we
should be very careful about demeaning at the trial level. Consider an
oddball paradigm: if you de-mean / take the entire trial as a baseline,
then you could introduce an artifactual difference for the non-P300
parts of the oddball trials compared to the standard trials.
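As a minimal sketch of that subset effect (plain NumPy on simulated data, not quoting any MNE function): after average-referencing, the mean across all channels is zero and SD equals RMS, but over a cluster of channels the mean is generally nonzero and the two diverge.

import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(0, 1, size=(64, 200))        # channels x times
data -= data.mean(axis=0, keepdims=True)       # average reference: zero mean across channels

sd_all = data.std(axis=0)                      # GFP as spatial SD over all channels
rms_all = np.sqrt((data ** 2).mean(axis=0))    # GFP as RMS over all channels
print(np.allclose(sd_all, rms_all))            # True: channel mean is zero at every time point

cluster = data[:10]                            # a hypothetical "significant" cluster of channels
print(np.allclose(cluster.std(axis=0),
                  np.sqrt((cluster ** 2).mean(axis=0))))   # False: cluster mean is generally nonzero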

If you have any further thoughts on that issue, please let me know. Probably, the difference between RMS and SD doesn't matter that much, in particular when taking the same measure throughout your analysis. I guess it should be mentioned in a methods section, though.

Definitely! It's one of those little things that has a surprising amount
of subtlety in the details that doesn't matter ... until it _really_
matters.

Phillip

Hi Phillip,

I don't think I missed any more messages in the thread, but my apologies
if I did.....

No, you didn't miss any more messages.

To follow up: It's clear that, if the mean across channels = 0, then RMS and SD are equal. Can we assume that for MEG data?

I would say that in the absence of SQUID jumps, then probably.

Good to know.

For EEG, with average reference and 'picking' all channels, that's the case.

Probably. Recall that the entire recording can have a DC offset, but
this is fairly easy to remove (and is usually removed at the channel
level because it's viewed as a measurement artifact and not an inherent
property of the underlying brain signal).

Well, 'average reference' means that the mean across all channels is zero, by definition. Any DC offset is removed by the average reference, isn't it?

But if we consider taking GFP as a summary statistic for a subset of channels, e.g. calculating the activation across a cluster of channels that shows up as significant according to some cluster statistics, then there should be a difference between RMS and SD, right?

Probably. Almost definitely if something shows up as significant, because
the usual null hypothesis is no difference from zero. But
(1) That's for the difference wave. Individual conditions may not differ
from zero.
(2) Appropriate de-meaning/removal of the DC offset could probably
change this. That said, I've heard some compelling arguments that we
should be very careful about demeaning at the trial level. Consider an
oddball paradigm: if you de-mean / take the entire trial as a baseline,
then you could introduce an artifactual difference for the non-P300
parts of the oddball trials compared to the standard trials.

Very good point!

If you have any further thoughts on that issue, please let me know. Probably, the difference between RMS and SD doesn't matter that much, in particular when taking the same measure throughout your analysis. I guess it should be mentioned in a methods section, though.

Definitely! It's one of those little things that has a surprising amount
of subtlety in the details that doesn't matter ... until it _really_
matters.

Agreed.

Best,
Christoph

Date: Thu, 23 Jul 2020 13:07:47 +0200
From: Phillip Alday <phillip.alday at mpi.nl>
Subject: Re: [Mne_analysis] Global field power

RMS is indeed the same as SD *when* the mean is zero. But I'm not sure
that's always the case in EEG, depending on which `picks` you have and
what the reference is.

There is also one more bit of fine print: the denominator for RMS is
pretty clear, but the denominator for SD may have a degrees-of-freedom
correction. So even in the case of zero mean, computing the RMS by hand
vs. calling a library function for SD may yield different results,
depending on the library defaults for df. If I recall correctly,
scipy.stats actually uses a different default than numpy....
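A small sketch of that denominator issue with NumPy alone (the ddof argument controls the degrees-of-freedom correction; other libraries' defaults are not checked here):

import numpy as np

x = np.random.default_rng(1).normal(0, 1, 1000)
x -= x.mean()                     # force a (numerically) zero mean

rms = np.sqrt((x ** 2).mean())    # denominator n
sd0 = x.std(ddof=0)               # NumPy default: denominator n, matches the RMS here
sd1 = x.std(ddof=1)               # df-corrected: denominator n - 1, slightly larger
print(np.isclose(rms, sd0), rms < sd1)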

On a related topic: the R function scale() has a big note on this in its
documentation, because it allows for centering (subtracting the mean)
and/or scaling, and the combinations of these two flags yield straight
centering, RMS scaling, SD scaling (z-scoring), or the identity transform.
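For readers without R handy, a rough Python analog of those four combinations (a hypothetical helper, not the actual R scale() implementation):

import numpy as np

def scale_like_r(x, center=True, scale=True):
    """Rough analog of R's scale(): optional centering, then optional scaling.

    With centering, the scale factor is the sample SD; without centering,
    it is the root mean square of the raw values (n - 1 in the denominator).
    """
    x = np.asarray(x, dtype=float)
    if center:
        x = x - x.mean()
    if scale:
        x = x / np.sqrt((x ** 2).sum() / (len(x) - 1))
    return x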

Phillip

Hey everybody, hey Christoph,

I believe in this case the results of the standard deviation (SD) and
root mean square (RMS) approaches should be roughly the same (if not
identical).
You are right that the RMS computation does not subtract the mean
across channels, as would be the case for the standard deviation.
However, if the mean is zero, then the difference of a value from the
mean is just the value itself (the mean of the signal in evoked.data
should be pretty close to zero). Thus, the results of the two
calculations should be equivalent. But I'm open for discussion if this
assumption is wrong.

A quick snippet to test this assumption:

>>> import numpy as np
>>> D = np.random.normal(0, 1, 1000)
>>> D.std()
0.9586524583070871
>>> np.sqrt((D * D).mean())
0.9586667427413401

Roughly the same. The results should vary if you assume a mean != 0.
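Extending that snippet with a clearly nonzero mean (same caveat: simulated values, not real data) makes the divergence explicit:

>>> D = np.random.normal(5, 1, 1000)     # mean shifted to ~5 instead of 0
>>> D.std()                              # ~1: the SD subtracts the mean first
>>> np.sqrt((D * D).mean())              # ~sqrt(5**2 + 1) ~ 5.1: the RMS keeps the offset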

Best,
José

christoph at huber-huber.at wrote:

  Hi list,

  I recently came across the fact that mne python uses 3 different
  formulas for calculating global field power (GFP), and I'm wondering why.
  They are:

  - The spatial standard deviation
  line 1492 of /mne/viz/utils.py
  gfp = evoked.data.std(axis=0)
  This is the original version, as e.g. in Lehmann & Skrandies (1980),
  http://dx.doi.org/10.1016/0013-4694(80)90419-8
  Note that the FieldTrip folks write about global field power: "The
  naming implies a squared measure but this is not the case." (see the
  help text of the FT_GLOBALMEANFIELD function of the FieldTrip
  toolbox).

  - Root mean square
  line 2988 of /mne/viz/utils.py
  combine_dict['gfp'] = lambda data: np.sqrt((data ** 2).mean(axis=1))
  There is no subtraction of the mean across channels as would be
  the case for standard deviation.

  - Again, root mean square
  line 466 of /mne/viz/evoked.py
  this_gfp = np.sqrt((D * D).mean(axis=0))

  - Sum of squares
  line 131 of
  /examples/time_frequency/plot_time_frequency_global_field_power.py
  gfp = np.sum(average.data ** 2, axis=0)
  Here, we're dealing with power values of a time-frequency
  decomposition, so that's perhaps the reason for the missing mean
  and sqrt?

  The mne python glossary at /doc/glossary.rst describes GFP as "the
  standard deviation of the sensor values at each time point",
  consistent with Lehmann & Skrandies. That seems to be correct only
  for the first formula mentioned here.
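  As a minimal sketch of how these variants relate (plain NumPy on
  made-up data, not the MNE code itself): when the mean across channels
  is zero, the SD and RMS versions coincide, while the sum of squares
  differs only by the channel count and the missing square root.

  import numpy as np

  rng = np.random.default_rng(0)
  data = rng.normal(0, 1, size=(64, 500))          # channels x times
  data -= data.mean(axis=0, keepdims=True)         # e.g. average-referenced EEG

  gfp_sd = data.std(axis=0)                        # spatial standard deviation
  gfp_rms = np.sqrt((data ** 2).mean(axis=0))      # root mean square across channels
  gfp_ss = np.sum(data ** 2, axis=0)               # sum of squares (no mean, no sqrt)

  print(np.allclose(gfp_sd, gfp_rms))                         # True for zero channel mean
  print(np.allclose(gfp_ss, data.shape[0] * gfp_rms ** 2))    # sum of squares = n_channels * RMS**2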

  Any suggestions as to when to use which version, and educated
  guesses as to whether these differences matter in practice, are
  highly welcome.

  Thank you very much,
  Christoph

<snip>

For EEG, with average reference and 'picking' all channels, that's
the case.

Probably. Recall that the entire recording can have a DC offset, but
this is fairly easy to remove (and is usually removed at the channel
level because it's viewed as a measurement artifact and not an inherent
property of the underlying brain signal).

Well, 'average reference' means that the mean across all channels is
zero, by definition. Any DC offset is removed by the average reference,
isn't it?

Ah, yes, you're absolutely right, but to emphasize the point for anybody
not reading the whole discussion: this is true as long as the same
channels were used for computing the average reference and the SD/GFP/RMS.

Phillip
