Baseline correcting pre-stimulus segments for covariance estimation

Hi All,

I have a set of MEG data collected in a sentence processing paradigm, where
the critical words occur 6-7 words into the sentence. I'd like to look at
source-level evoked responses to these words via minimum-norm estimates,
without applying baseline correction.

In this scenario, should I still apply baseline correction to the
pre-stimulus intervals that I use to estimate the noise covariance? Note
that in this design, pre-stimulus is actually pre-sentence, meaning that
there is about 4 seconds of data between these windows and the onset of the
epochs that will be inverted to source space.

In an attempt to address this question, I've plotted whitened evoked responses
from the start of the sentence to the target words using different methods
of covariance estimation, with and without baseline correction applied to
the 100 ms windows from which I estimated the covariance. I've attached an
example from one subject, and the pattern shown there is consistent across
quite a few subjects in the sample.

In general, it looks like if I apply baseline correction to the window from
which I estimate covariance, the global field power of the whitened
response never reaches 1, even in the window in which the covariance was
estimated. In contrast, the GFP in the whitened response without baseline
correction looks more like what I'd expect to see. This pattern seems
unusual to me, but does it imply that I should not be applying baseline
correction here? Or are there other factors that should be considered?
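For reference, the "whitened GFP near 1 in noise" expectation can be
checked on synthetic data. This is a plain-NumPy sketch, not the actual
pipeline (all names here are made up): draw Gaussian noise with a known
channel covariance, estimate an empirical covariance from one segment,
and whiten a fresh segment with the inverse matrix square root of that
estimate. If the noise model fits, the per-sample RMS across channels
should hover around 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_samp = 30, 5000

# random symmetric positive-definite "sensor" covariance
A = rng.standard_normal((n_ch, n_ch))
true_cov = A @ A.T / n_ch + 0.1 * np.eye(n_ch)

# "noise segment" used to estimate the empirical covariance
L_chol = np.linalg.cholesky(true_cov)
noise = L_chol @ rng.standard_normal((n_ch, n_samp))
cov_est = noise @ noise.T / n_samp

# whitener: inverse matrix square root of the estimated covariance
evals, evecs = np.linalg.eigh(cov_est)
whitener = evecs @ np.diag(evals ** -0.5) @ evecs.T

# fresh noise from the same distribution, whitened
fresh = L_chol @ rng.standard_normal((n_ch, n_samp))
white = whitener @ fresh
gfp = np.sqrt((white ** 2).mean(axis=0))  # per-sample RMS over channels
print(gfp.mean())  # close to 1 for pure noise
```

This is only the idealized case (full-rank covariance, plenty of
samples); MNE's plot_white additionally handles rank deficiency and
scaling between channel types.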

Thanks!

Graham

Here is a sample of the code used to generate the whitened responses for
the empirical estimator with/without baseline correction:

raw = mne.io.read_raw_fif(fname_raw, preload=True)
events = mne.read_events(fname_event)
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
                       exclude=bads)
epochstargetFull = mne.Epochs(raw, events, event_id=event_id,
                              tmin=-4.4, tmax=1.2, decim=5,
                              reject=dict(mag=2e-12), baseline=None,
                              picks=picks, on_missing='ignore')
evokedtargetFull = epochstargetFull.average()

method = 'empirical'

# covariance with baseline correction applied
epochscov = mne.Epochs(raw, events, event_id=event_id, tmin=-4.4,
                       tmax=-4.3, decim=5, reject=dict(mag=2e-12),
                       baseline=(-4.4, -4.3), picks=picks,
                       on_missing='ignore')
cov = mne.compute_covariance(epochscov, tmin=-4.4, tmax=-4.3,
                             method=method)
tmp = evokedtargetFull.plot_white(cov, show=False)
tmp.savefig('topright_empirical_Baselined.png')
del epochscov, cov, tmp

# covariance without baseline correction applied
epochscov = mne.Epochs(raw, events, event_id=event_id, tmin=-4.4,
                       tmax=-4.3, decim=5, reject=dict(mag=2e-12),
                       baseline=None, picks=picks, on_missing='ignore')
cov = mne.compute_covariance(epochscov, tmin=-4.4, tmax=-4.3,
                             method=method)
tmp = evokedtargetFull.plot_white(cov, show=False)
tmp.savefig('topleft_empirical_NoBaseline.png')
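One mechanical difference between the two covariance estimates may be
worth keeping in mind. This NumPy sketch (synthetic data, not the
pipeline above) shows that subtracting each epoch's own mean over a
short window removes a degree of freedom per channel, and, more
importantly, strips any per-epoch offset or slow drift, so the
baselined covariance is systematically smaller whenever such
low-frequency structure is present:

```python
import numpy as np

rng = np.random.default_rng(1)
n_epochs, n_samp = 100, 21  # ~100 ms at 200 Hz (decim=5 from 1 kHz)

# white noise plus a random per-epoch DC offset (slow-drift surrogate)
noise = rng.standard_normal((n_epochs, n_samp))
offsets = 2.0 * rng.standard_normal((n_epochs, 1))
data = noise + offsets

var_raw = data.var()  # includes the between-epoch offset variance
# per-epoch baseline correction over the same window
demeaned = data - data.mean(axis=1, keepdims=True)
var_base = demeaned.var()  # ~ (n_samp - 1) / n_samp of the noise alone

print(var_raw, var_base)
```

Because the whitener scales inversely with the estimated noise, a
smaller (or larger) covariance shifts the whitened amplitudes up (or
down), so some difference between the two whitened GFP traces is
expected purely from this, before any physiology enters.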
[Attachment: CovarianceComparison.png]
http://mail.nmr.mgh.harvard.edu/pipermail/mne_analysis/attachments/20170510/a1b6e6b8/attachment-0001.png

It would be nice to actually see the baseline.
The question is whether it is roughly zero mean or not. For the
covariance noise model to be appropriate, the data should be zero mean,
which is typically roughly the case after baseline correction or
high-pass filtering. What is plausible here also depends on your data:
using a noise covariance relates your data to the amplitude structure of
whatever you declare to be noise. Try plotting this with an empty-room
noise covariance, for example. Overall, from a distance, your plots look
plausible to me.
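The zero-mean check can also be quantified rather than eyeballed. A
sketch of the idea on synthetic data (in a real pipeline, `seg` would
come from something like `epochs.get_data()` over the baseline window;
the names and numbers here are made up): compare each channel's mean
over the window to the standard error expected if that channel were
zero-mean noise:

```python
import numpy as np

rng = np.random.default_rng(2)
n_epochs, n_ch, n_samp = 60, 30, 21

# stand-in for the pre-sentence segments; channel 0 gets a DC offset
seg = rng.standard_normal((n_epochs, n_ch, n_samp))
seg[:, 0, :] += 0.8

# per-channel mean across epochs and samples, in standard-error units
means = seg.mean(axis=(0, 2))
stderr = seg.std(axis=(0, 2)) / np.sqrt(n_epochs * n_samp)
z = means / stderr

print(np.abs(z) > 5)  # flags channels with a clearly nonzero mean
```

One caveat: neighboring samples in real MEG are correlated, so the
effective sample count is smaller than n_epochs * n_samp and the
threshold should be taken loosely; the point is only to rank channels by
how far their baseline sits from zero.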

Hi Denis,

Good point. Here are two images showing the initial segments of the
evoked responses so that you can see the baseline, with and without
baseline correction applied; the baseline window is -4400 to -4300 ms.
They don't suggest to me any large deviation from zero mean without the
baseline correction.

In that case, do you think baseline correction offers enough of an
improvement to be worth applying, considering how it changes the
whitened responses? Or is it safer to just apply it anyway, to better
ensure that the data are zero mean?


_______________________________________________
Mne_analysis mailing list
Mne_analysis at nmr.mgh.harvard.edu
Mne_analysis Info Page


[Attachment: GrandAverage_noBaselineCor.png]
http://mail.nmr.mgh.harvard.edu/pipermail/mne_analysis/attachments/20170510/57803a24/attachment-0002.png
[Attachment: GrandAverage_withBaselineCor.png]
http://mail.nmr.mgh.harvard.edu/pipermail/mne_analysis/attachments/20170510/57803a24/attachment-0003.png