Baseline calculation

Hi all,

I am currently in the process of performing preprocessing on my MEG/EEG data and am trying to correctly calculate a baseline. I am working with whole sentences that have several triggers within each sentence to mark particular areas of interest (for example: the noun, the verb). The current way that we calculate the baseline is from 100 ms before each of the triggers. Is there any way that I can calculate the baseline from the first trigger of each of the sentences (which happens to correspond to the beginning of the sentence) and apply that to each of the triggers within that particular sentence instead of having to calculate it 100 ms before each of the triggers? I am asking this because as of right now we are basically forced to calculate the baseline while the subject is hearing the sentences, which can bias the data.

Thanks in advance,

Reid

Reid Vancelette
Research Assistant to
Dr. David Caplan M.D., Ph.D.

Massachusetts General Hospital
Neuropsychology Lab
175 Cambridge Street, Suite 340
Boston, MA 02114

Phone: (617) 724-8846
Email: rvancelette at partners.org


Hi Vancelette,

Is the time between the beginning of a sentence and a given trigger always
the same across items? (That should be the case if the number of words is
always the same and you used serial visual presentation; if the position of
a given trigger varies across items and/or you used auditory presentation,
though, then the latencies probably vary.) If this latency is always the
same then you can just add that latency to what the baseline interval would
be for each trigger. For instance, if the baseline for the first trigger
were -100 to 0, and the second trigger always appeared 800 ms after the
first, then the baseline for the second could be -900 to -800.
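The window arithmetic above can be sketched as a tiny helper (the function name and millisecond convention are my own, not anything from MNE; times are expressed relative to the trigger, with negative values before it):

```python
def baseline_window(trigger_latency_ms, baseline_ms=100):
    """Return (start, end) in ms, relative to a trigger that occurs
    `trigger_latency_ms` after sentence onset, for a baseline that
    spans `baseline_ms` just before sentence onset."""
    # The window ends at sentence onset, i.e. `trigger_latency_ms`
    # before the trigger, and extends `baseline_ms` further back.
    end = -trigger_latency_ms
    start = end - baseline_ms
    return start, end

# First trigger at sentence onset: the usual -100 to 0 ms window.
print(baseline_window(0))    # -> (-100, 0)
# Second trigger 800 ms after the first, as in the example above.
print(baseline_window(800))  # -> (-900, -800)
```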

Best,
Steve

Hi Steven and everyone else,

We used auditory presentation for our study, which means the latencies vary depending on the word and the sentence. Unfortunately, we didn't normalize the lengths of the segments so that they matched. Is there any way to get around this problem? I know the lengths of each of the segments in all of the sentences: this is how we initially created the triggers for our experiment.
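(Since the segment lengths are known, the onset latency of each trigger relative to sentence onset is just the running sum of the preceding segment durations. A sketch with made-up durations, not our actual stimuli:)

```python
# Hypothetical per-word segment durations (ms) for one sentence.
segment_ms = [450, 320, 510, 280]

# Each trigger's latency relative to sentence onset is the cumulative
# duration of the segments that precede it; the first trigger is at 0.
latencies = [0]
for dur in segment_ms[:-1]:
    latencies.append(latencies[-1] + dur)

print(latencies)  # -> [0, 450, 770, 1280]
```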

-Reid

Reid,

I see. I imagine it would be possible to do the baseline correction in MATLAB,
since you have a list of the latencies of each trigger. (I personally don't
have experience with the MNE MATLAB toolbox, but once you've gotten the data
into MATLAB, the baseline correction should be straightforward if you're
familiar with MATLAB itself: iterate through each trial, select the proper
latency for that trial from your list, calculate and apply a baseline
correction based on that latency, then save the data and export back out to
.fif.) I'm not aware of a way to do this with MNE functions, although someone
else on the list might be.
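For what it's worth, the per-trial loop described above might look something like this in NumPy (a sketch under assumed conventions, not the MNE MATLAB toolbox API: the epochs array, latency list, and argument names are all placeholders for data you'd export yourself):

```python
import numpy as np

def baseline_correct(epochs, onset_latencies_ms, sfreq, tmin_ms,
                     baseline_ms=100):
    """Subtract a per-trial baseline taken just before sentence onset.

    epochs             : array of shape (n_trials, n_channels, n_samples)
    onset_latencies_ms : sentence onset relative to each trial's trigger,
                         in ms (negative = onset precedes the trigger)
    sfreq              : sampling frequency in Hz
    tmin_ms            : time of the first epoch sample relative to the
                         trigger, in ms
    """
    corrected = epochs.copy()
    for i, onset in enumerate(onset_latencies_ms):
        # Convert the per-trial baseline window (the `baseline_ms`
        # before sentence onset) into sample indices within the epoch.
        start = int(round((onset - baseline_ms - tmin_ms) * sfreq / 1000.0))
        stop = int(round((onset - tmin_ms) * sfreq / 1000.0))
        # Mean over the baseline samples, per channel, then subtract
        # from the whole trial.
        mean = corrected[i, :, start:stop].mean(axis=1, keepdims=True)
        corrected[i] -= mean
    return corrected

# Toy usage: 2 trials, 1 channel, 2 s epochs at 1000 Hz starting 1 s
# before the trigger; sentence onsets 800 and 600 ms before the trigger.
epochs = np.zeros((2, 1, 2000))
epochs[0] += 3.0
epochs[1] += 7.0
out = baseline_correct(epochs, [-800, -600], sfreq=1000.0, tmin_ms=-1000.0)
# Each trial is constant, so subtracting its own baseline zeroes it out.
```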

Best,
Steve

I agree, something along those lines is probably easiest.

For future experiments, there are better options for designing your
triggers, which will make these sorts of calculations simpler.

D
