Comparing two Evoked time courses

I have two Evoked time courses corresponding to two conditions, similar to the example presented here. I would like to find all time periods where these conditions differ significantly (e.g. at a confidence level of 0.95). The obvious approach is to find all periods where the two confidence intervals do not overlap. However, I was wondering how to account for multiple comparisons. We are effectively performing one test per time point, and even though these tests are certainly not independent (there is clear temporal dependence between neighboring samples), some form of correction still seems advisable.

Is there a best practice to find significantly different time periods in two Evokeds? Is correction for multiple comparisons required/recommended?

In the past I’ve used permutation_cluster_1samp_test for this purpose (or permutation_cluster_test if it’s not a within-subject comparison). Here’s a helper function from a past paper where I did this for 1-D pupillometry data: 2018-pupil-lisdiff/pupil-stats.py at 2f8695df019b7a89b5d1c068fcc40f0e3054959f · LABSN-pubs/2018-pupil-lisdiff · GitHub

The clustering results can tell you which time spans are significant, which can then be used for shading the significant regions on the plot, as here: https://raw.githubusercontent.com/LABSN-pubs/2018-pupil-lisdiff/master/figures/manuscript/capd-pupil-deconv-attn-by-space-group.pdf
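To illustrate the idea behind those functions, here is a minimal, self-contained sketch of a cluster-based permutation test on fake 1-D data. This is not MNE's implementation (in practice you would call `mne.stats.permutation_cluster_test` / `permutation_cluster_1samp_test`); the threshold, the cluster-mass statistic, and the permutation count are illustrative choices, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

def t_stat(a, b):
    """Welch-style two-sample t-statistic at every time point."""
    na, nb = len(a), len(b)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(va / na + vb / nb)

def clusters_above(t, thresh):
    """(start, stop) index pairs and summed |t| for supra-threshold runs."""
    mask = np.abs(t) > thresh
    out, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            out.append(((start, i), np.abs(t[start:i]).sum()))
            start = None
    if start is not None:
        out.append(((start, len(mask)), np.abs(t[start:]).sum()))
    return out

def cluster_perm_test(a, b, thresh=2.0, n_perm=500, seed=0):
    """Cluster p-values for a vs. b via label permutation (independent groups)."""
    prng = np.random.default_rng(seed)
    observed = clusters_above(t_stat(a, b), thresh)
    pooled, n_a = np.vstack([a, b]), len(a)
    null_max = np.zeros(n_perm)
    for p in range(n_perm):
        idx = prng.permutation(len(pooled))  # shuffle condition labels
        pa, pb = pooled[idx[:n_a]], pooled[idx[n_a:]]
        masses = [m for _, m in clusters_above(t_stat(pa, pb), thresh)]
        null_max[p] = max(masses) if masses else 0.0
    # compare each observed cluster mass to the max-mass null distribution
    return [(span, (null_max >= mass).mean()) for span, mass in observed]

# Fake data: 20 "subjects" per condition, 100 time points; condition b
# gets an effect between samples 40 and 60.
a = rng.normal(size=(20, 100))
b = rng.normal(size=(20, 100))
b[:, 40:60] += 1.0

for (start, stop), p in cluster_perm_test(a, b):
    print(f"samples {start}-{stop}: p = {p:.3f}")
```

The spans with small p-values are exactly what you would use to shade the significant regions of the plot; for a within-subject comparison you would instead permute the sign of the per-subject difference waves, which is what the `1samp` variant does.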


Thanks @drammock, this looks great! I’ll try to adapt it to my data. Although this looks like it should be the solution, I won’t mark it as such just yet, in case others have alternative suggestions (I’m sure there is no single correct way). I’ve read suggestions that bootstrapping might also be an option, but I didn’t find any authoritative source…

I second the use of cluster-based permutation tests as a way to control for multiple comparisons. However, note that there is a caveat about how the outcome of such tests can be interpreted. The FieldTrip wiki has a good FAQ on this that also lists some relevant papers:
https://www.fieldtriptoolbox.org/faq/how_not_to_interpret_results_from_a_cluster-based_permutation_test/

For completeness: there has been some debate about whether it might be acceptable to interpret the “main cluster”. In my personal view, this argument is still not entirely clean, since the test compares H0 against HA as a whole rather than licensing inferences about individual clusters’ location or extent.
