Claims by A. Delorme about MNE filters

Honestly, I wasn’t sure where to put this, but I felt it’s probably not a GitHub issue …

Arnaud Delorme put out a paper on EEG preprocessing that includes a paragraph claiming that MNE-Python’s filters perform worse than the alternatives:

Not all filters are created equal. Most software packages have various options and parameters to design filters, and it was impractical to test them all. We tested the default filter in the publicly available software packages EEGLAB, MNE, Brainstorm, and FieldTrip (see “Methods” section and Supplementary Figure 2) and compared them to the reference filter in Supplementary Figure 2. The ERPLAB reference filter performed better than all other filters for the Oddball dataset (p < 0.0001) but not for the Face and Go/No-go datasets. For the Go/No-go dataset, the MNE filter performed significantly worse than the ERPLAB reference filter (p < 0.0001), the Brainstorm filter (p < 0.005), and the EEGLAB filter (trend at p = 0.02). The MNE filter performed worse for the Face dataset than the EEGLAB filter (p < 0.002).

Now I have to admit I’m not convinced by many of the claims in this paper, including those about filtering, but I thought it might still be worth taking a look and comparing MNE’s filters to those in, say, EEGLAB.
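
For anyone who wants to poke at this, here’s a minimal sketch of how one could inspect MNE’s default FIR design and compare its response against the other packages; the sampling rate and band edges below are placeholders, not values from the paper:

```python
import mne

sfreq = 250.0  # placeholder sampling rate in Hz, not from the paper
# MNE's default FIR design: Hamming-windowed sinc ('firwin'),
# zero-phase, with the filter length chosen automatically from
# the transition bandwidths.
h = mne.filter.create_filter(
    data=None, sfreq=sfreq, l_freq=0.1, h_freq=40.0,
    method='fir', fir_design='firwin', verbose=True)
# Plot impulse response, magnitude response, and group delay;
# the coefficients in `h` could also be exported and compared
# numerically against EEGLAB's pop_eegfiltnew output.
mne.viz.plot_filter(h, sfreq, flim=(0.01, sfreq / 2.0))
```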

(The ERPLAB folks have argued aggressively against Arnaud’s high-pass filtering suggestions here anyway.)

Interesting. I don’t have time to read the paper right now, but I thought we mimicked the default filter settings in EEGLAB (at least for FIR filters). Can you be more specific about how and where the ERPLAB folks argued against which suggestions?
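
(If it helps, a quick way to check that is to design a filter with all-default parameters and read off the logged design: with verbose output, MNE prints the chosen window, passband edges, transition bandwidths, and filter length, which can be compared against what EEGLAB’s pop_eegfiltnew reports. Sketch, with a made-up sampling rate:)

```python
import mne

# All-default FIR high-pass; verbose=True logs the design details
# (window, passband edges, transition bandwidths, filter length).
h = mne.filter.create_filter(
    data=None, sfreq=250.0, l_freq=1.0, h_freq=None, verbose=True)
print(len(h))  # length is derived from the transition bandwidth
```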