TemporalFilter Pipeline transition bandwidth

There seems to be a difference in processing between mne.filter.filter_data and mne.decoding.TemporalFilter. The transform method of the TemporalFilter object appears to apply the transition bandwidths twice when filtering the data, resulting in larger transition bandwidths when using TemporalFilter than when using filter_data.

The transition bandwidths are modified first on this line, and then again when the data are passed through filter_data shortly thereafter.

Is this a bug, or is this expected behavior? When using the same h_freq value for both methods, the TemporalFilter object raises an error indicating that my transition bandwidth is now higher than the Nyquist frequency. If this is expected and standard signal-processing behavior, why is that the case, and why do these two methods appear to behave differently?

Version Info

  • MNE version: 1.0.3
  • Operating system: macOS 12.4
  • Python version: 3.9.12

Minimal working example

# Imports
import mne
import numpy as np
from sklearn.pipeline import Pipeline
# Data
paths = mne.datasets.eegbci.load_data(subject=1, runs=1)
path = next(iter(paths))
raw = mne.io.read_raw_edf(path, preload=True)
sfreq = raw.info["sfreq"]  # defined before use below (160 Hz for eegbci)
# Epochs & events (3-second event lengths)
ts = np.arange(0, 20) * sfreq * 3
labels = [0, 1] * 10
event_table = np.c_[ts, [0] * 20, labels].astype(np.int64)
epochs = mne.Epochs(raw, events=event_table, tmin=0, tmax=3, baseline=None)
x = epochs.get_data()
y = labels
# Pipeline
pipeline = Pipeline([
    ("filt", mne.decoding.TemporalFilter(l_freq=8, h_freq=60, sfreq=sfreq))
])
# Example: Pipeline
bad_x = pipeline.fit_transform(x, y)
# Example: filter_data
good_x = mne.filter.filter_data(x, l_freq=8, h_freq=60, sfreq=sfreq)

Log when running through the pipeline

FIR filter parameters
---------------------
Designing a one-pass, zero-phase, non-causal bandpass filter:
- Windowed time-domain design (firwin) method
- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
- Lower passband edge: 8.00
- Lower transition bandwidth: 6.00 Hz (-6 dB cutoff frequency: 5.00 Hz)
- Upper passband edge: 60.00 Hz
- Upper transition bandwidth: 75.00 Hz (-6 dB cutoff frequency: 97.50 Hz)

60 Hz + 75 Hz = 135 Hz, which exceeds the Nyquist frequency (the eegbci data are sampled at 160 Hz, so Nyquist is 80 Hz).

Log when running through filter_data

Setting up band-pass filter from 8 - 60 Hz

FIR filter parameters
---------------------
Designing a one-pass, zero-phase, non-causal bandpass filter:
- Windowed time-domain design (firwin) method
- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
- Lower passband edge: 8.00
- Lower transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 7.00 Hz)
- Upper passband edge: 60.00 Hz
- Upper transition bandwidth: 15.00 Hz (-6 dB cutoff frequency: 67.50 Hz)
- Filter length: 265 samples (1.656 sec)

Notice the difference in the transition bandwidths.

I can replicate the discrepancy. You can have a look at https://github.com/mne-tools/mne-python/blob/main/mne/decoding/transformer.py#L829 and see why this happens.

I agree that the two behaviors need to be consistent in their defaults.

Alex

Thanks Alex,

Should this be a bug report, then? I can create an issue on GitHub, but I wanted to check here first in case my understanding of how these two methods should behave is incorrect.

Yes, please do open a GitHub issue about this, and include a cross-reference back to this forum page.

Tagging @larsoner, as this deals with filtering.

:+1:

The issue is now opened here
