We would like to share our experience and hear your opinion about an artifact that appears after applying MaxFilter or MNE's maxwell_filter to data collected with our MEGIN MEG system at the Paris Brain Institute, where we record our data without a high-pass filter.
This artifact appears when the raw data contain particularly strong low-frequency, motion-like artifacts (in the approximate range of 0.2–10 Hz), of the kind we get when the subject has metallic parts on their face. (Note that these would typically make us exclude the participant, were they not particularly rare patients.)
It is characterized by a very strong increase in the broadband noise level at the sensors most strongly affected by the low-frequency artifacts.
Can you explain this? Is there any way to circumvent the problem? Are you facing this too?
Many thanks,
Max for the MEEG team of the Paris Brain Institute
Hmmm… st_duration=raw.times[-1] is quite unusual, assuming your file has a long duration. I'll try with a more typical st_duration=10 and see if it shows the same problem.
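Concretely, the comparison I have in mind is something like this (a minimal sketch; the filename is a placeholder):

```python
import mne

raw = mne.io.read_raw_fif("raw.fif").load_data()  # placeholder filename

# tSSS with a typical 10 s buffer vs. a single whole-recording buffer
raw_tsss_10 = mne.preprocessing.maxwell_filter(raw, st_duration=10.)
raw_tsss_all = mne.preprocessing.maxwell_filter(raw, st_duration=raw.times[-1])
```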
Thanks. It's a long-standing discussion in our group. I was told that early tests with -st 10 in the Neuromag MaxFilter command-line tool inserted jumps in the data, so we always used an arbitrarily large value like 2000, which we changed to raw.times[-1] as we shifted to MNE…
I'd be glad to read your view on that if you have anything to share. Thanks!
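For context, our historical invocation looked roughly like this (a sketch from memory; file names are placeholders, and we passed other site-specific options too):

```
maxfilter -f raw.fif -o raw_tsss.fif -st 2000
```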
I was told that early tests with -st 10 in the Neuromag maxfilter command line tool inserted jumps in the data
Do you have any data you could share that show this problem? I've always suspected it's an issue, and I have a method that fixes it (overlap-add processing), but I haven't published it (and thus haven't implemented it in MNE), mostly on account of not having good cases to showcase the issue…
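To give an idea of what I mean by overlap-add — this is just a generic sketch of the principle, not the actual unpublished method — each window is processed independently, and overlapping windows are cross-faded so that no hard jump remains at the boundaries:

```python
import numpy as np

def overlap_add_process(data, process, win_len, overlap):
    """Apply ``process`` to ``data`` (n_channels, n_times) in overlapping
    windows of ``win_len`` samples, cross-fading each ``overlap``-sample
    region so no discontinuity remains at window edges."""
    n_times = data.shape[1]
    step = win_len - overlap
    # Linear cross-fade ramps, kept strictly positive so the final
    # normalization never divides by zero
    win = np.ones(win_len)
    win[:overlap] = np.linspace(0., 1., overlap + 1)[1:]
    win[-overlap:] = np.linspace(1., 0., overlap + 1)[:-1]
    out = np.zeros(data.shape)
    weight = np.zeros(n_times)
    for start in range(0, n_times - overlap, step):
        stop = min(start + win_len, n_times)
        w = win[: stop - start]
        out[:, start:stop] += process(data[:, start:stop]) * w
        weight[start:stop] += w
    return out / weight  # every sample has positive accumulated weight

# Usage (hypothetical): per_window_tsss would wrap a single-buffer tSSS call
# out = overlap_add_process(raw.get_data(), per_window_tsss,
#                           win_len=10000, overlap=1000)
```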
Argh, there was a bug in my last script that prevented showing the st_only=True case, where you can see some broadband stuff with both st_duration=raw.times[-1] and st_duration=10., so that's not the culprit. Were you talking about the st_only=True case? Because for st_only=False (the default) I don't see a huge difference, at least:
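For reference, the comparison boils down to this (a sketch, assuming a recent MNE with Raw.compute_psd; the filename is a placeholder):

```python
import mne

raw = mne.io.read_raw_fif("raw.fif").load_data()  # placeholder filename
for st_duration in (10., raw.times[-1]):
    # st_only=True applies only the temporal (tSSS) projection,
    # skipping the spatial SSS reconstruction
    tsss = mne.preprocessing.maxwell_filter(
        raw, st_duration=st_duration, st_only=True)
    tsss.compute_psd(fmax=100.).plot()
```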
Have you tried computing SSP vectors on data that is epoched around the artifacts? I've had success in eliminating BCG-type artifacts using this procedure. The key is to make sure to pick "prototypical" artifacts aligned in time.
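Something along these lines (a sketch: artifact_events is hypothetical and would come from your own detection of prototypical, time-aligned artifact onsets; the filename is a placeholder):

```python
import mne

raw = mne.io.read_raw_fif("raw.fif").load_data()  # placeholder filename
# artifact_events: an (n_events, 3) array marking prototypical artifact
# onsets, e.g. from peak picking on the worst-affected channel
epochs = mne.Epochs(raw, artifact_events, tmin=-0.5, tmax=0.5,
                    baseline=None, preload=True)
projs = mne.compute_proj_epochs(epochs, n_grad=1, n_mag=1)
raw.add_proj(projs).apply_proj()
```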
We're looking to see if we can find some data to illustrate the case.
In the meantime, I realized the following:
We typically record our data without any high-pass filtering (when migrating from a CTF system back in 2010, our local engineers changed that default setting, which seemed unnecessary to them).
An artifact consisting of step-like deviations on all (?) channels was observed regularly (we did some email archaeology today and found a discussion between Samu Taulu and our local staff on the topic; unfortunately, no pictures or data so far…). The conclusion was, quoting Samu Taulu:
One more thing about the tSSS window length: Ultimately, it might be best to use just a single tSSS window, the length of which would be the length of the whole recording. This would remove the problem of discontinuities.
The local practice of using -st 2000 was established then.
Do you think it is possible that the artifact I mentioned at the beginning of this thread was due to the conjunction of two factors: recording very strong low-frequency amplitudes, and using a whole-recording buffer length?