Strange Artifact with Neuromag MEG data

Hi all,

I came across a weird artifact in my MEG data (Neuromag Vectorview). Not sure how to describe it or what is going on. So, here is a screenshot of what it looks like:

The data is unprocessed at that stage. The artifact is only sometimes present in the file, and also only over some sensors. There is no error or warning that something is fishy. The raw.info also looks fine (superficially). Running some preprocessing (filter + SSS) also seems to work OK (no issues in the log file); however, when trying to save the raw file afterwards, I get the following error message:

---------------------------------------------------------------------------
ValueError                Traceback (most recent call last)
Input In [4], in <cell line: 1>()
----> 1 raw_proc.save(deriv_stub % 'filt-raw_sss.fif', overwrite=True, fmt='single', split_naming='bids')
File <decorator-gen-213>:12, in save(self, fname, picks, tmin, tmax, buffer_size_sec, drop_small_buffer, proj, fmt, overwrite, split_size, split_naming, verbose)
File /gpfs/project/projects/bpsydm/tools/pyEnvs/meg/lib/python3.9/site-packages/mne/io/base.py:1489, in BaseRaw.save(self, fname, picks, tmin, tmax, buffer_size_sec, drop_small_buffer, proj, fmt, overwrite, split_size, split_naming, verbose)
  1487 _validate_type(split_naming, str, 'split_naming')
  1488 _check_option('split_naming', split_naming, ('neuromag', 'bids'))
-> 1489 _write_raw(fname, self, info, picks, fmt, data_type, reset_range,
  1490      start, stop, buffer_size, projector, drop_small_buffer,
  1491      split_size, split_naming, 0, None, overwrite)
File /gpfs/project/projects/bpsydm/tools/pyEnvs/meg/lib/python3.9/site-packages/mne/io/base.py:2224, in _write_raw(fname, raw, info, picks, fmt, data_type, reset_range, start, stop, buffer_size, projector, drop_small_buffer, split_size, split_naming, part_idx, prev_fname, overwrite)
  2222 picks = _picks_to_idx(info, picks, 'all', ())
  2223 with start_and_end_file(use_fname) as fid:
-> 2224   cals = _start_writing_raw(fid, info, picks, data_type,
  2225                reset_range, raw.annotations)
  2226   with ctx:
  2227     final_fname = _write_raw_fid(
  2228       raw, info, picks, fid, cals, part_idx, start, stop,
  2229       buffer_size, prev_fname, split_size, use_fname,
  (...)
  2232       overwrite=True # we've started writing already above
  2233     )
File /gpfs/project/projects/bpsydm/tools/pyEnvs/meg/lib/python3.9/site-packages/mne/io/base.py:2427, in _start_writing_raw(fid, info, sel, data_type, reset_range, annotations)
  2424     info['chs'][k]['range'] = 1.0
  2425   cals.append(info['chs'][k]['cal'] * info['chs'][k]['range'])
-> 2427 write_meas_info(fid, info, data_type=data_type, reset_range=reset_range)
  2429 #
  2430 # Annotations
  2431 #
  2432 if len(annotations) > 0: # don't save empty annot
File /gpfs/project/projects/bpsydm/tools/pyEnvs/meg/lib/python3.9/site-packages/mne/io/meas_info.py:2146, in write_meas_info(fid, info, data_type, reset_range)
  2144   write_int(fid, FIFF.FIFF_SUBJ_HAND, si['hand'])
  2145 if si.get('weight') is not None:
-> 2146   write_float(fid, FIFF.FIFF_SUBJ_WEIGHT, si['weight'])
  2147 if si.get('height') is not None:
  2148   write_float(fid, FIFF.FIFF_SUBJ_HEIGHT, si['height'])
File /gpfs/project/projects/bpsydm/tools/pyEnvs/meg/lib/python3.9/site-packages/mne/io/write.py:96, in write_float(fid, kind, data)
   94 """Write a single-precision floating point tag to a fif file."""
   95 data_size = 4
---> 96 data = np.array(data, dtype='>f4').T
   97 _write(fid, data, kind, data_size, FIFF.FIFFT_FLOAT, '>f4')
ValueError: could not convert string to float: 'n/a'
  • MNE version: 1.0.2

Has anyone any idea what might be going on here?

Thanks,
Eduard

Hello,

I don’t know about the artifact, but the error during writing is apparently due to an invalid value in raw.info['subject_info']['weight'], which is currently set to 'n/a', but must be a float (or, I suppose, None if not available):
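As a minimal illustration of why writing fails (a hedged sketch; in MNE the cast happens inside the FIF writer, which converts the value to a single-precision float):

```python
# The FIF writer ultimately casts subject_info values like 'weight' to
# single-precision floats; the string 'n/a' cannot be converted.
value = "n/a"  # what ended up in raw.info['subject_info']['weight']
try:
    float(value)
except ValueError as err:
    print(err)  # → could not convert string to float: 'n/a'
```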

The weight key is not described in our documentation, though:
https://mne.tools/dev/generated/mne.Info.html

So this should probably be added and clarified…

Best wishes,
Richard

Edit: I believe that if you don’t have a subject weight, you should simply delete the respective key from info['subject_info'].
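A pure-Python sketch of that cleanup (the helper name and the dict literal are mine, for illustration; in practice you would apply this to raw.info['subject_info'] before saving):

```python
def sanitize_subject_info(subject_info):
    """Drop 'weight'/'height' entries that are not numeric (e.g. the string 'n/a')."""
    cleaned = dict(subject_info)
    for key in ("weight", "height"):
        value = cleaned.get(key)
        if value is not None and not isinstance(value, (int, float)):
            cleaned.pop(key)  # invalid, e.g. 'n/a' read from participants.tsv
    return cleaned

# Illustrative subject_info-like dict, not taken from the actual file
info_like = {"his_id": "sub-01", "weight": "n/a", "height": 180.0}
print(sanitize_subject_info(info_like))  # → {'his_id': 'sub-01', 'height': 180.0}
```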


Hi Eduard,
We also work with Neuromag MEG data, and we have seen a similar kind of issue with data processed through MaxFilter to do movement compensation. If there are time gaps without an accurate cHPI (continuous head position indicator) signal, and you’ve chosen -movecomp without the “inter” option, the data can be written as a “null” character instead of a numerical value based on the last known head position. We’ve seen many kinds of MEG processing software handle this differently; some generate weird artifacts like what you see.

However, you mention that your data is unprocessed, so that may not be the case! If your data has had movement compensation applied, you could quickly rule out this issue by reviewing the head position data or the MaxFilter logs. It just came to mind as it’s visually very similar.

Good luck,
Megan


Hi Megan,

Thanks for your input! Indeed, the data really are unprocessed; we have not even recorded cHPI, so that is probably not the reason for this artifact. But it sounds possible that running tSSS (without movement compensation) could cause the issue that leads to the error when trying to save the file, in line with @richard’s comment.

Thanks to both of you. I will try to find out what caused the issue with that recording, or whether there are ways to fix/ignore it. (That being said, anyone should feel free to pitch ideas :slight_smile: )

Eduard

Are you saying that a read-write roundtrip works if you don’t process the data; but once you apply tSSS, saving fails because of the invalid value in info?

Could you confirm that you did not, in fact, manually touch info?

Thanks!

Could you confirm that you did not, in fact, manually touch info?

Yes, that I can do. I definitely haven’t touched the info.

Are you saying that a read-write roundtrip works if you don’t process the data; but once you apply tSSS, saving fails because of the invalid value in info?

That is what I thought, based on the log file of my preprocessing script. Everything seemed to have worked fine, but once the solution of the tSSS was to be saved, the script crashed with that error message. However, at the moment I am failing to reproduce it.
I just tried the following (reproducing the pipeline):

  1. read/write the raw file → works
  2. read the raw file, run tSSS, save the file → works
  3. read the raw file, high-pass filter, run tSSS, save the file → works

The only thing missing from the pipeline now would be ZapLine, but I can’t run that locally, as it is quite memory-hungry. If that’s not it, I will try to run the pipeline interactively. I’ll try that tomorrow on our HPC.

Another thing I noticed when plotting the PSD of the data is that the frequency range is larger than usual. At a sampling rate of 1000 Hz, our default online filter setting is a 330 Hz low-pass, but the PSD goes up to 500 Hz for this recording. So I was wondering whether this could be related to aliasing? Applying an offline filter didn’t solve it, though…
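For intuition: when a signal contains content above the Nyquist frequency (half the sampling rate), that content folds back into the observable range when sampled. A stdlib-only sketch with made-up frequencies:

```python
def alias_frequency(f, sfreq):
    """Apparent frequency (Hz) of a tone at f Hz after sampling at sfreq Hz."""
    f = f % sfreq
    return sfreq - f if f > sfreq / 2 else f

sfreq = 1000.0  # sampling rate (Hz); Nyquist is 500 Hz
print(alias_frequency(700.0, sfreq))  # → 300.0 (a 700 Hz tone folds to 300 Hz)
print(alias_frequency(300.0, sfreq))  # → 300.0 (below Nyquist, unchanged)
```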

I’ll keep you posted regarding the saving error!

Thanks,
Eduard

Mini update on that issue.

The saving issue is not due to tSSS or any other noise in the data. Instead, it seems to be related to the BIDS conversion of the data. The weight field in subject_info that raw.save is complaining about originates in the participants.tsv that MNE-BIDS read during the conversion and put into raw.info. Could it be that raw.save can’t handle custom fields in subject_info?
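BIDS marks missing values in participants.tsv with the literal string 'n/a', so a reader has to map that to a proper missing value before it lands in raw.info. A sketch of that parsing step (illustrative, not the actual MNE-BIDS code):

```python
import csv
import io

# Minimal participants.tsv; BIDS uses the string 'n/a' for missing values
tsv = "participant_id\tweight\theight\nsub-01\tn/a\t180\n"

def read_participants(text):
    """Parse a participants.tsv, mapping 'n/a' to None and numbers to float."""
    rows = []
    for row in csv.DictReader(io.StringIO(text), delimiter="\t"):
        parsed = {}
        for key, value in row.items():
            if value == "n/a":
                parsed[key] = None       # missing, not the literal string 'n/a'
            else:
                try:
                    parsed[key] = float(value)
                except ValueError:
                    parsed[key] = value  # keep non-numeric strings as-is
        rows.append(parsed)
    return rows

print(read_participants(tsv))
# → [{'participant_id': 'sub-01', 'weight': None, 'height': 180.0}]
```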


Thanks, that is obviously a bug in MNE-BIDS then, storing invalid data in info upon reading. Can you create an issue on the MNE-BIDS GitHub repository?

Great, the read/write problem was fixed here (the reason being participants.tsv containing None values). Thanks!

The issue with the artifacts remains, though (it is unrelated to the bug). So if anyone has some ideas, I’d be happy to hear them!

Eduard


In your original image, I count 20 y-axis labels (channels) and 20 normal-looking traces; the straight-line rays seem to be in addition to the normal traces. Can you try plotting this file after running mne.set_browser_backend('matplotlib')? That might narrow down whether the problem is in the file or in the plotting code.


You’re absolutely right @drammock.

Plotting with the matplotlib backend looks fine.

So, it seems to be a plotting artifact, and the data should be fine, right?

Do you need the file to find what is causing the issues with the plotting?

Thanks,
Eduard


@marsipu @larsoner There seems to be some issue with the qt-browser :face_with_spiral_eyes:

Yes, having the file would help.

Can you open an issue on GitHub for the mne-qt-browser repository?


Since there always seem to be pairs of lines, my first suspicion would be that there are somehow faulty x/y (s/fT) pairs in the time/data arrays, like NaN/NaN, which the Qt backend doesn’t handle well.
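One quick way to test that suspicion is to scan the arrays for NaN samples. A stdlib-only sketch with toy values (with real data, something like np.isnan(raw.get_data()).any() would do the same check):

```python
import math

def nan_samples(values):
    """Return the indices of NaN samples in one channel's data."""
    return [i for i, v in enumerate(values)
            if isinstance(v, float) and math.isnan(v)]

# Toy single-channel trace with one NaN sample of the suspected kind
data = [1.2e-13, float("nan"), 3.0e-13, 2.1e-13]
print(nan_samples(data))  # → [1]
```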


The issue is now being tracked at [BUG] additional lines drawn when plotting raw data · Issue #136 · mne-tools/mne-qt-browser · GitHub

For the record, the plotting problem was present only for MNE 0.24. Updating to 1.x solved the issue.
