Hello mne-python community,
I have a problem with source reconstruction using the LCMV method. Here are the details of my situation:
Problem: results of source inversion using the LCMV method are really far from ground truth data (simulations).
- Head model is the fsaverage template and ico3 subsampling of the source space, with a fixed orientation (and default BEM model proposed in mne-python).
- "raw" data, one trial of 256 time points, sampling frequency = 256 Hz (duration = 1 s).
- between 25 and 98 time points of active signal per trial, the rest being noise (158 to 231 samples).
- The source spatial pattern is a random patch of neighboring sources activated with the same activation signal (a Gaussian in time), whose amplitude is modulated by a Gaussian in space (meaning sources far away from a seed source have a smaller amplitude than sources close to it).
- Noise is added to the EEG data to reach an SNR of 20 dB (white noise).
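For reference, the simulation boils down to something like this (a simplified numpy sketch; the time/space Gaussian widths, the number of patch sources, and the leadfield here are illustrative stand-ins, not my actual values, and the real code picks the seed and its neighbors from the source space):

```python
import numpy as np

fs = 256
n_times = 256
times = np.arange(n_times) / fs

# Gaussian activation in time, centered in the active window (illustrative values)
t0, sigma_t = 0.24, 0.02  # seconds
activation = np.exp(-(times - t0) ** 2 / (2 * sigma_t ** 2))

# Gaussian amplitude modulation in space around a seed source
dists = np.linspace(0, 0.03, 10)   # distances of 10 patch sources to the seed (m)
sigma_s = 0.01
amplitudes = np.exp(-dists ** 2 / (2 * sigma_s ** 2))

# source activity: same waveform everywhere, amplitude decays with distance
src_activity = amplitudes[:, None] * activation[None, :]   # [n_sources, n_times]

# project to the 61 sensors (random leadfield just for the sketch) and add
# white noise scaled to a 20 dB SNR
rng = np.random.default_rng(0)
L = rng.standard_normal((61, 10))
eeg_clean = L @ src_activity
snr_db = 20
noise = rng.standard_normal(eeg_clean.shape)
noise *= np.linalg.norm(eeg_clean) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
eeg = eeg_clean + noise   # [61, 256], like my simulated data
```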
I will upload an image of an example of butterfly plot of my obtained EEG data.
Then I want to compare the results of different inverse solution algorithms, sLORETA and LCMV for example.
For that I compute noise and data covariance:
→ the noise covariance computation seems to be OK even though I use fewer samples than required (RuntimeWarning: Too few samples (required : 310 got : 233), covariance estimate may be unreliable). The rank is 60 and data whitening using this covariance matrix looks fine.
→ the data covariance matrix has a low rank (24 in the example) and is computed with only the 20 to 90 active points of the same trial of data (so the points are not independent…). I get this warning: RuntimeWarning: Too few samples (required : 310 got : 23), covariance estimate may be unreliable
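One thing I tried to reason about is diagonal loading / shrinkage, i.e. replacing C by (1 − α)·C + α·μ·I with μ = trace(C)/p, which (as far as I understand) is roughly what `reg=0.05` does inside `make_lcmv`. A small numpy sketch, with random data standing in for my trial, just to illustrate how it repairs the rank:

```python
import numpy as np

def shrink_cov(C, alpha=0.05):
    """Diagonal loading: blend C with a scaled identity to make it full rank."""
    p = C.shape[0]
    mu = np.trace(C) / p                     # average sensor variance
    return (1 - alpha) * C + alpha * mu * np.eye(p)

rng = np.random.default_rng(0)
X = rng.standard_normal((61, 23))            # 61 channels, only 23 "active" samples
C = X @ X.T / X.shape[1]                     # sample covariance, rank at most 23
C_reg = shrink_cov(C, alpha=0.05)
print(np.linalg.matrix_rank(C), np.linalg.matrix_rank(C_reg))   # → 23 61
```

This makes the covariance invertible, but of course it does not add any real information about the missing dimensions.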
Then, applying the source localization algorithms (code below), the sLORETA example seems plausible w.r.t. my ground truth, but the result of the LCMV algorithm is really far from my ground truth data.
So my questions are:
- Is it possible that the LCMV results are poor due to a poor estimation of the data covariance matrix?
- Is there a way to get a better covariance matrix, or am I doomed to these results with this kind of data (1 trial, not many points of active signal)?
- Does working on a single trial of data make any sense in real-life applications?
I hope I am clear enough. Thank you for your help!
Below are the main lines of my processing code:
- MNE version: 1.2.1
- operating system: Ubuntu 20.04.5
""" - eeg and src are my raw numpy array of simulated data (of respective dimensions [61, 256] and [1284, 256]) - fwd is the ForwardModel mne-python object also used for data simulation - mne_info is the adapted info object for my eeg data """ raw_eeg = mne.io.RawArray( eeg, mne_info, verbose=None) raw_eeg.set_eeg_reference(verbose=False) e_max, t_max = evoked_eeg.get_peak(time_as_index=True) e_max = int(e_max) amp_max = evoked_eeg.get_data()[e_max, t_max] all_points = np.arange( 0, eeg.shape, 1) active_points = np.where( evoked_eeg.get_data()[e_max, :] > 0.05*amp_max ).squeeze() noise_points = np.delete(all_points, active_points) n_active_points = len(active_points) data_signal_raw = mne.io.RawArray( eeg[:, active_points], mne_info, verbose=None ) data_signal_raw.set_eeg_reference(verbose=False) # gives the warning: RuntimeWarning: Too few samples (required : 310 got : 23), covariance estimate may be unreliable noise_signal_raw = mne.io.RawArray( eeg[:, noise_points], mne_info, verbose=None ) # gives the warning: RuntimeWarning: Too few samples (required : 310 got : 233), covariance estimate may be unreliable noise_signal_raw.set_eeg_reference(verbose=False) # noise and data covariance computation: noise_cov = mne.compute_raw_covariance( noise_signal_raw, tmin = 0, tmax=None, tstep = 1/fs, method='auto', rank=None ) data_cov = mne.compute_raw_covariance( data_signal_raw, tmin = 0, tmax=None, tstep = 1/fs, method='auto', rank=None ) # inverse solution computations method = "sLORETA" snr = 20. lambda2 = 1. 
/ snr ** 2 raw_eeg.set_eeg_reference(projection=True, verbose=False) inv_op = mne.minimum_norm.make_inverse_operator( raw_eeg.info, fwd, noise_cov, loose=0, depth=None) stc_slo = mne.minimum_norm.apply_inverse_raw(raw_eeg, inv_op, lambda2, method=method, pick_ori=None, verbose=True) # LCMV filters = mne.beamformer.make_lcmv(raw_eeg.info, fwd, data_cov, reg=0.05, noise_cov=None, reduce_rank=False) #noise_cov=noise_cov stc_lcmv = mne.beamformer.apply_lcmv_raw(raw_eeg, filters)
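For context, this is my understanding of the unit-gain LCMV weights for fixed-orientation sources, which is why the quality of the data covariance matters so much here. A numpy-only sketch with a random leadfield and random data, just for illustration (not MNE's actual implementation, which also handles whitening and weight normalization):

```python
import numpy as np

def lcmv_weights(C, L, reg=0.05):
    """Unit-gain LCMV weights for fixed-orientation sources.

    C : (n_ch, n_ch) data covariance, L : (n_ch, n_src) leadfield.
    Diagonal loading first, then w_i = C^-1 l_i / (l_i^T C^-1 l_i).
    """
    n_ch = C.shape[0]
    C_reg = C + reg * (np.trace(C) / n_ch) * np.eye(n_ch)
    Ci = np.linalg.inv(C_reg)
    CiL = Ci @ L                           # (n_ch, n_src)
    denom = np.sum(L * CiL, axis=0)        # l_i^T C^-1 l_i for each source
    return CiL / denom                     # columns are the weights w_i

rng = np.random.default_rng(0)
L = rng.standard_normal((61, 100))         # random leadfield, 100 sources
X = rng.standard_normal((61, 23))          # 23 "active" samples, like my data
C = X @ X.T / X.shape[1]
W = lcmv_weights(C, L)
# unit-gain constraint: w_i^T l_i = 1 for each source
print(np.allclose(np.sum(W * L, axis=0), 1.0))   # → True
```

Since the weights depend on C⁻¹, my guess is that a rank-24 covariance estimated from 23 dependent samples distorts them badly, whereas sLORETA only needs the noise covariance.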
P.S.: I know my data is very unrealistic and simple.