- MNE version: 0.24.1
- operating system: Windows 10
Hey all,
First-time poster here, so sorry if my question has already been dealt with; I had a search but didn't see anything.
I am essentially trying to run a linear regression on a series of source estimates calculated from EEG using the template MRI. The problem I am having is that the regression results in every voxel at every time point being significant. This happens both with data that should show some correlations and with data that should show none. Here is what I have done:
I have a lot of epochs, so I recreated the BEM and source space at a coarser resolution to reduce the number of 'voxels':
import numpy as np
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse_epochs
from mne.stats import linear_regression

subject = 'fsaverage'
trans = 'fsaverage'  # built-in fsaverage transformation
bem_model = mne.make_bem_model(subject=subject, ico=4, subjects_dir=subjects_dir)
src = mne.setup_source_space(subject, spacing='ico4', surface='white',
                             subjects_dir=subjects_dir, add_dist=True,
                             n_jobs=1, verbose=None)
bem = mne.make_bem_solution(bem_model)
I then imported the data from FIFF (it had already been cleaned etc. in FieldTrip and converted over) and ran through the basic forward and inverse modelling:
# Add montage from the built-in file
montage = mne.channels.make_standard_montage('GSN-HydroCel-256')
ch_names = montage.ch_names
ch_types = ['eeg'] * 256
sfreq = 250
info = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)

# Create epochs (epochs_data and events come from the FieldTrip export)
epochs = mne.EpochsArray(epochs_data, info=info, events=events, tmin=-0.096,
                         event_id={'arbitrary': 1})
epochs.ref_channels = 'average'
epochs.baseline = (-.096, 0)

# Add average-reference projection
epochs.set_eeg_reference(projection=True)
epochs.apply_proj()
picks = mne.pick_types(info, meg=False, eeg=True, misc=False)

# Downsample and crop times
epochs.resample(50, npad='auto')  # downsample because of the amount of data
epochs.crop(None, .3)
epochs.set_montage(montage)
# Calculate forward model
fwd = mne.make_forward_solution(epochs.info, trans='fsaverage', src=src,
                                bem=bem, meg=False, eeg=True,
                                mindist=5.0, n_jobs=1)
# Inverse operator
inverse_operator = make_inverse_operator(epochs.info, fwd, noise_cov,
                                         loose='auto', depth=5.0)

# Perform inversion
method = "MNE"  # also tried dSPM, sLORETA
snr = 3.  # standard value
lambda2 = 1. / snr ** 2  # standard value
# Returns a list of SourceEstimates, one per epoch
stc = apply_inverse_epochs(epochs, inverse_operator, lambda2, method=method,
                           pick_ori=None, verbose=True)
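(One thing not shown above is where noise_cov comes from; as a minimal sketch, assuming a covariance estimated from the pre-stimulus baseline is what is wanted, it would look something like this:)
# Sketch only: estimate the noise covariance from the baseline of the epochs
noise_cov = mne.compute_covariance(epochs, tmax=0.0, method='auto', rank=None)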
Then I create a design matrix per subject and concatenate the STC lists across subjects, so that I end up with a list of around 15k STCs. The design is basically three column vectors (the 1st tracks subject, the 2nd and 3rd are prediction-error measures).
if sub_counter == 1:
    design_store = design
    stc_store = stc
else:
    design_store = np.append(design_store, design, axis=0)
    stc_store = stc_store + stc  # concatenate the lists of SourceEstimates
sub_counter += 1
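(For concreteness, the per-subject design is just a NumPy array with one row per epoch; a rough sketch, where pred_error_1 and pred_error_2 are placeholder names for the two regressors:)
# Sketch of the per-subject design matrix: one row per epoch, three columns
# (pred_error_1 / pred_error_2 are placeholder names, length n_epochs each)
n_epochs = len(stc)
design = np.column_stack([
    np.full(n_epochs, sub_counter),  # column 1: subject identifier
    pred_error_1,                    # column 2: first prediction-error measure
    pred_error_2,                    # column 3: second prediction-error measure
])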
And finally I run the regression:
LR_Results = linear_regression(stc_store, design_store)
But as I said at the top, this results in all vertices (5124 of them, given the ico4 source space) at all time points (21, after downsampling) being significant, whether the input is 'real' or essentially random data.
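(By 'significant' I mean the per-vertex p-values in the returned stats; a minimal sketch of the kind of check I mean, where 'x1' stands in for whichever name one of the regressor columns ends up with:)
# LR_Results is a dict keyed by predictor name; each entry has .beta, .t_val,
# .p_val, etc. stored as SourceEstimates ('x1' is a placeholder name here)
p_stc = LR_Results['x1'].p_val
print((p_stc.data < 0.05).mean())  # fraction of vertex/time points below .05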
I guess I am wondering if I am doing anything spectacularly wrong here? I have also run the analysis per subject, taken the beta values, and run a one-sample t-test across subjects, and I get some 'reasonable' results. In that case, if I use cluster-based correction (mne.stats.spatio_temporal_cluster_1samp_test) I get around 5 clusters that are significant throughout the whole time frame (which doesn't seem plausible), but if I use FDR correction instead, the results look more reasonable. However, I read somewhere that unless you are actually comparing two conditions, i.e. subtracting one condition from the other per subject, this approach has some issues…
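(By the FDR route I mean something like the sketch below, where betas is an assumed variable name for the per-subject beta maps stacked as an array of shape (n_subjects, n_vertices, n_times):)
# Sketch of the second-level test: one-sample t-test across subjects on the
# per-subject betas, then FDR correction (betas is an assumed variable name)
from scipy import stats
from mne.stats import fdr_correction

t_vals, p_vals = stats.ttest_1samp(betas, popmean=0, axis=0)
reject, p_fdr = fdr_correction(p_vals, alpha=0.05)
print(reject.sum(), 'vertex/time points survive FDR')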
Anyway, sorry for the long post. Hopefully there is the right information in there, and thanks to anyone who can shed some light on this for me!
Cheers,
Jordan