Hello everyone,
I use the dSPM method to compute source estimates and want to standardize the source data using a z-score. I do the following calculation:
1. Extract the mean (u) and standard deviation (g) of each voxel over the baseline segment.
2. Compute (x - u) / g, where x is the NumPy array of the source estimate data.
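The two steps above can be sketched on a synthetic array (the shape and names here are illustrative, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=3.0, size=(10, 100))  # 10 voxels x 100 time samples
baseline = data[:, :40]  # pretend the first 40 samples are the prestim segment

mu = baseline.mean(axis=1, keepdims=True)     # per-voxel baseline mean
sigma = baseline.std(axis=1, keepdims=True)   # per-voxel baseline std

# z-score the whole time course against the baseline statistics
z = (data - mu) / sigma
```

After this, the baseline portion of `z` has mean ~0 and std ~1 in every voxel, which is a quick way to sanity-check the computation.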
What exactly is the question here?
@yuyu - the dSPM is already a statistic (with anatomically aligned source orientations it's a t-statistic, and with free xyz orientations at each vertex it's an F-statistic).
Handwavy math: it's the mean projected signal divided by the projected noise standard deviation. If you estimate your noise covariance from the prestim window, then it's already essentially what you want. And after a certain number of trials, the central limit theorem makes a t-statistic almost equivalent to the normal distribution.
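On that last point, a quick check of how fast the t-distribution converges to the normal (this uses scipy, which is assumed here, not mentioned in the thread):

```python
from scipy import stats

# two-sided 5% critical value for increasing degrees of freedom,
# compared against the normal distribution's 1.96
for df in (5, 30, 100):
    print(df, stats.t.ppf(0.975, df))
print("normal", stats.norm.ppf(0.975))
```

By ~100 trials the t and normal cutoffs differ by only a few hundredths, which is what makes the dSPM values roughly interpretable as z-scores.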
You probably already have what you need with the dSPM (assuming the math details above are true)
-Jeff
Thanks, my question is whether my calculation is correct. Some papers use z-scores to show source results. In Brainstorm, I found the Z-score transformation of brain maps in the tutorial (Tutorials/SourceEstimation - Brainstorm). So I want to achieve this in MNE.
@yuyu - I think from the Brainstorm site it appears they are doing the z-score on the minimum norm estimate, not the dSPM. These are similar, but dSPM is a statistic and MN is not. About the output they get, the tutorial states: "We can appreciate that the standardized map is now qualitatively similar to dSPM and sLORETA versions."
Anyways - I think this is how you would do it in Python:
import numpy as np

baseline = (-0.5, 0)  # <<-- your baseline times (s)
# use >= and <= so the baseline endpoints are included
time_idxs = np.where((stc.times >= baseline[0]) & (stc.times <= baseline[1]))[0]

# per-vertex baseline mean and standard deviation
mu = stc.data[:, time_idxs].mean(axis=1)
std = stc.data[:, time_idxs].std(axis=1)

# z-score every time point against the baseline statistics
z_stc = stc.copy()
z_stc.data = (stc.data - mu[:, np.newaxis]) / std[:, np.newaxis]
–Jeff