- MNE version: 1.8.0
- Operating system: Windows 11
Hi all,
I’m having issues with my TFR plots: the percentage change of spectral power relative to baseline is wildly variable, despite my data going through rigorous cleaning and despite changing my baseline a couple of times. I have tried converting to log (dB) instead, but that isn’t what I want in the end.
My aim is to get uniform TFRs whose percentage change stays within roughly ±50% across conditions.
I’m working with data from an experiment in which people walk back and forth while navigating an obstacle under various lighting conditions.
I have split the epoch for each “trial” (walk) into preparation, crossing and reset segments:
epoch = np.hstack((preparation, cross, reset))
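For clarity, the three segments are concatenated along the time axis; here is a minimal sketch with hypothetical shapes:

import numpy as np

# Hypothetical shapes: each segment is (n_channels, n_samples)
preparation = np.random.randn(32, 500)
cross = np.random.randn(32, 800)
reset = np.random.randn(32, 500)
epoch = np.hstack((preparation, cross, reset))  # -> (32, 1800), time axes joined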
This is important because the baseline for the TFR plots is a median taken over the preparation and reset periods. Here is how the TFR is computed and the baseline prepared.
Please pay particular attention to the line corrected_data = 100 * (10 ** log_corrected - 1), as this is how I’m trying to calculate percentage change.
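My reasoning for that line, assuming log_data holds the log10 of power: subtracting the log baseline gives the log of the power ratio, so this should recover an ordinary percent change. A minimal numeric check:

import numpy as np

power, baseline = 2.0, 1.6                         # hypothetical power values
log_corrected = np.log10(power) - np.log10(baseline)
pct = 100 * (10 ** log_corrected - 1)              # == 100 * (power - baseline) / baseline
print(pct)                                         # 25.0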
# Compute TFR
tfr = epochs.compute_tfr(
    freqs=freqs,
    n_cycles=n_cycles,
    use_fft=True,
    return_itc=False,
    method='morlet',
    picks=picks,
    decim=1,
    n_jobs=1
)
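(For context, freqs, n_cycles and log_data are set up earlier, roughly as below; the exact n_cycles choice and the channel indexing here are placeholders, not necessarily my exact values:)

import numpy as np

freqs = np.linspace(3.0, 35.0, 33)           # 33 is a placeholder; matches the plotting axis below
n_cycles = freqs / 2.0                       # a common Morlet choice; mine may differ
# log_data: log10 power of the single picked channel, per epoch
log_data = np.log10(tfr.data[:, 0, :, :])    # (n_epochs, n_freqs, n_times)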
# Loop over the number of epochs
for i in range(n_epochs):
    cs_time = cs_offsets[i] / sfreq    # crossing start, in seconds
    ce_time = ce_offsets[i] / sfreq    # crossing end, in seconds
    prep_mask = (times >= 0) & (times < cs_time)
    reset_mask = (times >= ce_time) & (
        times < ce_time + config.post_crossing_sec)
    # Baseline is the concatenation of prep and reset
    baseline_mask = prep_mask | reset_mask
    baseline_power = log_data[i][:, baseline_mask]
    trial_baseline = baseline_power.mean(axis=1)   # mean log power per frequency
    all_baselines.append(trial_baseline)
    kept_indices.append(i)

baseline_matrix = np.stack(all_baselines)          # (n_epochs, n_freqs)
median_baseline_spectrum = np.median(
    baseline_matrix, axis=0, keepdims=True).T      # (n_freqs, 1)
log_corrected = log_data[kept_indices] - median_baseline_spectrum
# PLEASE LOOK AT THIS LINE vv
corrected_data = 100 * (10 ** log_corrected - 1)
tfr.data = corrected_data[:, np.newaxis, :, :]     # reinsert the channel axis
# Debug percent change
print("Min percent change:", tfr.data.min())
print("Max percent change:", tfr.data.max())
tfr_array = tfr.data
np.save('<some path>', tfr_array)
# ...
These are the results I’m receiving from the min/max percent-change prints. As you can see, they’re wild.
Min percent change: -99.99995967740442
Max percent change: 5204.678869643241
Min percent change: -99.99994584982952
Max percent change: 2828.397356550047
Min percent change: -99.99993270588809
Max percent change: 4052.744467446919
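Translating those extremes back into raw power ratios (simple arithmetic on the numbers above):

import numpy as np

pct = np.array([-99.99995967740442, 5204.678869643241])  # min/max from the first run
ratio = 1 + pct / 100                 # back to power / baseline
print(np.log10(ratio))                # ~ [-6.39, 1.72]: about 8 orders of magnitude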
I then loop over these TFRs, average them per condition, and plot those averages.
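For context, this is roughly how I collect the saved arrays per condition (the paths and the epoch-averaging step are paraphrased, not verbatim):

import numpy as np

all_tfrs_in_condition = [
    np.load(path).mean(axis=0)[0]    # average epochs, drop channel axis -> (n_freqs, n_times)
    for path in condition_paths      # hypothetical list of the saved .npy files
]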
# Gets the mean of all tfrs within a condition
mean_tfr = np.mean(all_tfrs_in_condition, axis=0)
# ...
freqs = np.linspace(3.0, 35.0, n_freqs)
# Create time axis in milliseconds
times_ms = np.linspace(-prep_sec * 1000, total_sec * 1000, n_times)
# Define extent for imshow
extent = [times_ms[0], times_ms[-1], freqs[0], freqs[-1]]
# ...
plt.imshow(mean_tfr, aspect='auto', origin='lower',
           extent=extent, cmap='Spectral_r')
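I finish the figure with the usual annotations (sketch):

plt.colorbar(label='Power change from baseline (%)')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.show()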
I would like to know where I am going wrong here. I have ICA, ICLabel and autoreject in my preprocessing pipeline, so I think my data is fairly clean up to this point.
Your help is immensely appreciated and valued. Thank you for reading.