Can I reconstruct the input signal from the output of `tfr_array_morlet`?

  • MNE version: e.g. 1.6.1
  • operating system: Windows 11
import numpy as np
from mne.time_frequency import tfr_array_morlet

fs = 2048
n_time = np.arange(0, 5, 1 / fs)  # time vector in seconds
source_1 = np.sin(3 * 2 * np.pi * n_time + 2) + 8   # 3 Hz
source_2 = np.sin(2 * 2 * np.pi * n_time + 10) + 9  # 2 Hz

ob_1 = source_1 + source_2           # channel 1
ob_2 = source_1 * 1 / 2 + source_2   # channel 2
original_signal = np.array([ob_1, ob_2])[np.newaxis, :]  # (n_epochs, n_channels, n_times)

ob_spectra = tfr_array_morlet(original_signal, sfreq=fs, freqs=np.arange(1, 5),
                              n_cycles=3, output='complex', n_jobs=-1)

How can I use `ob_spectra` to reconstruct `original_signal` at the same scale?
Thanks!

Hi,

I don’t think you can do it in MNE (though I may be wrong about this).

You could take a look at PyWavelets' inverse discrete wavelet transform, which may achieve what you want:
https://pywavelets.readthedocs.io/en/latest/ref/idwt-inverse-discrete-wavelet-transform.html
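
For example, a single-level DWT followed by its inverse reconstructs a signal up to numerical precision. This is only a minimal sketch (it assumes PyWavelets is installed; the 'db4' wavelet and the single decomposition level are arbitrary choices for illustration):

import numpy as np
import pywt

fs = 2048
t = np.arange(0, 5, 1 / fs)
signal = np.sin(3 * 2 * np.pi * t + 2) + np.sin(2 * 2 * np.pi * t + 10)

# Decompose into approximation (cA) and detail (cD) coefficients ...
cA, cD = pywt.dwt(signal, 'db4')
# ... and invert the transform.
reconstructed = pywt.idwt(cA, cD, 'db4')

print(np.allclose(signal, reconstructed[:signal.size]))  # True

Note that `idwt` inverts PyWavelets' own `dwt`, not the Morlet TFR from MNE, so you would run the forward transform with PyWavelets as well rather than feeding it `ob_spectra`.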

Cheers,

Thanks for the swift reply! I’ll try it out!
