I am interested in computing the peak frequency within a band for each
source.
So my first question is: does this already exist somewhere?
If not, what would people suggest as a jumping-off point for developing it?
I was thinking of following apply_inverse in inverse.py until I get the
final inverse operator (K at line 753) and then looping through each row
(source) of K to compute the time course and the peak power and frequency. Any
other suggestions or downsides to this approach?
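Roughly what I have in mind, as a sketch (not actual MNE code; K, data, and
sfreq are placeholders for the kernel matrix, the sensor data it gets applied
to, and the sampling rate):

import numpy as np

def band_peak_per_source(K, data, sfreq, fmin, fmax):
    """Peak frequency (Hz) within [fmin, fmax] for each row (source) of K."""
    freqs = np.fft.fftfreq(data.shape[1], d=1.0 / sfreq)
    band = (freqs >= fmin) & (freqs <= fmax)
    peaks = np.empty(K.shape[0])
    for i, row in enumerate(K):          # one source at a time keeps memory modest
        tc = np.dot(row, data)           # source time course
        psd = np.abs(np.fft.fft(tc)) ** 2
        peaks[i] = freqs[band][np.argmax(psd[band])]
    return peaks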
You can always use the numpy.argmax and numpy.argsort functions to quickly
navigate through the peaks in the resulting arrays.
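For example, with placeholder arrays psd (power values) and freqs (the
matching frequency axis):

import numpy as np

psd = np.array([0.2, 1.5, 0.7, 3.1, 0.4])     # placeholder power values
freqs = np.array([6., 8., 10., 12., 14.])     # matching frequencies in Hz

peak_freq = freqs[np.argmax(psd)]             # frequency of the largest value
top_two = freqs[np.argsort(psd)[::-1][:2]]    # frequencies of the two largest values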
Maybe let's first see whether what is implemented so far gives you what
you're looking for.
I hope this helps + cheers,
Denis
Thanks,
Luke
I'm interested in the peak frequency within a band, not the power timecourse
of the band, so source_band_induced_power isn't what I want. I also want to
stick with MNE, as opposed to DICS or another beamformer, for localization,
since I will want to compare with MNE/dSPM power estimates.
The other complicating factor is that I'm using resting-state data (~5
minutes), so most of the inverse operator code in Python runs into memory
problems. Otherwise I could just do apply_inverse and then work on the
returned time courses in the stc.
Once I have a time course for each source, finding the peak frequency will
be pretty straightforward using numpy.fft and numpy.fft.fftfreq.
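Something along these lines, assuming I end up with an stc whose .data is
(n_sources, n_times) and whose sample period is .tstep (the 8-12 Hz band
below is just an example):

import numpy as np

sfreq = 1.0 / stc.tstep                          # sampling rate of the source time courses
freqs = np.fft.fftfreq(stc.data.shape[1], d=1.0 / sfreq)
band = (freqs >= 8.0) & (freqs <= 12.0)          # example band

psd = np.abs(np.fft.fft(stc.data, axis=1)) ** 2  # spectrum of every source at once
peak_freqs = freqs[band][np.argmax(psd[:, band], axis=1)]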
I think you want to use either compute_source_psd (for raw data) or
compute_source_psd_epochs (for epoched data). The functions are in
mne.minimum_norm.time_frequency. There is an example here:
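In rough outline, for raw data it would be something like the following (raw
and inverse_operator are assumed to be loaded already; lambda2, method, and
the frequency limits are placeholders, and the exact keyword names are worth
double-checking against the docs):

import numpy as np
from mne.minimum_norm import compute_source_psd

stc_psd = compute_source_psd(raw, inverse_operator, lambda2=1. / 9.,
                             method='MNE', fmin=4., fmax=40.)

# As far as I know the returned source estimate holds the spectrum, with
# frequencies along its "time" axis, so the band peak per source is:
peak_freqs = stc_psd.times[np.argmax(stc_psd.data, axis=1)]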
I did not understand what you were looking for when I first read your mail.
I'm glad the example Martin posted does what you were looking for.

> so most of the inverse operator code in Python runs into memory problems.
By the way, when working with resting-state data you can also generate epochs
using `mne.event.make_fixed_length_events` and compute source power on the
epochs using `mne.minimum_norm.compute_source_psd_epochs` with the option
`return_generator` set to True. This avoids loading all the data into memory:
instead of a list of source estimates you get a generator that you can unfold
in a simple loop, so that only one source estimate is processed at a given
iteration. Or you can pass the generator to functions that do the unpacking as
required.
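A rough sketch of that workflow (the 10 s segment length, the frequency band,
lambda2, and method are placeholders; raw and inverse_operator are assumed to
be loaded already):

import mne
from mne.minimum_norm import compute_source_psd_epochs

# Cut the continuous resting-state recording into fixed-length segments.
events = mne.event.make_fixed_length_events(raw, id=1, duration=10.)
epochs = mne.Epochs(raw, events, event_id=1, tmin=0., tmax=10.,
                    baseline=None, preload=False)

# With return_generator=True the source PSDs are produced lazily.
stcs = compute_source_psd_epochs(epochs, inverse_operator, lambda2=1. / 9.,
                                 method='MNE', fmin=4., fmax=40.,
                                 return_generator=True)

for stc in stcs:   # only one source estimate is held in memory per iteration
    pass           # ... accumulate or average the spectra here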
Is there a way to disable parallel jobs in the testing? nosetests hangs with
the following error:
Test cluster level permutations T-test ...
[Parallel(n_jobs=2)]: Done 1 out of 2 | elapsed: 0.1s remaining: 0.1s
[Parallel(n_jobs=2)]: Done 2 out of 2 | elapsed: 0.1s finished
[Parallel(n_jobs=2)]: Done 1 out of 2 | elapsed: 0.1s remaining: 0.1s
[Parallel(n_jobs=2)]: Done 2 out of 2 | elapsed: 0.1s finished

Process PoolWorker-401:
Traceback (most recent call last):
  File "/usr/lib64/python2.6/multiprocessing/process.py", line 232, in _bootstrap
    self.run()
  File "/usr/lib64/python2.6/multiprocessing/process.py", line 88, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python2.6/multiprocessing/pool.py", line 57, in worker
    task = get()
  File "/usr/lib64/python2.6/multiprocessing/queues.py", line 352, in get
    return recv()
TypeError: type 'partial' takes at least one argument

Process PoolWorker-402:
Traceback (most recent call last):
  File "/usr/lib64/python2.6/multiprocessing/process.py", line 232, in _bootstrap
    self.run()
  File "/usr/lib64/python2.6/multiprocessing/process.py", line 88, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python2.6/multiprocessing/pool.py", line 57, in worker
    task = get()
  File "/usr/lib64/python2.6/multiprocessing/queues.py", line 352, in get
    return recv()
TypeError: type 'partial' takes at least one argument