Hello fellow MNE users. My friend @Rekha has already posted a similar question on the forum, but I am creating a new thread so that it also helps anybody else who has been facing a similar problem.
Basically, we're new to machine learning and are interested in learning what we can do with the data we already have for typically developing children and children with cerebral palsy (CP).
In our experiment, we deliver passive pneumatic stimulation to three fingers of each subject (thumb D1, middle finger D3, little finger D5) in pseudorandom order (100 events per finger across one run). We then look at the ERP as well as the ERF, from 200 ms before stimulus onset to 500 ms after it, for each event type, in both typically developing children and children with hemiplegic cerebral palsy.
What we are now thinking of doing is to look at how accurately the brain encodes which finger has been stimulated (D1, D3, or D5) in typically developing children, and whether this decoding breaks down in children with cerebral palsy.
In the MVPA tutorial on the MNE site, we saw code for comparing the Auditory-left vs. Visual-left conditions. Since we are trying to differentiate between the same kind of stimulus (pneumatic) delivered to different fingers of the same hand, we were wondering whether MVPA is really applicable in our case. Does the auditory-left vs. visual-left example carry over to a three-class, within-modality problem like ours?
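To make the question concrete, here is a rough sketch of the kind of time-resolved, three-class decoding we have in mind. Everything here is simulated (the array shapes just mimic what `epochs.get_data()` would return, and the injected signal strength of 0.8 is an arbitrary choice), and the per-time-point loop is a stand-in for what `mne.decoding.SlidingEstimator` automates:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated epochs: 300 trials (100 per finger), 32 channels, 70 time samples
# (shape matches epochs.get_data(): n_epochs x n_channels x n_times)
n_trials, n_channels, n_times = 300, 32, 70
X = rng.standard_normal((n_trials, n_channels, n_times))
y = np.repeat([1, 3, 5], 100)  # finger labels: D1, D3, D5

# Inject a weak class-specific signal after "stimulus onset" (sample 20 on),
# one channel per finger -- purely illustrative, not our real effect size
for cls in (1, 3, 5):
    X[y == cls, cls, 20:] += 0.8

# Time-resolved decoding: fit one multiclass classifier per time point.
# LogisticRegression handles three classes natively, so the same recipe as
# the two-class Auditory-vs-Visual tutorial should apply here.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5, scoring="accuracy").mean()
    for t in range(0, n_times, 10)  # subsample time points for speed
])
print(scores.round(2))  # chance level is 1/3 for three fingers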
Can someone who has already worked on encoding and decoding guide us on which techniques we could use to make the most of the data we already have?