Can MVPA be applied to differentiate between the same kind of stimulus (tactile)?

@agramfort

Hello fellow MNE users, greetings! My friend @Rekha seems to have already posted a similar question on the forum, but I am creating a new thread so that it helps anybody who has been facing a similar problem.

Basically, we’re new to machine learning and are interested in learning what we can do with the data we already have for typically developing children and children with CP (cerebral palsy).

In our experiment, we provide passive pneumatic stimulation to three fingers of our subjects (thumb D1, middle finger D3, pinky finger D5) in pseudorandom order (100 events per finger across one run), and then look at the ERPs and ERFs (from 200 ms before stimulus onset until 500 ms after stimulus onset) for each event type, for both typically developing children and children with hemiplegic cerebral palsy.
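For context, our epoching looks roughly like the following sketch (the file name and event codes are placeholders, not our actual values):

```python
import mne

# Placeholder file name and event codes for the three stimulated fingers.
raw = mne.io.read_raw_fif("sub-01_task-pneumatic_raw.fif", preload=True)
events = mne.find_events(raw)
event_id = {"D1": 1, "D3": 2, "D5": 3}  # thumb, middle finger, pinky finger

# Epochs from 200 ms before to 500 ms after stimulus onset,
# baseline-corrected using the pre-stimulus interval.
epochs = mne.Epochs(
    raw, events, event_id=event_id, tmin=-0.2, tmax=0.5,
    baseline=(None, 0), preload=True,
)
```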

Now what we are thinking of doing is to look at how well the brain might encode which finger has been stimulated (whether D1, D3, or D5), that is, how accurately the stimulated finger can be decoded from the recorded responses, in typically developing children, and how this might not be the case for children with cerebral palsy.

In the MVPA tutorial on the MNE site, we saw the code for comparing the auditory-left vs. visual-left conditions. As we are trying to differentiate between the same kind of stimulus (pneumatic) but across different fingers of the same hand, we were wondering whether MVPA would really be applicable in our case. Does the auditory-left vs. visual-left model apply to our case as well?

Could someone who has already worked on encoding and decoding guide us on what techniques we could follow to make the most of our already available data?

Hello @Bamford and welcome to the forum!

In principle, you can try to decode the differences between any two experimental conditions.

In practice, I believe it could be very difficult to find distinct neural signatures for the individual fingers: even when they are not immediately neighboring fingers, as in your case, the somatosensory representations of all fingers lie quite close to one another in the cortex. So I wouldn’t really expect to find different cortical activation patterns given the spatial resolution of MEG and EEG – but of course this is just my hypothesis, and I didn’t check this.

For later cortical responses, potentially involving cognitive effects, things might look different; but again I’m not sure.

Your best bet is probably to just try it out and see what you get.
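To get you started, here is a minimal sketch of how the tutorial’s sliding-estimator approach could be adapted to a three-class problem; it assumes an `epochs` object with the D1/D3/D5 event codes like the one sketched in your post. Note that with three balanced classes, chance-level accuracy is about 33 %, not 50 %:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from mne.decoding import SlidingEstimator, cross_val_multiscore

# X: (n_epochs, n_channels, n_times) data matrix; y: finger labels.
X = epochs.get_data()
y = epochs.events[:, 2]

# Multiclass logistic regression, fitted independently at each time point.
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(solver="liblinear"),
)
time_decoder = SlidingEstimator(clf, scoring="accuracy", n_jobs=1)

# 5-fold cross-validated decoding accuracy as a function of time.
scores = cross_val_multiscore(time_decoder, X, y, cv=5, n_jobs=1)
mean_scores = np.mean(scores, axis=0)
```

If you want to specifically probe the later responses I mentioned above, `mne.decoding.GeneralizingEstimator` works the same way and additionally shows whether a decoder trained at one time point transfers to other time points.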

Best wishes,
Richard