fast SPoC for continuous target decoding

Hello everyone,

I am extensively using the SPoC algorithm in my research to do decoding across many time windows and many participants. It is unfortunately pretty slow and requires the use of a cluster, which creates its own overheads. Since SPoC is an algorithm that is supposed to be useful for BCI, I was thinking it would be good to have a faster version of it.

I am currently working on a faster version of the SPoC algorithm. I have already written a parallel version, and I am now trying to make a GPU version of what is already in MNE (the main difficulty is moving the regularized covariance computations onto the GPU so that many of them can be computed at once). I already have a first version, but I don’t have enough time to properly debug it, so I am looking for other people interested in joining this project.
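To illustrate the kind of batching I mean, here is a minimal sketch of computing many regularized covariance matrices at once with a single `einsum` instead of a per-epoch Python loop. This is my own illustrative code, not the MNE implementation; the shrinkage here is a simple diagonal shrinkage toward the mean variance, and the idea is that swapping `numpy` for `cupy` would move the whole batch onto the GPU unchanged.

```python
import numpy as np

def batched_shrunk_cov(X, shrinkage=0.1):
    """Regularized covariances for a whole batch of epochs at once.

    X : array of shape (n_epochs, n_channels, n_times)
    Returns an array of shape (n_epochs, n_channels, n_channels).

    Hypothetical sketch: simple diagonal shrinkage
    (1 - a) * C + a * mu * I, with mu the mean channel variance.
    Replacing numpy with cupy would run the same code on GPU.
    """
    n_epochs, n_channels, n_times = X.shape
    X = X - X.mean(axis=2, keepdims=True)            # demean each epoch
    # C[e] = X[e] @ X[e].T / (n_times - 1), for all epochs in one call
    C = np.einsum('ect,edt->ecd', X, X) / (n_times - 1)
    mu = np.trace(C, axis1=1, axis2=2) / n_channels  # mean variance per epoch
    eye = np.eye(n_channels)
    return (1 - shrinkage) * C + shrinkage * mu[:, None, None] * eye
```

The point is that the per-epoch loop disappears entirely, which is what makes the computation amenable to GPU execution.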

Is anyone willing to join? I was advised to open an issue on the MNE GitHub, but I thought I’d ask here first.

P.S. Some people suggested that I look at pyRiemann, and I did, but I haven’t found any regression algorithms there.

Hello,

You mention that pyRiemann didn’t have the algorithms you were looking for, but SPoC is implemented there: pyriemann.spatialfilters.SPoC — pyRiemann 0.9 documentation. Or do you mean that a GPU-compatible version of the algorithm is not supported?

I’m also a bit surprised that you’re finding the algorithm so slow. Granted, covariance computation can be costly, but once the filters are fit to the training data, applying them to each time window and transforming it, as you would in a typical BCI paradigm, is extremely quick.

What settings are you using to compute the covariance matrices?
E.g., MNE’s auto method for regularisation is thorough, but slow. Could you get similar decoding performance with something like the ledoit_wolf method for automatic shrinkage?
You could also try a rank subspace projection to speed up covariance computation using the rank parameter. That essentially means there are fewer ‘channels’ that covariance needs to be computed between (assuming decoding performance is still acceptable).
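As a rough sketch of both suggestions (assuming scikit-learn is available; the projection step here is my own illustration of the rank-reduction idea, not MNE's internal implementation): Ledoit–Wolf shrinkage is a closed-form estimate, so it avoids the cross-validated search that makes the `auto` method slow, and projecting onto a low-rank subspace first means covariance is only estimated between a few virtual channels.

```python
import numpy as np
from sklearn.covariance import ledoit_wolf

# One epoch: (n_channels, n_times). ledoit_wolf expects
# (n_samples, n_features), so we pass the transpose.
rng = np.random.default_rng(42)
epoch = rng.standard_normal((32, 500))

# Closed-form automatic shrinkage: no cross-validation grid needed.
C_lw, alpha = ledoit_wolf(epoch.T)

# Rank-subspace idea (illustrative): project onto the top-k principal
# directions first, so only a k-by-k covariance is estimated.
k = 8
U, s, _ = np.linalg.svd(epoch, full_matrices=False)
epoch_low = U[:, :k].T @ epoch          # (k, n_times) virtual channels
C_small, _ = ledoit_wolf(epoch_low.T)   # (k, k) instead of (32, 32)
```

Whether decoding performance survives the projection is of course something you would need to check on your own data.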

I’m talking about GPU version of SPoC, yes.

When I’m talking about slowness, I’m talking about fitting, not inference, and not fitting just one SPoC model but fitting thousands of them at the same time (I’m doing a time-generalisation decoding study, not conventional BCI stuff).
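To make the scale of the problem concrete, here is a sketch of the core SPoC fit for a single time window, written as the generalized eigenvalue problem from Dähne et al. (2014). The function names and details are my own illustration, not MNE's code; the point is that in a time-generalization study this fit runs once per (time window, participant) pair, so thousands of independent fits accumulate.

```python
import numpy as np
from scipy.linalg import eigh

def spoc_filters(covs, z, n_components=2):
    """Sketch of a SPoC fit for one time window.

    covs : (n_epochs, n_channels, n_channels) per-epoch covariances
    z    : (n_epochs,) continuous target variable

    Maximizes the covariance between component power and z by
    solving the generalized eigenproblem Cz w = lambda C w.
    """
    z = (z - z.mean()) / z.std()                     # standardize target
    C = covs.mean(axis=0)                            # average covariance
    Cz = (covs * z[:, None, None]).mean(axis=0)      # z-weighted covariance
    evals, evecs = eigh(Cz, C)                       # generalized eigh
    order = np.argsort(np.abs(evals))[::-1]          # strongest covariation
    return evecs[:, order[:n_components]]

# In a time-generalization study this runs once per time window and
# per participant: thousands of independent fits, each dominated by
# the covariance estimation and the eigendecomposition above.
```

Since the fits are independent, they are embarrassingly parallel, which is exactly why a batched GPU version is attractive.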

The biggest slowdown comes from the places where Python loops are used, which makes batch processing slow.