Modules or extensions based on MNE-Python for an EEG meta-workflow

Dear Community,

I'm working on implementing a module based on MNE-Python for advanced EEG meta-analyses. The project involves a large cohort of EEG data (150 coma patients), several protocols and analyses (e.g., ERPs, resting-state, frequency analysis, dwPLI, DMVA, etc.), multiple EEG acquisition systems, and both individual- and group-level statistics. I have already implemented everything as scripts, and I am now considering how best to package and share these tools.

Do you have any recommendations for existing MNE extensions that could support this kind of workflow?

For instance, I've come across projects like MNE-BIDS-Pipeline, Eelbrain, and PyPREP. Any advice or feedback from the community would be greatly appreciated!
Thanks in advance :slight_smile:


First off, thank you for being willing to take the time to package and share your workflow.

How to actually do it depends on what you are aiming for, so perhaps you could tell us a little more about what you want the "user experience" to be for these tools.

If you mainly want others to be able to understand and reproduce your analyses on this particular dataset of 150 coma patients, then you mostly need a clean, well-organized version of the scripts you have now. Once you have that, "packaging and sharing" them is, in my opinion, best done by creating a GitHub repository for them, along with a well-written README.md file that explains:

  • a short description of the data and what the analysis pipeline does, and of course a link to your paper
  • which packages are needed to run everything. It also helps to provide a requirements.txt file listing them, so the user can install everything in one command with pip install -r requirements.txt
  • where to find the data. Or, even better, supply a script that downloads all the data and puts it in the proper location so the analysis scripts will find it
  • how to start the analysis pipeline. For example python run_all.py, or: run these scripts in this order
  • in which folder to find the results produced by the analysis scripts
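For the run_all.py option, a minimal driver could look like the sketch below. The script names, their order, and the dry_run flag are hypothetical placeholders for illustration, not part of the actual pipeline:

```python
# Hypothetical run_all.py sketch: run the analysis scripts in a fixed order,
# stopping at the first failure. All script names below are made up.
import subprocess
import sys

SCRIPTS = [
    "00_download_data.py",  # fetch raw data into the expected location
    "01_preprocess.py",     # filtering, bad-channel handling, etc.
    "02_erp_analysis.py",   # per-subject evoked responses
    "03_group_stats.py",    # group-level statistics
]


def run_pipeline(scripts, dry_run=False):
    """Run each script with the current Python interpreter, in order.

    With dry_run=True, only report which scripts would run.
    """
    completed = []
    for script in scripts:
        if not dry_run:
            # check=True raises CalledProcessError if a script fails,
            # so the pipeline stops instead of silently continuing.
            subprocess.run([sys.executable, script], check=True)
        completed.append(script)
    return completed


if __name__ == "__main__":
    run_pipeline(SCRIPTS)
```

A single entry point like this keeps the README instructions to one line, and the ordered list doubles as documentation of the pipeline's stages.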

I'm sure the analysis scripts you wrote are already really good. However, should you be open to ideas for further improvement, I have thought a lot about how to structure EEG/MEG data analysis scripts to make the whole thing less daunting for others to understand. I have some writings and ramblings about this topic:


Thank you, Marijn, for sharing your expertise!

I really appreciate the elegant way you structure your pipeline. It's minimal, efficient, and clear, with great flexibility for running analyses on one or multiple subjects. It was great to learn about doit and how you manage filenames!

To answer your question: my primary goal is to refine the scripts so they can be shared with collaborators (working on the entire dataset and/or adding new analyses) and with clinicians (to enable them to analyze new patient data as independently as possible).
I'm working with clinicians from different cities, whom I'm gently introducing to Python, mostly remotely. So the "user experience" should be as simple as possible. :sweat_smile:
For now, I've been avoiding heavy reliance on the command line, instead providing step-specific scripts that call general functions. But I'm rethinking this approach now.
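As a minimal sketch of that step-specific-script pattern (all names and parameters here are hypothetical, not from the actual codebase), the general functions live in a shared module while each clinician-facing script stays a few lines long:

```python
# Sketch of the pattern: a general function shared across steps, called by
# a thin per-step script. Names and defaults are illustrative placeholders.


def preprocess_subject(subject_id, l_freq=1.0, h_freq=40.0):
    """General preprocessing step shared by all scripts (placeholder logic).

    In a real pipeline this would load the subject's raw EEG with
    MNE-Python, band-pass filter it, and save the result. Here we just
    return the resolved configuration to keep the sketch self-contained.
    """
    return {"subject": subject_id, "l_freq": l_freq, "h_freq": h_freq}


# A clinician-facing script then only needs to set the subject and run it:
if __name__ == "__main__":
    result = preprocess_subject("patient_001")
    print(result)
```

The advantage for non-programmer users is that the only editable part is the subject ID at the bottom, while all analysis parameters keep sensible defaults in one shared place.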

These developments are for now on a private GitHub repository, but a published partial example is available here.

Thanks again for your help! I'll let you know if I come across other relevant ideas to complement yours for organizing this "package".
Maybe I'll see you at the next CuttingEEG event :wink:
Best
