GSoC Idea: Improving the decoding module

Hello everyone,

I am looking forward to participating in GSoC and I am interested in the idea
of improving the decoding module
<https://github.com/mne-tools/mne-python/wiki/GSOC-Ideas#3-improve-the-decoding-module>.
I have installed and set up the development environment and have been
trying to get familiar with the various modules. However, being quite new to
MEG/EEG, I'm looking for some pointers on where to start, as well as the
prerequisites for working on the decoding module.
Lastly, I apologize if I have been rude in any manner.

Thank you
Asish Panda

Hi Asish,

Thanks for your interest!

You can start with one of these easy PRs:
https://github.com/mne-tools/mne-python/issues/2874
https://github.com/mne-tools/mne-python/issues/2176
https://github.com/mne-tools/mne-python/issues/2189 (probably needs a bit
of discussion)

Once you're there, I can suggest some more fun things you could do to set up
a cleaner framework for the decoding module.

All the best,

Jean-Rémi

Hello Jean-Rémi,

Thank you very much for your response and the issues. I will get my hands
dirty right away! :)

Thank you
Asish Panda

Hi Jean,

I have been going through the idea list and I felt that a discussion is
needed before I start drafting the proposal. From what I have understood
so far, we have to:
1) Refactor the `decoding` objects GAT and EMS so that they work with
scikit-learn's cross-validation and grid search (illustrated just below).
They should also work with multiclass problems.
2) Simplify the user interface by calling `EpochsVectorizer` internally.
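
Just to illustrate what I mean by point 1, the goal as I understand it is
that something like the following should eventually work out of the box
(these calls are hypothetical until the refactor is done; a plain sklearn
classifier stands in here for a refactored GAT or EMS):

import numpy as np
from sklearn.model_selection import cross_val_score, GridSearchCV
from sklearn.svm import SVC

# X and y would normally come from epochs.get_data() reshaped to 2D and from
# the event codes; they are faked here to keep the example self-contained.
X = np.random.randn(40, 500)
y = np.array([0, 1] * 20)

# Once an estimator follows the sklearn API (fit/predict/score,
# get_params/set_params), both of these should just work:
estimator = SVC(kernel='linear')   # stand-in for a refactored GAT/EMS
scores = cross_val_score(estimator, X, y, cv=5)
grid = GridSearchCV(estimator, {'C': [0.1, 1.0, 10.0]}, cv=5).fit(X, y)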

Is that the main goal that should be achieved by the end of GSoC? Or is
there anything else that is expected?

"In summary, this project will involve a series of usability improvements
for the decoding module and extend its functionality."

I feel the above statement is quite vague for writing a detailed plan in
the proposal. Or perhaps the "improvements" can only be known while the
objectives (listed above) are being fulfilled?
Lastly, going a little off topic, could you please elaborate on how to
set up the cleaner framework for the decoding module that you mentioned
in your last message?

Thank you
Asish Panda

Hi Asish,

Let me reply inline.

Hi Jean,

I have been going through the idea list and I felt that a discussion is
needed before I start drafting the proposal. From what I have understood
so far, we have to:
1) Refactor the `decoding` objects GAT and EMS so that they work with
scikit-learn's cross-validation and grid search. They should also work with
multiclass problems.
2) Simplify the user interface by calling `EpochsVectorizer` internally.

Is that the main goal that should be achieved by the end of GSoC? Or is
there anything else that is expected?

This would be one important element of the GSoC project, but as you may have
noticed, the entire decoding module needs some love and care.

"In summary, this project will involve a series of usability improvements
for the decoding module and extend its functionality."

I feel the above statement is quite vague for writing a detailed plan in
the proposal. Or perhaps the "improvements" can only be known while the
objectives (listed above) are being fulfilled?
Lastly, going a little off topic, could you please elaborate on how to
set up the cleaner framework for the decoding module that you mentioned
in your last message?

For example, we currently have the problem that some parts of the MNE tools
are not nicely pluggable into scikit-learn Pipeline objects. All over the
place we have functions that return numpy arrays, while others return Epochs
or Raw objects. There is a lot of work to be done to unify the APIs. So far
the admittedly vague idea is to make it fun to use the decoding module to
combine different MNE processing functions into powerful scikit-learn
pipelines. Another issue is that we don't have any nice persistence mechanism
to store classifiers and outputs from our decoding objects. There is also
more to do on visualization, e.g. making it easy to visualize standard ML
diagnostics like learning curves. To give you the intuition: the idea is to
make the gap between sklearn and MNE significantly tighter. But once we
decide to go in this direction we will of course agree on a detailed
proposal.
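
To give a concrete flavor of what "pluggable" means here, a minimal sketch of
a transformer that sklearn's Pipeline can consume (the class name, shapes and
data are made up for illustration; this is not an existing MNE object):

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

class EpochsFlattener(BaseEstimator, TransformerMixin):
    """Flatten (n_epochs, n_channels, n_times) data to the 2D array sklearn expects."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        X = np.asarray(X)
        return X.reshape(len(X), -1)

# X would typically come from epochs.get_data(); faked here for the example.
X = np.random.randn(40, 10, 50)   # 40 epochs, 10 channels, 50 time samples
y = np.array([0, 1] * 20)
clf = make_pipeline(EpochsFlattener(), LogisticRegression())
clf.fit(X, y)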

I hope this is somewhat satisfying and stimulates your curiosity.

--Denis


Hi Asish,

As Denis said, the decoding module is one possible target. Just FYI, there
are other possibilities too: e.g. across-subject stats and visualization
aren't really well developed/documented.

Currently the decoding classes have been developed separately, by different
authors and with different architectures. IMO, one great goal would thus be
to:

*1.* (*hard*) homogenize the existing functions so that they all become
strictly compatible with sklearn (i.e. based on BaseEstimator, using fit,
transform, predict and score methods).

*2.* (*medium-hard*) develop transformer objects that would ultimately
allow users to pipe multiple processing steps; e.g. we typically aim at
getting:
make_pipeline(TimeFreq(), InverseTransform(), DataVectorizer(),
              LogisticRegression())
or
make_pipeline(Filter(10, 30), Covariances(method='shrunk'),
              Xdawn(n_components=4), TangentSpace(), SVM(kernel='linear'))

for which all the steps could typically be initialized with inst.info and
would take an X and a y to be fitted/predicted/scored.

*3.* (*easy*) Set up systematic I/O to store the estimators, the predictions
and the scores.

As a concrete example, to optimize memory and CPU, the GAT currently stores
the predictions (y_pred_) in the object, and the scoring is performed outside
the CV. This storing and scoring doesn't follow the sklearn API.
Consequently, one cannot use cross_val_score(GAT). Typically, refactoring
this kind of feature requires some deep thinking because, unlike sklearn,
several decoding objects are applied in a "mass multivariate" way: i.e. many
multivariate models are fitted on independent, partially common, or even
identical data. Optimizing memory and CPU is thus probably the main challenge
here.
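
To make "mass multivariate" concrete, here is a rough sketch of the pattern
(the function and names are made up for illustration, not the actual MNE
decoding API): one estimator is fitted independently per time sample, and
keeping all of those fitted models around is where the memory/CPU cost comes
from.

import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def fit_per_timepoint(X, y, base_estimator=None):
    """Fit one classifier per time sample of X, shaped (n_epochs, n_channels, n_times)."""
    if base_estimator is None:
        base_estimator = LogisticRegression()
    estimators = []
    for t in range(X.shape[-1]):
        est = clone(base_estimator)
        est.fit(X[..., t], y)      # each fit sees an (n_epochs, n_channels) slice
        estimators.append(est)     # storing every fitted model is the memory cost
    return estimators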

I would consequently start by tackling the easy/medium problems first (e.g.
I/O in all decoding classes, visualizing the fitted weights/patterns for each
decoding method), and then see how we can develop some transformers, such as
EpochsVectorizer, that would be common across decoding modules for formatting
the data.

Hope this helps,

JR


Hi guys,

PyMVPA might be a good place to look for inspiration and maybe integration: http://www.pymvpa.org/

They have a really nice workflow and API.

Best,
Phillip


Hello everyone,

Thank you for explaining the details to me. Based on that and the original
idea I have drafted an initial proposal. Please review it and let me know if
I am understanding your points correctly. You can check out the project
details section of the wiki page
<https://github.com/kaichogami/mne-python/wiki/GSoC-Proposal#project-detail>.

Thank you
Asish Panda

Thanks Asish.

It's good overall. I added some corrections.

Hope that helps,

JR

Looks good to me too.

At the same time it's ambitious, which is great! We should, however, see what
the most important goals are, so that the rest can be seen as nice-to-have
additions that won't determine your overall GSoC success.
One tiny remark: we should not use pickling for persistence, for several
reasons. In short: it's not made for long-lived persistence and will break.
We would rather look into something like saving the estimator attributes and
their constructor parameters into HDF5 files and re-instantiating the objects
based on this information.
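
Roughly along these lines (a minimal sketch only, assuming h5py; the exact
format and helper names are still to be decided and are not an existing MNE
API):

import json
import h5py
from sklearn.linear_model import LogisticRegression

def save_estimator(est, fname):
    """Store constructor parameters and fitted attributes instead of a pickle."""
    with h5py.File(fname, 'w') as f:
        f.attrs['params'] = json.dumps(est.get_params())   # constructor parameters
        f.create_dataset('coef_', data=est.coef_)          # fitted attributes
        f.create_dataset('intercept_', data=est.intercept_)
        f.create_dataset('classes_', data=est.classes_)

def load_estimator(fname):
    """Re-instantiate the estimator from the stored information."""
    with h5py.File(fname, 'r') as f:
        est = LogisticRegression(**json.loads(f.attrs['params']))
        est.coef_ = f['coef_'][:]
        est.intercept_ = f['intercept_'][:]
        est.classes_ = f['classes_'][:]
    return est
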
We might need a hangout or Skype call together to decide about the priorities
of all the elements you listed.

--Denis


I agree, we should have priorities sorted out. When would be a good time to
have a hangout? I am usually free after 10 pm, GMT+5:30.


Thank you for your help with the corrections, JR! :)



Hi,

I have been thinking about the priorities of the tasks, and I feel that the
visualization work can be done after GSoC. As the main aim is to first make
decoding more compatible with scikit-learn, we could probably also shift the
I/O task to later.
Let me know what you think. We can discuss this here, on Gitter, or over
Hangouts, whichever is more suitable for you.

Thank you
Asish Panda

Hi Asish,

We totally agree with you. Jean-Rémi and I have just set up a main project
and API proposal; see the Dropbox Paper invitation. Let's start a private
discussion over the next few days based on that draft.

Denis

Thank you for responding. I am afraid I didn't get any invitation. Could
you kindly re-send it or perhaps share a link here?

Asish Panda

Hi Asish,
Can you check your spam folder? We used the email address that appears here
on the mailing list when you write.

Denis