Thank you all for your further advice. It has taken me a while, but I think I now understand your comments and what my options are.
I've been stuck for a while thinking that the way to capture EEG data from the NeuroPrax system is to examine the NeuroPrax data protocol and, based on this, somehow tell MNE-realtime what data format to expect, since this approach had worked for my low-level Python "socket" code. But then I realised that MNE-realtime expects to receive data in the LSL format (or in one of several other specific formats), which means not only writing code to receive data in the NeuroPrax format, but also resending that data in the LSL (or other) format so that MNE-realtime can receive it in a format it already understands. Instead of "receive → process", I need "receive → process → resend → receive".
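To make that concrete for myself, here is roughly how I picture the "receive → process → resend" half of the pipeline in Python using pylsl. Everything NeuroPrax-specific here is a placeholder (the host, port, channel count, sampling rate and the parse_block() function are all made up, and NeuroPrax2LSL already does the real version of this), so please treat it as a sketch of the idea rather than working code against the actual protocol:

```python
import socket

import numpy as np
from pylsl import StreamInfo, StreamOutlet

# Placeholder values -- the real ones come from the NeuroPrax Data Server protocol.
NEUROPRAX_HOST = "127.0.0.1"   # hypothetical
NEUROPRAX_PORT = 12345         # hypothetical
N_CHANNELS = 32                # hypothetical
SFREQ = 500.0                  # hypothetical

def parse_block(raw_bytes):
    """Hypothetical stand-in for decoding one NeuroPrax data block into a
    (n_samples, n_channels) array. Real code must handle the protocol's
    headers and block framing properly."""
    data = np.frombuffer(raw_bytes, dtype=np.float32)
    return data.reshape(-1, N_CHANNELS)

# "receive" side: a plain TCP connection to the NeuroPrax Data Server
sock = socket.create_connection((NEUROPRAX_HOST, NEUROPRAX_PORT))

# "resend" side: an LSL outlet that MNE-realtime's LSLClient can then find
info = StreamInfo(name="NeuroPrax", type="EEG", channel_count=N_CHANNELS,
                  nominal_srate=SFREQ, channel_format="float32",
                  source_id="neuroprax_eeg")
outlet = StreamOutlet(info)

while True:
    raw_bytes = sock.recv(4096)
    if not raw_bytes:
        break
    chunk = parse_block(raw_bytes)     # process
    outlet.push_chunk(chunk.tolist())  # resend as an LSL chunk
```

The final "receive" step is then just MNE-realtime's LSLClient connecting to that outlet, which is the part I already had working in principle.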
@rtgugg, what a great resource the NeuroPrax2LSL program is! I found your code easy to follow and can see how it matches up with the NeuroPrax protocol whose details I have (I think the name of one parameter may have changed, as the NeuroPrax documentation that I have calls the sampling frequency "smpFreq" instead of "fs"). I don't have the NeuroPrax hardware available yet to test with, but I think the NeuroPrax2LSL program might work for me with almost no changes. I can see that using NeuroPrax2LSL is consistent with the advice given by @mainakjas that "you'd still have to write your own code to convert the NeuroConn data into LSL format". As @teonbrooks has said, I'd be using a "NeuroPrax2LSL and LSLClient combo".
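For the parameter name difference, I imagine it is just a matter of accepting whichever key turns up when the header is parsed. Something like this hypothetical helper is what I have in mind (assuming the header ends up in a Python dict, which may not match how NeuroPrax2LSL actually stores it):

```python
def get_sfreq(header: dict) -> float:
    """Hypothetical helper: my NeuroPrax documentation calls the sampling
    frequency 'smpFreq', while the code appears to expect 'fs'."""
    for key in ("fs", "smpFreq"):
        if key in header:
            return float(header[key])
    raise KeyError("sampling frequency not found in NeuroPrax header")
```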
One thing I haven't fully understood yet is how to run two programs at once: NeuroPrax2LSL for capturing and resending the EEG data, and the program I've written to run my game, which will use MNE-realtime to capture and process the resent EEG data while also controlling the game itself. When I was experimenting with "socket" code, I found that I could run a server and a client at the same time from separate command lines. I suppose I could take a similar approach with the NeuroPrax2LSL program and my game program, either with two terminals or by having the game start the bridge itself, as in the sketch below. These two programs are a lot more demanding on processor resources than my server and client programs were, but NeuroPrax2LSL does use a separate thread for most of its work.
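This is the "game starts the bridge" variant I have in mind, in case it helps anyone else reading. The script name and the run_game() stub are placeholders, so it is only a sketch of the idea:

```python
import subprocess
import sys
import time

def run_game():
    """Placeholder for the game + MNE-realtime loop."""
    time.sleep(10.0)

# Start the NeuroPrax -> LSL bridge as a separate process.
# "neuroprax2lsl.py" is a placeholder for however the bridge is actually launched.
bridge = subprocess.Popen([sys.executable, "neuroprax2lsl.py"])

try:
    time.sleep(2.0)  # give the bridge a moment to open its LSL outlet
    run_game()
finally:
    bridge.terminate()
    bridge.wait()
```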
The game program alone is going to be pretty demanding on processor resources because it has to both capture and process the EEG data and keep the biofeedback game running. The biofeedback game element is a VR experience, so this will require a lot of resources. Do you think I'd need to use threading to make this work?
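The shape I have in mind for that is a background thread that keeps pulling epochs from MNE-realtime's LSLClient and hands them to the game loop through a queue, so the VR rendering never blocks on the EEG capture. The stream name, sampling rate and frame rate below are guesses, and whether LSLClient can build the measurement info from the LSL stream on its own is something I still need to verify, so again this is only a sketch:

```python
import queue
import threading
import time

from mne_realtime import LSLClient

HOST = "neuroprax_eeg"   # assumption: the source_id of the LSL stream I connect to
N_SAMPLES = 500          # assumption: one 1-second epoch at 500 Hz

eeg_queue = queue.Queue()
stop_flag = threading.Event()

def capture_eeg():
    """Background thread: keep pulling epochs from the LSL stream into a queue."""
    with LSLClient(host=HOST, wait_max=10.0) as client:
        while not stop_flag.is_set():
            epoch = client.get_data_as_epoch(n_samples=N_SAMPLES)
            eeg_queue.put(epoch)

capture_thread = threading.Thread(target=capture_eeg, daemon=True)
capture_thread.start()

# Main thread: the game/VR loop only ever does a non-blocking check of the queue.
for frame in range(90 * 60):      # placeholder: run ~60 s at 90 fps
    try:
        epoch = eeg_queue.get_nowait()
        # ... update the biofeedback signal from `epoch` here ...
    except queue.Empty:
        pass
    # ... render the next VR frame here ...
    time.sleep(1 / 90)

stop_flag.set()
```

Whether a thread is enough, or whether the capture should live in a separate process altogether, is exactly the kind of thing I'd value your opinion on.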
There has also been an interesting development. I've been in contact with NeuroConn and they have told me something that might simplify things further, if I understand them correctly. They've said that their "Data Server", whose data format is the one I've been examining and on which NeuroPrax2LSL is based, is not the only interface that they offer for capturing EEG data from the NeuroPrax system. An alternative, in their words, is "to use the amplifier only with a small box which sends out LSL-data". This would give "direct access to LSL". I'm waiting for further explanation, but it sounds as if the NeuroPrax system might be able to send a stream of EEG data that is already formatted for LSL. If so, this would mean that I could skip the NeuroPrax-to-LSL conversion step entirely. I think the disadvantage of this option is that there is a higher latency before the data are sent, but for our biofeedback application we don't need a low latency. Have you come across this NeuroPrax interface at all?
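If the box really does publish an LSL stream directly, then I imagine the only change on my side would be pointing LSLClient at whatever name or source_id the box advertises, and the conversion script goes away. I'd probably just check that the stream is visible on the network first, roughly like this (the "EEG" stream type is an assumption about how the box labels its stream):

```python
from pylsl import resolve_byprop

# Look for any EEG-typed LSL stream on the local network and print its metadata.
streams = resolve_byprop("type", "EEG", timeout=5.0)
for stream in streams:
    print(stream.name(), stream.source_id(), stream.nominal_srate())
```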
Sorry for the growing number of questions, but your advice has been so helpful in giving me an understanding of how to approach this project.