Deep Neural Network Accuracy Doesn't Change

  • MNE-Python version: 0.21.0
  • operating system: Windows 10

I am trying to train a deep neural network with an LSTM layer, but I always get the same accuracy. I am using EEG data from a fif file with up to 9 motor tasks and 60 channels. When I first ran it with data from 60 patients, 9 motor tasks, and 60 channels, the accuracy peaked at 0.4828 and never improved. I tried reducing the motor tasks to 5 and then 3, reducing the channels from 60 to 9, using data from just one patient, and filtering the data by frequency, but in every case I got the same accuracy. The model reaches that value very quickly and doesn’t improve even after many epochs (the most I tried was 5000). Everything I changed was meant to move the accuracy, but nothing did.

I think I am feeding the wrong input to the model. Each event lasts 4 seconds, and a single run is 2 minutes long (with multiple tasks). Can anybody help me with this?
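For reference, the number of time steps the LSTM sees per epoch follows from the window length and the sampling rate. A quick sketch (assuming a hypothetical 160 Hz sampling rate; the real value is in `raw.info['sfreq']`, and note that MNE's `Epochs` defaults to `tmin=-0.2` s):

```python
# Time steps per epoch fed to the LSTM: (tmax - tmin) * sfreq + 1
sfreq = 160.0           # hypothetical; the real value is raw.info['sfreq']
tmin, tmax = -0.2, 4.0  # MNE Epochs default tmin is -0.2 s; tmax=4 as below
n_times = int(round((tmax - tmin) * sfreq)) + 1
print(n_times)  # 673
```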

from mne import Epochs, pick_types, events_from_annotations
from tensorflow.keras.utils import to_categorical

events, _ = events_from_annotations(raw)

picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
                   exclude='bads')

epoch = Epochs(raw, events, tmax=4, proj=True, picks=picks, preload=True)
epoch.filter(7.5, 12)  # keep roughly the mu / low-beta band

X = epoch.get_data()        # shape (n_epochs, n_channels, n_times)
trX = X.transpose(0, 2, 1)  # (n_epochs, n_times, n_channels) for the LSTM
y = epoch.events[:, 2]      # integer event codes
classLabels = to_categorical(y - 1)
n_outputs = classLabels.shape[1]
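One thing worth double-checking here: `to_categorical(y - 1)` allocates one column per integer up to `max(y - 1)`, so if the event codes from `events_from_annotations` are not contiguous integers starting at 1, the output layer gains dead units. A numpy-only illustration of that behaviour (the codes here are made up):

```python
import numpy as np

# Made-up, non-contiguous event codes; the real ones come from the annotations
y = np.array([1, 2, 7, 2])

# to_categorical(y - 1) would allocate max(y - 1) + 1 columns by default
n_cols = int((y - 1).max()) + 1
n_classes = len(np.unique(y))
print(n_cols, n_classes)  # 7 columns for only 3 actual classes
```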

from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# from sklearn.utils.class_weight import compute_class_weight

sgd = keras.optimizers.SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
# classWeight = compute_class_weight('balanced', numpy.unique(y), y)

model = Sequential()
model.add(LSTM(10, input_shape=(trX.shape[1], trX.shape[2])))
model.add(Dropout(0.25))
model.add(Dense(10, activation='relu'))
model.add(Dense(n_outputs, activation='sigmoid'))
model.compile(loss='categorical_crossentropy', optimizer=sgd,
              metrics=['accuracy'])
model.summary()

# fit the network
batch_size = 64
epochs = 1000
verbose = 1

fitted = model.fit(trX, classLabels, epochs=epochs, batch_size=batch_size,
                   verbose=verbose)

This is what I got when I trained the model:

Epoch 1/1000
3/3 [==============================] - 0s 113ms/step - loss: 1.6093 - accuracy: 0.3966
Epoch 2/1000
3/3 [==============================] - 0s 112ms/step - loss: 1.6087 - accuracy: 0.4828
Epoch 3/1000
3/3 [==============================] - 0s 112ms/step - loss: 1.6077 - accuracy: 0.4828
Epoch 4/1000
3/3 [==============================] - 0s 108ms/step - loss: 1.6064 - accuracy: 0.4828
Epoch 5/1000
3/3 [==============================] - 0s 107ms/step - loss: 1.6048 - accuracy: 0.4828
Epoch 6/1000
3/3 [==============================] - 0s 104ms/step - loss: 1.6030 - accuracy: 0.4828
Epoch 7/1000
3/3 [==============================] - 0s 97ms/step - loss: 1.6012 - accuracy: 0.4828
Epoch 8/1000
3/3 [==============================] - 0s 95ms/step - loss: 1.5994 - accuracy: 0.4828
Epoch 9/1000
3/3 [==============================] - 0s 101ms/step - loss: 1.5979 - accuracy: 0.4828
Epoch 10/1000
3/3 [==============================] - 0s 98ms/step - loss: 1.5959 - accuracy: 0.4828

Hey @Mario, unfortunately I cannot replicate your example because:

  1. some code is missing
  2. I don’t have the data

I suggest you share a complete minimal working example and use openly available data, so that others can try to help you debug the issue. From just looking at it, I can’t tell what the problem is.
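One thing you could check, though: a training accuracy that freezes at one value is often just the majority-class frequency, i.e. the network predicts a single class for everything. A quick sketch of that check (with dummy labels here, since I don’t have your `y`):

```python
import numpy as np

# Dummy integer labels standing in for epoch.events[:, 2]
y = np.array([1, 1, 1, 2, 2, 3, 3, 3, 3, 3])

# Accuracy obtained by always predicting the most frequent class
majority_acc = np.bincount(y).max() / len(y)
print(majority_acc)  # 0.5
```

If your 0.4828 matches that number for your labels, the class weights you commented out would be worth trying.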

If you solved the problem in the meantime, please reply about what was wrong and mark your reply as the solution to this thread. Thanks!

Hi @sappelhoff,

Thank you for your reply. I didn’t find a solution for that, but what I did instead was use open-source code that combines a CNN to extract spatial information with an LSTM to extract temporal information. I have attached the repository below.