I'm trying to train a model to distinguish between two kinds of time signals: those with RTS (random telegraph signal) noise and those with only white noise.
I have a simple 1D CNN that works well (92% accuracy) on one training set but turns into a complete coin flip on another. To the eye the two sets look very similar; one was created from real signals, the other from simulated signals. The only obvious difference I can see is the mean magnitude. Is there a reason the model fails so reliably on the second set? Do I need to normalize the data somehow?
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import Conv1D, GlobalAveragePooling1D, MaxPooling1D, Flatten, LSTM
import numpy as np
# load the pre-split training and test data (one 1500-point trace per row)
x_test = np.load('C:/Users/Ben WORK ONLY/Desktop/GH repos/RTS ML detect beta/x_test.npy')
x_train = np.load('C:/Users/Ben WORK ONLY/Desktop/GH repos/RTS ML detect beta/x_train.npy')
y_test = np.load('C:/Users/Ben WORK ONLY/Desktop/GH repos/RTS ML detect beta/y_test.npy')
y_train = np.load('C:/Users/Ben WORK ONLY/Desktop/GH repos/RTS ML detect beta/y_train.npy')
# add a channel dimension so the inputs have shape (n_samples, 1500, 1)
X_train = np.expand_dims(x_train, axis=2)
X_test = np.expand_dims(x_test, axis=2)
# simple 1D CNN binary classifier
model = Sequential()
model.add(Conv1D(32, 12, activation='relu', input_shape=(1500, 1)))
model.add(MaxPooling1D(3))
model.add(Conv1D(64, 12, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Conv1D(128, 12, activation='relu'))
model.add(GlobalAveragePooling1D())
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=16, epochs=5)
score = model.evaluate(X_test, y_test, batch_size=16)
#model.save('C:/Users/Ben WORK ONLY/Desktop/GH repos/RTS ML detect beta/CNNlin_model.h5')
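For what it's worth, this is the kind of per-sample normalization I was thinking of adding before the expand_dims step. It's just a sketch (the helper name and the choice of z-scoring each trace are my own assumptions about what "normalize" would mean here), not something I've run yet:

# per-trace z-score normalization (sketch): remove the mean-magnitude
# difference between the real and simulated sets before training
def standardize(x):
    # x has shape (n_samples, 1500); each row becomes zero-mean, unit-variance
    mean = x.mean(axis=1, keepdims=True)
    std = x.std(axis=1, keepdims=True)
    return (x - mean) / (std + 1e-8)  # epsilon guards against constant traces

X_train = np.expand_dims(standardize(x_train), axis=2)
X_test = np.expand_dims(standardize(x_test), axis=2)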