I want to start experimenting with neural networks, and I found the Keras syntax really simple. My setup: X_train is an array of shape (3516, 6) and y_train has shape (3516,).
X_train looks like this:
[[ 888.     900.5    855.     879.311   877.00266667  893.5008 ]
 [ 875.     878.5    840.     880.026   874.56933333  890.7948 ]
 [ 860.     870.     839.5    880.746   870.54333333  887.6428 ]
 ...]
It takes 6 financial inputs to predict one output. I know it won't be very accurate, but it should at least teach me something before I move on to RNNs. My problem is that the loss for every epoch shows as nan, accuracy shows 0%, and validation_accuracy shows 0%, as if the data isn't even passing through the model. I mean, even if the model is badly designed, that should still show up as a large loss, right? The model is below.
Anyway guys, I'm sure I'm doing something wrong here and would really appreciate your input. Many thanks, S
EDIT: full working code:
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense, core
from keras import optimizers

def load_data(keyword):
    df = pd.read_csv('%s_x.csv' % keyword)
    df2 = pd.read_csv('%s_y.csv' % keyword)
    df2 = df2['label']
    try:
        df.drop('Unnamed: 0', axis=1, inplace=True)
    except:
        print('wouldnt let drop unnamed column')
    X = df.as_matrix()
    y = df2.as_matrix()
    X_len = len(X)
    test_size = 0.2
    test_split = int(test_size * X_len)
    X_train = X[:-test_split]
    y_train = y[:-test_split]
    X_test = X[-test_split:]
    y_test = y[-test_split:]
    return X_train, X_test, y_train, y_test
def keras():
    model = Sequential([
        Dense(input_dim=3, output_dim=3),
        Dense(output_dim=60, activation='linear'),
        core.Dropout(p=0.1),
        Dense(60, activation='linear'),
        core.Dropout(p=0.1),
        Dense(1, activation='linear')
    ])
    return model
def training(epoch):
    # start the program off by loading some data into it
    X_train, X_test, y_train, y_test = load_data('admiral')
    y_train = y_train.reshape(len(y_train), 1)
    y_test = y_test.reshape(len(y_test), 1)
    model = keras()
    # optimizer will go into the compile function
    # RMSprop is apparently a pretty decent choice for recurrent neural networks,
    # although we will start it on a simple nn too
    rms = optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08)
    model.compile(optimizer=rms, loss='mean_squared_error', metrics=['accuracy'])
    model.fit(X_train, y_train, nb_epoch=epoch, batch_size=500, validation_split=0.01)
    score = model.evaluate(X_test, y_test, batch_size=50)
    print(score)

training(300)
The accuracy is really low because reporting accuracy doesn't make sense for a regression problem; accuracy is a metric better suited to classification.
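For a regression target it usually makes more sense to track an error metric instead of accuracy. A minimal sketch of a compile call along those lines (the layer sizes and input_dim=6 are only placeholders to match the 6 input columns, not taken from the code above):

from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

model = Sequential([
    Dense(60, input_dim=6, activation='linear'),
    Dense(1, activation='linear')
])
rms = optimizers.RMSprop(lr=0.001)
# report mean absolute error instead of accuracy for a regression problem
model.compile(optimizer=rms, loss='mean_squared_error',
              metrics=['mean_absolute_error'])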
The data is getting through the model; the reason the numbers look like this is a NaN problem.
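One quick way to confirm whether the NaN is coming from the data itself rather than from training is to inspect the arrays right after loading. A sketch, assuming X_train/y_train are the numpy arrays returned by load_data above:

import numpy as np

X_train, X_test, y_train, y_test = load_data('admiral')
# any NaN or inf in the inputs or targets will propagate into the loss
print(np.isnan(X_train).any(), np.isnan(y_train).any())
print(np.isinf(X_train).any(), np.isinf(y_train).any())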