Hi, I am getting strange results with the following code for the problem posted here (https://www.kaggle.com/c/titanic) -
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.advanced_activations import PReLU, LeakyReLU
from keras.layers.recurrent import SimpleRNN, SimpleDeepRNN
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM, GRU
import pandas as pd
import numpy as np
from sklearn import preprocessing
np.random.seed(1919)
### Constants ###
data_folder = "/home/saj1919/Public/Data_Science_Mining_Study/submissions/titanic/data/"
out_folder = "/home/saj1919/Public/Data_Science_Mining_Study/submissions/titanic/output/"
batch_size = 4
nb_epoch = 10
### load train and test ###
train = pd.read_csv(data_folder+'train.csv', index_col=0)
test = pd.read_csv(data_folder+'test.csv', index_col=0)
print "Data Read complete"
Y = train.Survived
train.drop('Survived', axis=1, inplace=True)
columns = train.columns
test_ind = test.index
train['Age'] = train['Age'].fillna(train['Age'].mean())
test['Age'] = test['Age'].fillna(test['Age'].mean())
train['Fare'] = train['Fare'].fillna(train['Fare'].mean())
test['Fare'] = test['Fare'].fillna(test['Fare'].mean())
category_index = [0,1,2,4,5,6,8,9]
for i in category_index:
    print str(i)+" : "+columns[i]
    train[columns[i]] = train[columns[i]].fillna('missing')
    test[columns[i]] = test[columns[i]].fillna('missing')
train = np.array(train)
test = np.array(test)
### label encode the categorical variables ###
for i in category_index:
    print str(i)+" : "+str(columns[i])
    lbl = preprocessing.LabelEncoder()
    lbl.fit(list(train[:,i]) + list(test[:,i]))
    train[:,i] = lbl.transform(train[:,i])
    test[:,i] = lbl.transform(test[:,i])
### making data as numpy float ###
train = train.astype(np.float32)
test = test.astype(np.float32)
#Y = np.array(Y).astype(np.int32)
model = Sequential()
model.add(Dense(len(columns), 512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(512, 1))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer="adam")
model.fit(train, Y, nb_epoch=nb_epoch, batch_size=batch_size, validation_split=0.20)
preds = model.predict(test,batch_size=batch_size)
pred_arr = []
for pred in preds:
    pred_arr.append(pred[0])
### Output Results ###
preds = pd.DataFrame({"PassengerId": test_ind, "Survived": pred_arr})
preds = preds.set_index('PassengerId')
preds.to_csv(out_folder+'test.csv')
I get the following output:
Train on 712 samples, validate on 179 samples
Epoch 0
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 1
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 2
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 3
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 4
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 5
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 6
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 7
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 8
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
Epoch 9
712/712 [==============================] - 0s - loss: -0.0000 - val_loss: -0.0000
I am trying to create a simple 3-layer network with completely basic code. I have used Keras on Kaggle for this kind of classification problem before, but this time I am getting this output.
Is it overfitting because there is so little data? What am I missing? Can anyone help?
Old post, but answering anyway in case someone else tries the Titanic competition with Keras.
Your network probably has too many parameters and too little regularization (e.g. dropout).
Call model.summary() just before model.compile; it will show how many parameters your network has. With the 10 feature columns left after dropping Survived, the first Dense layer alone has 10 × 512 = 5,120 weights plus 512 biases, and the output layer adds another 513 parameters. That is a lot for 712 training examples.
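As a quick check, the same architecture can be rebuilt and inspected in a few lines. This is only a sketch: it uses the newer Keras layer signature Dense(units, input_dim=...) together with model.summary(), whereas the question's Keras 0.x API wrote Dense(input_dim, output_dim), and it assumes the 10 feature columns of the Titanic training frame.

from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout

# Same shape as the network in the question: 10 input features -> 512 hidden units -> 1 output.
model = Sequential()
model.add(Dense(512, input_dim=10))   # 10 * 512 weights + 512 biases = 5,632 parameters
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))                   # 512 weights + 1 bias = 513 parameters
model.add(Activation('softmax'))      # softmax over a single unit always outputs 1.0

# Prints every layer's output shape and parameter count (about 6,145 trainable parameters here).
model.summary()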
You could also use a sigmoid activation on the last layer and the binary_crossentropy loss, since you only have two output classes.
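Put together, the fix could look something like the sketch below. It again uses the newer Keras spelling (activation= on the layer, epochs= instead of nb_epoch=), and train, Y and test stand for the preprocessed arrays already built in the question's script.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(512, input_dim=10, activation='relu'))
model.add(Dropout(0.5))
# A single sigmoid unit outputs P(Survived = 1) directly; with softmax it would always be 1.0.
model.add(Dense(1, activation='sigmoid'))

# binary_crossentropy is the matching loss for a single sigmoid output on a two-class problem.
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(train, np.asarray(Y), epochs=10, batch_size=4, validation_split=0.20)

# predict() returns probabilities; threshold at 0.5 to get the 0/1 Survived column for submission.
preds = (model.predict(test, batch_size=4) > 0.5).astype(int).ravel()

With this change the training loss should start somewhere around ln 2 ≈ 0.69 for an untrained sigmoid and then decrease as the model learns, instead of sitting at -0.0000.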