I am working on anomaly detection. I can do it with an autoencoder. Now I am combining an autoencoder with an MLP to compare the results, and for that I created the following code:
# assumed imports (Keras 2.x)
from keras.layers import Input, Dense, Dropout, concatenate
from keras.models import Model
from keras.utils import plot_model
from keras.callbacks import TensorBoard
import time

encoding_dim = 58
input_dim = xtrain.shape[1]  # the input layer must have as many neurons as there are columns, which is 116
############################
#Define the DAE architecture
############################
inputArray = Input(shape=(input_dim,))  # define the input tensor: <tf.Tensor 'input_1:0' shape=(?, 29) dtype=float32>
encoded = Dense(units=encoding_dim, activation="tanh")(inputArray)  # units: number of neurons in this layer
encoded = Dense(units=29, activation="tanh")(encoded)
encoded = Dense(units=15, activation="tanh")(encoded)
encoded = Dense(units=10, activation="tanh")(encoded)
encoded = Dense(units=3, activation="tanh")(encoded)
encoded = Dense(units=10, activation="tanh")(encoded)
decoded = Dense(units=15, activation="tanh")(encoded)
decoded = Dense(units=29, activation="tanh")(decoded)
decoded = Dense(units=encoding_dim, activation="tanh")(decoded)
decoded = Dense(units=input_dim, activation="softmax", name='decoded')(decoded)  # softmax returns a vector of probabilities over the classes
############################
#Define the MLP architecture
############################
output_mlp = 70
first_input = Input(shape=(input_dim, ))
mlp = Dense(40)(first_input)
mlp = Dense(80, input_dim=60, activation='relu')(mlp)
dropout_mlp = Dropout(0.1)(mlp)
mlp = Dense(70, input_dim=80, activation='relu')(dropout_mlp)
dropout_mlp = Dropout(0.1)(mlp)
mlp = Dense(30, input_dim=output_mlp, activation='relu', name='mlp')(dropout_mlp)
#dropout_mlp =Dropout(0.1)(mlp)
#mlp =Dense(10, input_dim=30, activation='relu')(dropout_mlp)
#dropout_mlp =Dropout(0.1)(mlp)
############################
#Define the concatenate layer
############################
merge_layer = concatenate([mlp, decoded])
############################
#Define the rest of layers
############################
third_layer = Dense(input_dim + output_mlp, input_dim=input_dim + output_mlp, activation='relu')(merge_layer)
dropout_mlp = Dropout(0.1)(third_layer)
third_layer = Dense(40, input_dim=70, activation='relu')(dropout_mlp)
dropout_mlp = Dropout(0.1)(third_layer)
third_layer = Dense(5, input_dim=40, activation='relu')(dropout_mlp)
third_layer = Dense(1, activation='sigmoid')(third_layer)
############################
#Compile and plot the model
############################
autoecoder = Model(inputs=[first_input, inputArray], outputs=third_layer)
autoecoder.compile(optimizer='adam',
                   loss='categorical_crossentropy',
                   metrics=['accuracy'])
plot_model(autoecoder, to_file='demo.png', show_shapes=True)
To train the model I use the following code:
#hyperparametrs :
batchsize=100
epoch=10
start_time = time.time()
autoecoder.fit([xtrain,xtrain],xtrain,
batch_size=batchsize,
epochs=epoch,
verbose=1,
shuffle=True,
validation_data=([xtest,xtest],xtest),
callbacks=[TensorBoard(log_dir="../logs/autoencoderHoussem")])
But I get this error:
ValueError: Error when checking target: expected dense_35 to have shape (1,) but got array with shape (116,)
Can anyone help?
In this line (which defines the last layer of your model):
third_layer = Dense(1, activation='sigmoid')(third_layer)
you tell the model that it should output a single value. But here:
autoecoder.fit(x = [xtrain,xtrain], y = xtrain,
...,
)
you give your model an array of 116 values (the dimensionality of xtrain) as the expected output. That raises the ValueError because, with mismatched sizes, the loss cannot be computed.
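If the final one-unit sigmoid layer is meant to predict an anomaly label, the target passed to fit must have one column per sample and the loss must match the binary output. Here is a minimal sketch of that fix, assuming hypothetical label arrays ytrain/ytest (0/1 anomaly labels aligned with xtrain/xtest) that do not appear in your code:

autoecoder.compile(optimizer='adam',
                   loss='binary_crossentropy',  # matches the single sigmoid output
                   metrics=['accuracy'])
autoecoder.fit([xtrain, xtrain], ytrain,        # target shape (n_samples, 1) instead of (n_samples, 116)
               batch_size=batchsize,
               epochs=epoch,
               validation_data=([xtest, xtest], ytest))

If instead you want the network to reconstruct its input (classic autoencoder training), the model would need to expose the decoded tensor as an output, e.g. Model(inputs=[first_input, inputArray], outputs=[third_layer, decoded]), with one target (and one loss) per output.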