RNN with GRU in Keras



I want to implement a recurrent neural network with a GRU in Python using Keras. I'm having trouble running the code; I keep changing the variables, but it doesn't work. Do you have a solution to the problem?

inputs = 42          # number of input columns (features)
num_hidden = 50      # number of neurons in the layer
outputs = 1          # number of output columns
num_epochs = 50
batch_size = 1000
learning_rate = 0.05
#train       (125973, 42)  125973 Rows and 42 Features
#Labels  (125973,1) is True Results
model = tf.contrib.keras.models.Sequential()
fv=tf.contrib.keras.layers.GRU
model.add(fv(units=42, activation='tanh', input_shape= (1000,42),return_sequences=True))  #i want to send Batches to train

#model.add(tf.keras.layers.Dropout(0.15))  # Dropout overfitting
#model.add(fv((1,42),activation='tanh', return_sequences=True))
#model.add(Dropout(0.2))  # Dropout overfitting
model.add(fv(42, activation='tanh'))
model.add(tf.keras.layers.Dropout(0.15))  # Dropout overfitting
model.add(tf.keras.layers.Dense(1000,activation='softsign'))
#model.add(tf.keras.layers.Activation("softsign"))

start = time.time()
# sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
# model.compile(loss="mse", optimizer=sgd)
model.compile(loss="mse", optimizer="Adam") 
inp = np.array(train)
oup = np.array(labels)
X_tr = inp[:batch_size].reshape(-1, batch_size, inputs)
model.fit(X_tr,labels,epochs=20, batch_size=batch_size)

However, I get the following error:

ValueError: Error when checking target: expected dense to have shape (1000,) but got array with shape (1,)

Here, you have specified an input shape with 1000 time steps:

model.add(fv(units=42, activation='tanh', input_shape= (1000,42),return_sequences=True)) #i want to send Batches to train

However, the shape of your training data (X_tr) does not match this. Check your X_tr variable and set the same dimensions for the input layer.
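
For example, here is a minimal sketch of checking the data shape and reshaping it to the 3-D layout (samples, timesteps, features) that a GRU layer expects; the random array below is only a stand-in for the `train` data described in the question's comments, assuming one time step per sample:

import numpy as np

# Stand-in data with the shape given in the question's comments: (125973, 42)
train = np.random.rand(125973, 42).astype("float32")

inp = np.array(train)
print(inp.shape)              # (125973, 42) -> 2-D, but a GRU layer expects 3-D input

# Reshape to (samples, timesteps, features): here, one time step per sample
X_tr = inp.reshape(-1, 1, 42)
print(X_tr.shape)             # (125973, 1, 42) -> matches input_shape=(1, 42)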

If you read the error carefully, you will realize that there is a shape mismatch between the labels you provide, of shape (None, 1), and the model's output, of shape (None, 1000):

ValueError: Error when checking target:  <--- the mismatch is in the target (output) shapes
expected dense to have shape (1000,)     <--- output shape of the model
but got array with shape (1,)            <--- shape of the labels you pass when training

So you need to make them consistent. You only need to change the number of units in the last layer to 1, since there is one output per input sample:

model.add(tf.keras.layers.Dense(1, activation='softsign')) # 1 unit in the output
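
Putting both points together, here is a minimal runnable sketch using the plain `tf.keras` API instead of the deprecated `tf.contrib.keras`; the layer sizes follow the question's code, the random arrays are stand-ins for `train` and `labels`, and one time step per sample is assumed:

import numpy as np
import tensorflow as tf

inputs = 42        # number of input features
batch_size = 1000

# Stand-in data with the shapes given in the question
train = np.random.rand(125973, inputs).astype("float32")
labels = np.random.rand(125973, 1).astype("float32")

# One time step per sample: (samples, timesteps, features)
X_tr = train.reshape(-1, 1, inputs)

model = tf.keras.models.Sequential([
    tf.keras.layers.GRU(42, activation='tanh',
                        input_shape=(1, inputs), return_sequences=True),
    tf.keras.layers.GRU(42, activation='tanh'),
    tf.keras.layers.Dropout(0.15),
    tf.keras.layers.Dense(1, activation='softsign'),  # 1 unit: one output per sample
])

model.compile(loss="mse", optimizer="adam")
model.summary()   # the final output shape is now (None, 1)

model.fit(X_tr, labels, epochs=20, batch_size=batch_size)

With a single unit in the final Dense layer, the model's output shape becomes (None, 1), which lines up with the (125973, 1) labels array and resolves the ValueError.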
