The predict method gives an error for the model I created



I built a model for an NLP problem and am trying to use it to make predictions. It gives the same error for both the LSTM and the RNN versions:

ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1621, in predict_function  *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1611, in step_function  **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1604, in run_step  **
outputs = model.predict_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1572, in predict_step
return self(x, training=False)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 213, in assert_input_compatibility
raise ValueError(f'Input {input_index} of layer "{layer_name}" '
ValueError: Exception encountered when calling layer "sequential_33" (type Sequential).

Input 0 of layer "gru_8" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 300)

Call arguments received:
• inputs=tf.Tensor(shape=(None,), dtype=int32)
• training=False
• mask=None

The training phase completed successfully, and I don't know what I did wrong. I can get the model weights, but I can't make a prediction. The code is below.

from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow.keras.callbacks import ModelCheckpoint

model = Sequential()
model.add(layers.Embedding(vocab_size, embedding_dim, input_length=maxlen))
model.add(layers.GRU(64, return_sequences=True))
model.add(layers.GlobalMaxPool1D())
model.add(layers.Dropout(0.4))
model.add(layers.Dense(8, activation='relu'))
model.add(layers.Dropout(0.4))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dropout(0.4))
model.add(layers.Dense(3, activation='softmax'))

model_path = "sentiment labelled sentences/generic sentiment models/w4/model{epoch:04d}.hdf5"
check = ModelCheckpoint(model_path, monitor='val_loss', verbose=0, save_best_only=False,
                        save_weights_only=False, mode='auto', save_freq='epoch')  # save the model at the end of every epoch
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.get_weights()
history = model.fit(X_train, y_train,
                    epochs=15,
                    validation_data=(X_test, y_test),
                    batch_size=150, callbacks=[check])
model.predict(X_test[0])

First, if you are passing only a single sample, you should specify the batch_size.

Second, you should reshape your sample to match the shape of the model's input. Here, X_test[0] has shape (maxlen,), so Keras treats each token id as a separate sample: the Embedding layer then emits a 2-D tensor of shape (None, embedding_dim), while the GRU expects a 3-D (batch, timesteps, features) input, which is exactly what the error message reports. The same idea applies to images: if you want to pass a single image of size (224, 224, 3), you can reshape it before passing it to the model, like this:

my_image = cv2.resize(img, image_size)  # resize to the model's expected height and width
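
For the text model in this question, the equivalent fix is to keep (or add) the batch dimension on the single sample before calling predict. A minimal sketch, assuming X_test is a NumPy array of padded sequences with shape (num_samples, maxlen):

import numpy as np

# Option 1: slice so the batch dimension is preserved -> shape (1, maxlen)
pred = model.predict(X_test[0:1])

# Option 2: explicitly add a batch dimension to the single sample
sample = np.expand_dims(X_test[0], axis=0)  # (maxlen,) -> (1, maxlen)
pred = model.predict(sample)

print(pred.shape)  # (1, 3): softmax probabilities for the 3 classes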

To answer my own question: I specified the batch_size in the predict method, and it works now.
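
For reference, a minimal sketch of that fix; the batch_size values here are illustrative, and X_test is assumed to be the padded (num_samples, maxlen) array from the question:

# predict over the whole test set with an explicit batch size
preds = model.predict(X_test, batch_size=150)

# for a single sample, keep the batch dimension when slicing
pred = model.predict(X_test[0:1], batch_size=1)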
