model.fit gives ValueError: Error when checking input: expected conv2d input to have 4 dimensions, but got array with shape (512, 512, 3)



Hi everyone, I'm getting a ValueError when training my model with model.fit(). I've tried many ways to fix it, but none worked. Please take a look. Note that I do resize all images to (512, 512).

................
................
................
def resizing(image, label):
    image = tf.image.resize(image, (512, 512)) / 255.0
    return image, label

mapped_training_set = train_set.map(resizing)
mapped_testing_set = test_set.map(resizing)
mapped_valid_set = valid_set.map(resizing)
tf.keras.layers.Conv2D(32, (3, 3), input_shape=(512, 512, 3), activation="relu"),
tf.keras.layers.MaxPooling2D((2, 2)),
.........
.........
.........
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation="relu"),
tf.keras.layers.Dense(101, activation="softmax")

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

hist = model.fit(mapped_training_set,
                 epochs=10,
                 validation_data=mapped_valid_set,
                 )

**I get this error:**

<ipython-input-31-1d134652773c> in <module>()
1 hist = model.fit(mapped_training_set,
2                  epochs=10,
----> 3                  validation_data=mapped_valid_set,
4                  )
16 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
235       except Exception as e:  # pylint:disable=broad-except
236         if hasattr(e, 'ag_error_metadata'):
--> 237           raise e.ag_error_metadata.to_exception(e)
238         else:
239           raise
ValueError: in converted code:
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py:677 map_fn
batch_size=None)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py:2410 _standardize_tensors
exception_prefix='input')
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_utils.py:573 standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking input: expected conv2d_32_input to have 4 dimensions, but got array with shape (512, 512, 3)

I've been searching for a fix for over two hours now and haven't found an answer.

All the results and solutions I found are unrelated to my problem.

Please help me, I'm stuck here.

Thanks in advance.

You need to feed the model input with shape (batch_size, height, width, channels); that's why it says it expects 4 dimensions. Instead, you are passing it a single image of shape (512, 512, 3).

If you want to train your model on single images, you should reshape each image with image = tf.expand_dims(image, axis=0), which adds the missing batch axis. This can be done inside the resizing function.
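A minimal sketch of that first option, with the batch axis added inside the resizing function (the input shape (300, 200, 3) below is just an arbitrary example):

```python
import tensorflow as tf

def resizing(image, label):
    # Resize to the model's expected spatial size and scale to [0, 1]
    image = tf.image.resize(image, (512, 512)) / 255.0
    # Add a leading batch axis: (512, 512, 3) -> (1, 512, 512, 3)
    image = tf.expand_dims(image, axis=0)
    return image, label

out_image, out_label = resizing(tf.zeros((300, 200, 3)), 7)
print(out_image.shape)  # (1, 512, 512, 3)
```

Each element of the mapped dataset then already has the 4 dimensions Conv2D expects, though every "batch" contains just one image.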

If you want to train your model in batches, you should add mapped_training_set = mapped_training_set.batch(batch_size) after the map call. The same goes for the other two datasets.
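A sketch of the batching option. The tiny in-memory dataset of four black images is a hypothetical stand-in for your train_set, and batch_size=32 is an assumed value:

```python
import tensorflow as tf

batch_size = 32  # assumed value; pick what fits your memory

# Hypothetical stand-in for train_set: 4 images with integer labels
train_set = tf.data.Dataset.from_tensor_slices(
    (tf.zeros((4, 600, 600, 3)), tf.zeros((4,), dtype=tf.int64)))

def resizing(image, label):
    image = tf.image.resize(image, (512, 512)) / 255.0
    return image, label

# batch() after map() turns (512, 512, 3) elements into
# (batch, 512, 512, 3) elements, the 4-D input Conv2D expects
mapped_training_set = train_set.map(resizing).batch(batch_size)

for images, labels in mapped_training_set.take(1):
    print(images.shape)  # only 4 samples here, so a partial batch of 4
```

With only 4 samples and batch_size=32, the single batch has shape (4, 512, 512, 3); on a real dataset most batches would have the full 32 in the first axis.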
