predict_classes() Keras model ValueError: layer sequential_1



Okay, I'm new here. I'm trying to classify traffic sign images, and I'm basically following a notebook from Kaggle. After building the Keras model:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu', input_shape=X_train.shape[1:]))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.25))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(43, activation='softmax'))
# Compilation of the model
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)

Now, for the testing part after fitting the model, the code I found is as follows:

import numpy as np
import pandas as pd
import cv2
from PIL import Image

y_test = pd.read_csv("C:/Users/Louay/input/test.csv")
labels = y_test['Path'].to_numpy()
y_test = y_test['ClassId'].values
data = []
for f in labels:
    # read each test image, convert it to a PIL image and resize it to the model's input size
    image = cv2.imread('C:/Users/Louay/input/Test/' + f.replace('Test/', ''))
    image_from_array = Image.fromarray(image, 'RGB')
    size_image = image_from_array.resize((height, width))
    data.append(np.array(size_image))
X_test = np.array(data)
X_test = X_test.astype('float32') / 255
pred = model.predict_classes(X_test)

This works and correctly predicts the classes of all images in the test set. My problem is when I try to predict a single image from the test set. I figured I would just repeat the same image-processing steps and then call predict_classes(), so my code looks like this:

image=cv2.imread('C:/Users/Louay/input/Test/00000.png')
image_from_array = Image.fromarray(image, 'RGB')
size_image = image_from_array.resize((height, width))
test=np.array(size_image)
pred = model.predict_classes(test.astype('float32')/255)

Since I'm only processing one image, I thought I wouldn't need the data[] list that I appended all the processed images to. But when I run the code, I get this error:

ValueError: Input 0 of layer sequential_1 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, None, 3]

I know I did something wrong, but before correcting my code I really want to understand why I get this error. What causes it, and what exactly is happening?

Conv2D expects a 4+D tensor of shape batch_shape + (channels, rows, cols) if data_format='channels_first', or batch_shape + (rows, cols, channels) if data_format='channels_last'. Your single image only has shape (rows, cols, channels), i.e. the batch dimension is missing, which is why Keras reports ndim=3 when it expects min_ndim=4.
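To see the mismatch concretely, compare the dimensions of a batch of images with a single image (a small sketch; the 32x32 size is only an assumed example, the real height/width come from your preprocessing):

import numpy as np

single = np.zeros((32, 32, 3), dtype='float32')      # one resized RGB image: ndim=3
batch = np.zeros((100, 32, 32, 3), dtype='float32')  # 100 stacked images: ndim=4

print(single.ndim)  # 3 -> triggers "expected min_ndim=4, found ndim=3"
print(batch.ndim)   # 4 -> (batch, rows, cols, channels), which Conv2D accepts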

Use tf.expand_dims to add the extra (batch) dimension to the input.

Working sample code:

import tensorflow as tf

image = tf.zeros([10, 10, 3])
input_shape = tf.expand_dims(image, axis=0).shape.as_list()  # [1, 10, 10, 3]
x = tf.random.normal(input_shape)
y = tf.keras.layers.Conv2D(
    2, 3, activation='relu', input_shape=input_shape[1:])(x)
print(y.shape)

Output:
(1, 8, 8, 2)
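Applied to the single-image code from the question, the same idea would look roughly like this (a sketch assuming model, height and width are defined as above; np.expand_dims plays the same role as tf.expand_dims here):

import numpy as np
import cv2
from PIL import Image

image = cv2.imread('C:/Users/Louay/input/Test/00000.png')
image_from_array = Image.fromarray(image, 'RGB')
size_image = image_from_array.resize((height, width))
test = np.array(size_image).astype('float32') / 255

# add the missing batch dimension: (height, width, 3) -> (1, height, width, 3)
test = np.expand_dims(test, axis=0)
pred = model.predict_classes(test)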
