I was working on a small project on Colab using mnist. When I got this annoying error, I decided to switch datasets. I also made an earlier post about this, but the problem never got solved. Please tell me where I'm going wrong.
My code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
data = keras.datasets.cifar10
df = data.load_data()
(X_train, y_train),(X_test, y_test) = df
X_train_flat = X_train.reshape(-1,32*32)
X_test_flat = X_test.reshape(-1, 32*32)
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
model = Sequential([
#first hidden layer
Dense(units=10,
input_shape = (1024,),
activation='sigmoid'),
#second hidden layer
#Dense(units=10,
# input_shape = (784,),
# activation='sigmoid')
])
model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
model.fit(X_test_flat, y_train, epochs=15) # I'm getting the error on this line
y_pred = model.predict(X_test_flat)
model.evaluate(X_test_flat, y_test)
The error is:
ValueError Traceback (most recent call last)
<ipython-input-13-5508f48e747b> in <module>
----> 1 model.fit(X_test_flat, y_train, epochs=15)
1 frames
/usr/local/lib/python3.7/dist-packages/keras/engine/data_adapter.py in _check_data_cardinality(data)
1653 for i in tf.nest.flatten(single_data)))
1654 msg += "Make sure all arrays contain the same number of samples."
-> 1655 raise ValueError(msg)
1656
1657
ValueError: Data cardinality is ambiguous:
x sizes: 30000
y sizes: 50000
Make sure all arrays contain the same number of samples.
Apologies for the formatting issues. I'm not at my computer right now.
The reshaping of the dataset is what causes the error. Instead of reshaping manually, you can use a Flatten layer in the model definition.
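To see why the cardinality check fails: a CIFAR-10 image is 32x32x3 (MNIST is 28x28 grayscale), so reshaping to `(-1, 32*32)` drops the channel dimension and turns each image into three rows. The test set's 10000 images become 30000 rows, while `y_train` has 50000 labels. (Note that `model.fit` is also being called with `X_test_flat` instead of the training images.) A quick sketch with dummy arrays, shapes only:

```python
import numpy as np

# Same shape as the CIFAR-10 test images: 10000 samples of 32x32x3.
x_test = np.zeros((10000, 32, 32, 3))

# The reshape from the question ignores the 3 color channels, so
# every image is split into 3 rows of 1024 values.
flat = x_test.reshape(-1, 32 * 32)
print(flat.shape)  # (30000, 1024) -> the "x sizes: 30000" in the error

# Including the channels (32*32*3 = 3072) keeps one row per image.
full = x_test.reshape(-1, 32 * 32 * 3)
print(full.shape)  # (10000, 3072)
```

So even before the train/test mix-up, the flattened array no longer has one row per label.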
You can simply remove these lines:
X_train_flat = X_train.reshape(-1,32*32)
X_test_flat = X_test.reshape(-1, 32*32)
and instead define a Flatten layer with the dataset's input shape in the model definition, like this (note that Flatten also needs to be imported):
from tensorflow.keras.layers import Flatten

model = Sequential([Flatten(input_shape=(32, 32, 3)),
                    Dense(64, activation='relu'),
                    Dense(128, activation='relu'),
                    Dense(10, activation='softmax'),
])
Now use X_train instead of X_test_flat for training, and X_test instead of X_test_flat for evaluation and prediction:
model.fit(X_train, y_train, epochs=15)
y_pred = model.predict(X_test)
model.evaluate(X_test, y_test)
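Putting the pieces together, here is a minimal sanity check of the corrected model. It is just a sketch: it runs a dummy batch through the model to confirm the shapes line up, without real training data.

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Flatten, Dense

model = Sequential([Flatten(input_shape=(32, 32, 3)),
                    Dense(64, activation='relu'),
                    Dense(128, activation='relu'),
                    Dense(10, activation='softmax')])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# One dummy batch of 4 "images" confirms the model accepts raw
# (32, 32, 3) inputs and outputs one 10-class probability vector each.
x = np.zeros((4, 32, 32, 3), dtype='float32')
print(model.predict(x).shape)  # (4, 10)
```

With the Flatten layer inside the model, the raw (50000, 32, 32, 3) X_train array can be passed to fit directly, and its 50000 samples match the 50000 labels in y_train.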
See here for a similar classification model.