GridSearchCV: y should be a 1d array, got an array of shape (54000, 10)



I have been trying to run a grid search over an MLP on the MNIST dataset. Since the MNIST labels range from 0 to 9 and I have 10 output neurons, I use one-hot encoding. But as soon as I run the grid search, I get the following error: y should be a 1d array, got an array of shape (54000, 10) instead. When I instead pass the labels as y.argmax(axis=1), I get a different error: ValueError: Shapes (10, 1) and (10, 10) are incompatible. How can I get past this?
I should also mention that I am new to data science and neural networks. Thanks in advance.

import tensorflow as tf
from sklearn.experimental import enable_halving_search_cv
from sklearn.model_selection import HalvingGridSearchCV
from tensorflow.keras import Sequential
from tensorflow.keras import optimizers
from tensorflow.keras.layers import Dense
from scikeras.wrappers import KerasClassifier
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
X_train = X_train.reshape((X_train.shape[0], 28 * 28)) / 255.0
Y_train = to_categorical(Y_train)

def create_MLP(hlayer_count=1, hlayer_1_size=64, hlayer_2_size=64,
               activation="relu", optimizer="adam", learning_rate=0.01):
    print(hlayer_count)
    model = Sequential()
    # first hidden layer
    model.add(Dense(hlayer_1_size, activation=activation, input_shape=(28 * 28,)))
    if hlayer_count == 2:
        # second hidden layer
        model.add(Dense(hlayer_2_size, activation=activation))
    # output layer
    model.add(Dense(10, activation="softmax"))
    # compile model
    metrics = ["accuracy"]
    if optimizer == "adam":
        o = optimizers.Adam(learning_rate=learning_rate)
    elif optimizer == "sgd":
        o = optimizers.SGD(learning_rate=learning_rate, momentum=0.9)
    else:
        o = optimizers.RMSprop(learning_rate=learning_rate)
    model.compile(loss="categorical_crossentropy",
                  optimizer=o,
                  metrics=metrics)
    return model

grid_model = KerasClassifier(model=create_MLP, epochs=30, verbose=4)
batch_size = [10, 20, 40, 80]
hlayer_size = [32, 64, 96, 128]
activation = ["relu", "tanh", "sigmoid"]
optimizer = ["adam", "sgd", "rmsprop"]
learning_rate = [0.001, 0.01, 0.05, 0.1]
dict1 = dict(batch_size=batch_size,
             model__hlayer_count=[1],
             model__hlayer_1_size=hlayer_size,
             model__activation=activation,
             model__optimizer=optimizer,
             model__learning_rate=learning_rate)
dict2 = dict(batch_size=batch_size,
             model__hlayer_count=[2],
             model__hlayer_1_size=hlayer_size,
             model__hlayer_2_size=hlayer_size,
             model__activation=activation,
             model__optimizer=optimizer,
             model__learning_rate=learning_rate)
grid = HalvingGridSearchCV(estimator=grid_model, param_grid=[dict1, dict2], n_jobs=-1, cv=3, error_score="raise")
grid_results = grid.fit(X_train, Y_train)

Using tf.keras.losses.SparseCategoricalCrossentropy() as the loss function avoids this error.

The error seems to occur because the TensorFlow model, compiled with the categorical crossentropy loss, expects a one-hot-encoded target, while scikit-learn's GridSearchCV expects the target as a 1-D array of class labels.
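To make the two conventions concrete, here is a small NumPy sketch (a hypothetical 5-sample batch standing in for the 54000 MNIST rows; the names y_1d and y_onehot are illustrative, not from the question):

```python
import numpy as np

# Toy batch of 5 labels drawn from classes 0-9
y_1d = np.array([3, 0, 9, 5, 1])   # shape (5,)   -- what GridSearchCV expects
y_onehot = np.eye(10)[y_1d]        # shape (5, 10) -- what categorical_crossentropy expects

print(y_1d.shape)      # (5,)
print(y_onehot.shape)  # (5, 10)

# argmax along axis 1 recovers the 1-D labels from the one-hot matrix
assert (y_onehot.argmax(axis=1) == y_1d).all()
```

Whichever shape you pick, the loss has to match it: one-hot targets go with categorical_crossentropy, 1-D integer targets go with sparse categorical crossentropy. Mixing them produces exactly the two errors in the question.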

X_train = X_train.reshape((X_train.shape[0], 28 * 28)) / 255.0
# Y_train = to_categorical(Y_train)
# Keep Y_train as a 1-D array containing the class labels,
# and compile the model with sparse categorical crossentropy
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              optimizer=o,
              metrics=metrics)

Here is the TensorFlow tutorial that covers the sparse categorical crossentropy loss:

https://www.tensorflow.org/guide/keras/train_and_evaluate
