I am trying to use GridSearchCV to find the best parameters for a CNN, but the code does not work when I try to search for the best combination of learning rate and batch size (it does work if I use epochs instead of the learning rate). Any idea why it is not working?
# NEURAL NETWORK
# ======================================================================================================================
# region Now we build and train the CNN=================================================================================
vgg16_model = keras.applications.vgg16.VGG16() # We import the VGG16 model to copy its structure
def create_model():
    model = Sequential() # We create our model with Sequential
    for layer in vgg16_model.layers: # For each layer of VGG16 we add the same layer to our model
        model.add(layer)
    model.layers.pop() # We remove the last layer to change it to what we need
    for layers in model.layers: # We make the layers coming from VGG16 not trainable
        layers.trainable = False
    model.add(Dense(2, activation='softmax')) # We add the last layer to have only 2 outputs: Cracked, Uncracked
    opt = Adam(lr=lrn_rate)
    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    return model
model = KerasClassifier(build_fn=create_model,epochs=2, verbose=0)
# ====================================================================================
# define the grid search parameters
batch_size = [16, 32]
# epochs = [2,5]
lr=[0.1,0.2]
param_grid = dict(batch_size=batch_size, learn_rate=lr)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, cv=3)
X, Y = train_batches.next() # Batch of images to be analyzed
#grid_result = grid.fit_generator(train_imgs, train_labels )
grid_result = grid.fit(X,Y)
The error I get is "ValueError: learn_rate is not a legal parameter". I did this the same way as an example I found, and it works for epochs but not for the learning rate.
Try: def create_model(learn_rate):
The other answer is on the right track but comes without an explanation. You need to accept the learning rate as an argument of create_model(), and the argument name must match the key you use in param_grid (learn_rate in your code), because KerasClassifier forwards each grid key to build_fn by name. So your fixed function looks like this:

def create_model(learn_rate):
    model = Sequential() # We create our model with Sequential
    for layer in vgg16_model.layers: # For each layer of VGG16 we add the same layer to our model
        model.add(layer)
    model.layers.pop() # We remove the last layer to change it to what we need
    for layers in model.layers: # We make the layers coming from VGG16 not trainable
        layers.trainable = False
    model.add(Dense(2, activation='softmax')) # We add the last layer to have only 2 outputs: Cracked, Uncracked
    opt = Adam(lr=learn_rate)
    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    return model
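The naming rule can be demonstrated without Keras at all. Below is a minimal, hypothetical sketch using a toy scikit-learn estimator: a grid key that matches an `__init__` parameter is accepted, while a mismatched key (`lrn_rate`) is rejected, which is the same class of error as "learn_rate is not a legal parameter".

```python
# Hypothetical, Keras-free sketch of the naming rule: GridSearchCV only
# accepts keys that match the estimator's parameters; for KerasClassifier
# those come from the build_fn signature. ToyClassifier is made up here.
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.model_selection import GridSearchCV

class ToyClassifier(BaseEstimator, ClassifierMixin):
    def __init__(self, learn_rate=0.1):  # the parameter name IS the grid key
        self.learn_rate = learn_rate

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        return self

    def predict(self, X):
        return np.zeros(len(X), dtype=int)  # dummy prediction

X, y = np.random.rand(12, 3), np.array([0, 1] * 6)

# Matching key: the search runs.
ok = GridSearchCV(ToyClassifier(), {"learn_rate": [0.1, 0.2]}, cv=3).fit(X, y)

# Mismatched key: rejected before any training happens.
bad_key_rejected = False
try:
    GridSearchCV(ToyClassifier(), {"lrn_rate": [0.1, 0.2]}, cv=3).fit(X, y)
except ValueError:
    bad_key_rejected = True  # analogue of "learn_rate is not a legal parameter"
print(bad_key_rejected)  # True
```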
That said, this is quite restrictive. Are you sure you will never want to test other optimizers? If you do, something like this might come in handy:
def create_model(learn_rate, optimizer="Adam"):
    model = Sequential() # We create our model with Sequential
    for layer in vgg16_model.layers: # For each layer of VGG16 we add the same layer to our model
        model.add(layer)
    model.layers.pop() # We remove the last layer to change it to what we need
    for layers in model.layers: # We make the layers coming from VGG16 not trainable
        layers.trainable = False
    model.add(Dense(2, activation='softmax')) # We add the last layer to have only 2 outputs: Cracked, Uncracked
    if optimizer == "Adam":
        opt = Adam(lr=learn_rate)
    elif optimizer == "SGD":
        opt = keras.optimizers.SGD(learning_rate=learn_rate)
    else:
        opt = keras.optimizers.Adadelta(learning_rate=learn_rate) # need some default
    # just an example of optimizers you might want to try
    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    return model
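The combined search over learning rate and optimizer name can then be sketched the same way; again a hypothetical toy estimator stands in for KerasClassifier, just to show that string-valued parameters grid-search fine as long as the names line up.

```python
# Hypothetical, Keras-free sketch: string-valued parameters such as an
# optimizer name can be grid-searched like any other, as long as each key
# matches a constructor argument. ToyClassifier and the data are made up.
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.model_selection import GridSearchCV

class ToyClassifier(BaseEstimator, ClassifierMixin):
    def __init__(self, learn_rate=0.1, optimizer="Adam"):
        self.learn_rate = learn_rate
        self.optimizer = optimizer

    def fit(self, X, y):
        # A real model would build and train here, dispatching on self.optimizer.
        self.classes_ = np.unique(y)
        return self

    def predict(self, X):
        return np.zeros(len(X), dtype=int)  # dummy prediction

param_grid = {"learn_rate": [0.1, 0.2], "optimizer": ["Adam", "SGD"]}
grid = GridSearchCV(ToyClassifier(), param_grid, cv=3)
grid.fit(np.random.rand(12, 3), np.array([0, 1] * 6))
print(sorted(grid.best_params_))  # ['learn_rate', 'optimizer']
```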