How do I hypertune the input shape using the Keras Tuner?

I am trying to hypertune the input shape of an LSTM model over different values of timesteps, but I am running into a problem. When the model is initialized, the default value of timesteps (2) is chosen, so build_model.scaled_train is created with shape (4096, 2, 64), and input_shape at initialization is (2, 64). When training starts and, say, timesteps = 16 is sampled, build_model.scaled_train has shape (512, 16, 64), which means input_shape should now be (16, 64). However, this is not reflected in the model: the InputLayer keeps the (2, 64) shape it got at initialization. Hence the error: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 16, 64), found shape=(None, 2, 64).
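To make the mismatch concrete, here is a minimal sketch with NumPy stand-ins for the two shapes quoted above (the real pipeline is get_data_in_shape + train_test_splitting):

import numpy as np

# Created when build_model first runs with the default timesteps = 2:
train_at_init = np.zeros((4096, 2, 64))
# Created when the trial re-runs build_model with timesteps = 16:
train_at_trial = np.zeros((512, 16, 64))

# One side of model.fit still carries the initialization shape while the
# other uses the trial's sampled shape, so Keras compares (None, 16, 64)
# against (None, 2, 64) and raises the ValueError quoted below.
print(train_at_init.shape[1:])   # (2, 64)
print(train_at_trial.shape[1:])  # (16, 64)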

def build_model(hp):
    timesteps = hp.Choice('timesteps', [2, 4, 8, 16], ordered = False)
    ....
    DFS, UFS = get_data_in_shape(DF, UF, timesteps)
    # Data is stashed on the function object so tuner.search can reach it
    build_model.scaled_train, build_model.train_label = train_test_splitting(DFS, UFS)
    model = keras.Sequential()
    model.add(InputLayer(input_shape = (timesteps, nosamples)))
    ...
    ...
    return model

class MyTuner(BayesianOptimization):
    def run_trial(self, trial, *args, **kwargs):
        # Tune batch_size inside run_trial
        kwargs['batch_size'] = trial.hyperparameters.Choice('batch_size', [32, 64, 128, 256])
        return super(MyTuner, self).run_trial(trial, *args, **kwargs)

tuner = MyTuner(
    build_model,
    objective = 'val_loss',
    max_trials = 20,
    overwrite = True,
    directory = '/content/drive/MyDrive/Colab Notebooks',
    project_name = 'bo4')

This is what happens when I start the hyperparameter search:

tuner.search(build_model.scaled_train, build_model.train_label, validation_split = 0.2, epochs = 100, callbacks = [early_stopping])

The error:

Search: Running Trial #1
Value             |Best Value So Far |Hyperparameter
16                |?                 |timesteps
4                 |?                 |layers
1024              |?                 |unitsLSTM
0.15              |?                 |rate
64                |?                 |unitsANN
0.001             |?                 |learning_rate
Epoch 1/100
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-76-10e3851dd45f> in <module>()
----> 1 tuner.search(build_model.scaled_train, build_model.train_label, validation_split = 0.2, epochs = 100, callbacks = [early_stopping]) #,  model_checkpoint
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145           except Exception as e:  # pylint:disable=broad-except
1146             if hasattr(e, "ag_error_metadata"):
-> 1147               raise e.ag_error_metadata.to_exception(e)
1148             else:
1149               raise
ValueError: in user code:
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function  *
        return step_function(self, iterator)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step  **
        outputs = model.train_step(data)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step
        y_pred = self(x, training=True)
    File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
        raise e.with_traceback(filtered_tb) from None
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 264, in assert_input_compatibility
        raise ValueError(f'Input {input_index} of layer "{layer_name}" is '

    ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 16, 64), found shape=(None, 2, 64)

I think I have made a logical mistake somewhere but I cannot find it. Please help.

Latest update: I made some changes, as shown below, and it works fine. But I don't know whether this is the optimal solution.

def build_model(hp):
    ...
    ...
    scaled_train, train_label = train_test_splitting(DFS, UFS)
    ...
    ...
    return model, scaled_train, train_label

class MyTuner(BayesianOptimization):
    def run_trial(self, trial, *args, **kwargs):
        hp = trial.hyperparameters
        # Build model AND data from the same hp so their shapes always agree
        model, scaled_train, train_label = self.hypermodel.build(hp)
        kwargs['batch_size'] = hp.Choice('batch_size', [32, 64, 128, 256])
        return self.hypermodel.fit(hp, model, scaled_train, train_label, *args, **kwargs)
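For reference, here is a self-contained sketch of that pattern. It assumes keras-tuner >= 1.1, where an overridden run_trial may return the result of model.fit, and it substitutes random placeholder data for my real get_data_in_shape / train_test_splitting pipeline; the layer sizes are placeholders too. Note that tuner.search() is then called without x and y, because each trial produces its own data.

import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import InputLayer, LSTM, Dense
from keras_tuner import BayesianOptimization

def build_model(hp):
    timesteps = hp.Choice('timesteps', [2, 4, 8, 16], ordered=False)
    # Placeholder data: window length tracks this trial's timesteps,
    # reproducing the shapes from the question (4096 windows for
    # timesteps = 2, 512 windows for timesteps = 16).
    nosamples = 64
    n_windows = 8192 // timesteps
    scaled_train = np.random.rand(n_windows, timesteps, nosamples)
    train_label = np.random.rand(n_windows, 1)

    model = keras.Sequential([
        InputLayer(input_shape=(timesteps, nosamples)),
        LSTM(hp.Choice('unitsLSTM', [256, 512, 1024])),
        Dense(hp.Choice('unitsANN', [32, 64]), activation='relu'),
        Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')
    # Returning the data alongside the model ties its shape to this trial's hp
    return model, scaled_train, train_label

class MyTuner(BayesianOptimization):
    def run_trial(self, trial, *args, **kwargs):
        hp = trial.hyperparameters
        model, scaled_train, train_label = self.hypermodel.build(hp)
        kwargs['batch_size'] = hp.Choice('batch_size', [32, 64, 128, 256])
        # The default HyperModel.fit(hp, model, *args, **kwargs) calls
        # model.fit and returns its History, which the oracle can consume
        return self.hypermodel.fit(hp, model, scaled_train, train_label,
                                   *args, **kwargs)

tuner = MyTuner(build_model, objective='val_loss', max_trials=20, overwrite=True)
# No x/y arguments here: the data is created inside each trial
tuner.search(validation_split=0.2, epochs=10)

The key point of this design is that anything whose shape depends on a hyperparameter (here, the windowed training data) must be produced inside the trial rather than captured once when tuner.search is called.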
