I need replicate runs of the h2o.gbm function with the same hyperparameters to produce different results. I have built a loop that runs each configuration twice and extracts the results of each h2o-gbm model with the h2o.performance function, and I just realized that both runs of every pair return exactly the same results.
Do you have any suggestions on how running two h2o.gbm models with identical hyperparameters could produce different results?
What I have tried so far:
- shutting down and re-initializing the cluster with h2o.shutdown and h2o.init, using different nthreads values
- changing and removing the seed argument of h2o.gbm
- removing the score_tree_interval and stopping_rounds arguments
All of these attempts failed: two runs with the same hyperparameters still return exactly the same results. For reference, here is an example hyperparameter configuration that I would like to run twice and get different results from:
h2o.gbm(x = x_col_names, y = y,
        training_frame = train_h2o,
        fold_column = "index_4seasons",
        ntrees = 1000,
        max_depth = 5,
        learn_rate = 0.1,
        stopping_rounds = 5,
        score_tree_interval = 10,
        seed = 1)
Any help or comments would be greatly appreciated.
Changing the seed value will slightly change the results. See the demo below, built on the example from the documentation, which shows how the MSE changes when only the seed is different.
# Start H2O and import the prostate dataset:
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()
train_h2o = h2o.import_file("http://s3.amazonaws.com/h2o-public-test-data/smalldata/prostate/prostate.csv")

# Set the predictors and response; set the factors:
train_h2o["CAPSULE"] = train_h2o["CAPSULE"].asfactor()
x_col_names = ["ID", "AGE", "RACE", "DPROS", "DCAPS", "PSA", "VOL", "GLEASON"]
y = "CAPSULE"

# Build and train the first model:
pros_gbm1 = H2OGradientBoostingEstimator(
    nfolds = 5, ntrees = 1000, max_depth = 5, learn_rate = 0.1,
    stopping_rounds = 5, score_tree_interval = 10, seed = 1)
pros_gbm1.train(x = x_col_names, y = y, training_frame = train_h2o)

# Build and train the second model with only the seed changed:
pros_gbm2 = H2OGradientBoostingEstimator(
    nfolds = 5, ntrees = 1000, max_depth = 5, learn_rate = 0.1,
    stopping_rounds = 5, score_tree_interval = 10, seed = 123456789)
pros_gbm2.train(x = x_col_names, y = y, training_frame = train_h2o)

print('Model 1 MSE:', pros_gbm1.mse())
print('Model 2 MSE:', pros_gbm2.mse())
Output:
Model 1 MSE: 0.020725291770552916
Model 2 MSE: 0.02189654172905499
If your dataset gives reproducible results across different seeds and hardware settings, it may simply not be large or complex enough for the model to behave stochastically. You could also try changing the fold assignments in your fold_column and see whether that makes a difference; a minimal sketch follows.
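Continuing the Python demo above, here is a minimal sketch of that idea. It assumes train_h2o, x_col_names and y from the demo; the column names "fold_a" and "fold_b" are made up for illustration, and kfold_column is H2O's helper for building a fold-assignment column, so different seeds there give different cross-validation splits even when the model hyperparameters are identical.
# Sketch (assumption): two fold-assignment columns built with different seeds,
# then one model trained on each via fold_column instead of nfolds.
train_h2o["fold_a"] = train_h2o.kfold_column(n_folds = 5, seed = 1)
train_h2o["fold_b"] = train_h2o.kfold_column(n_folds = 5, seed = 999)

pros_gbm3 = H2OGradientBoostingEstimator(
    ntrees = 1000, max_depth = 5, learn_rate = 0.1,
    stopping_rounds = 5, score_tree_interval = 10, seed = 1)
pros_gbm3.train(x = x_col_names, y = y, training_frame = train_h2o,
                fold_column = "fold_a")

pros_gbm4 = H2OGradientBoostingEstimator(
    ntrees = 1000, max_depth = 5, learn_rate = 0.1,
    stopping_rounds = 5, score_tree_interval = 10, seed = 1)
pros_gbm4.train(x = x_col_names, y = y, training_frame = train_h2o,
                fold_column = "fold_b")

# Different fold assignments generally change the cross-validation metrics
# even though the model hyperparameters and seed are the same.
print('Model 3 MSE:', pros_gbm3.mse())
print('Model 4 MSE:', pros_gbm4.mse())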