When using mlr and parallelMap, and parallelizing at the mlr.tuneParams level, is it possible to see the results of the individual tuning rounds?
When I tune serially, I see the result of each hyperparameter combination (hyperparameters, measure) in the console at the end of each CV. So if I kill the job before the tuneParams result is saved, I still have partial results.
When I tune in parallel, I don't see how to get at these intermediate results if the job is killed. Is it possible to create a log file that shows the results?
Thanks!
This is not possible with parallelMap. Under the hood, it calls mclapply() (multicore mode) or clusterMap() (socket mode), neither of which allows progress output from the workers.
You might want to try mlr3, which relies on the future package for parallelization. With it you can choose among different parallel backends, which may help you achieve what you want.
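As a rough sketch of the mlr3 route (assuming the current mlr3tuning/mlr3learners API; the `classif.svm` learner, search space, and backend choice here are my own illustration, not part of the answer above): every evaluated configuration is recorded in the tuning instance's archive, so partial results remain inspectable even if the run is interrupted.

```r
library(mlr3)
library(mlr3tuning)   # tuning instance, tuners, terminators
library(mlr3learners) # provides the classif.svm learner (via e1071)
library(paradox)      # search-space definition

# pick a future backend; "multisession" uses socket-style workers
future::plan("multisession", workers = 2)

instance <- ti(
  task = tsk("iris"),
  learner = lrn("classif.svm", type = "C-classification", kernel = "radial"),
  resampling = rsmp("cv", folds = 2),
  measures = msr("classif.ce"),
  search_space = ps(
    cost  = p_dbl(lower = 0.5, upper = 2),
    gamma = p_dbl(lower = 0.5, upper = 2)
  ),
  terminator = trm("evals", n_evals = 5)
)

tnr("random_search")$optimize(instance)

# every configuration evaluated so far, with its score --
# available even if optimize() was stopped partway through
instance$archive
```

Whether worker-side progress reaches your console still depends on the chosen future backend, but the archive gives you the saved-intermediate-results behavior asked about in the question.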
library("mlr")
#> Loading required package: ParamHelpers
library("parallelMap")
discrete_ps <- makeParamSet(
  makeDiscreteParam("C", values = c(0.5, 1.0, 1.5, 2.0)),
  makeDiscreteParam("sigma", values = c(0.5, 1.0, 1.5, 2.0))
)
ctrl <- makeTuneControlRandom(maxit = 5)
rdesc <- makeResampleDesc("CV", iters = 2L)
# socket mode ------------------------------------------------------------------
parallelStartSocket(2, level = "mlr.tuneParams")
#> Starting parallelization in mode=socket with cpus=2.
res <- tuneParams("classif.ksvm",
  task = iris.task, resampling = rdesc,
  par.set = discrete_ps, control = ctrl, show.info = TRUE
)
#> [Tune] Started tuning learner classif.ksvm for parameter set:
#> Type len Def Constr Req Tunable Trafo
#> C discrete - - 0.5,1,1.5,2 - TRUE -
#> sigma discrete - - 0.5,1,1.5,2 - TRUE -
#> With control class: TuneControlRandom
#> Imputation value: 1
#> Exporting objects to slaves for mode socket: .mlr.slave.options
#> Mapping in parallel: mode = socket; level = mlr.tuneParams; cpus = 2; elements = 5.
#> [Tune] Result: C=2; sigma=0.5 : mmce.test.mean=0.0600000
parallelStop()
#> Stopped parallelization. All cleaned up.
# sequential -------------------------------------------------------------------
res <- tuneParams("classif.ksvm",
  task = iris.task, resampling = rdesc,
  par.set = discrete_ps, control = ctrl, show.info = TRUE
)
#> [Tune] Started tuning learner classif.ksvm for parameter set:
#> Type len Def Constr Req Tunable Trafo
#> C discrete - - 0.5,1,1.5,2 - TRUE -
#> sigma discrete - - 0.5,1,1.5,2 - TRUE -
#> With control class: TuneControlRandom
#> Imputation value: 1
#> [Tune-x] 1: C=1.5; sigma=1.5
#> [Tune-y] 1: mmce.test.mean=0.0466667; time: 0.0 min
#> [Tune-x] 2: C=0.5; sigma=1.5
#> [Tune-y] 2: mmce.test.mean=0.0600000; time: 0.0 min
#> [Tune-x] 3: C=0.5; sigma=1.5
#> [Tune-y] 3: mmce.test.mean=0.0600000; time: 0.0 min
#> [Tune-x] 4: C=1; sigma=2
#> [Tune-y] 4: mmce.test.mean=0.0466667; time: 0.0 min
#> [Tune-x] 5: C=1; sigma=2
#> [Tune-y] 5: mmce.test.mean=0.0466667; time: 0.0 min
#> [Tune] Result: C=1; sigma=2 : mmce.test.mean=0.0466667
# multicore --------------------------------------------------------------------
parallelStartMulticore(2, level = "mlr.tuneParams")
#> Starting parallelization in mode=multicore with cpus=2.
res <- tuneParams("classif.ksvm",
  task = iris.task, resampling = rdesc,
  par.set = discrete_ps, control = ctrl, show.info = TRUE
)
#> [Tune] Started tuning learner classif.ksvm for parameter set:
#> Type len Def Constr Req Tunable Trafo
#> C discrete - - 0.5,1,1.5,2 - TRUE -
#> sigma discrete - - 0.5,1,1.5,2 - TRUE -
#> With control class: TuneControlRandom
#> Imputation value: 1
#> Mapping in parallel: mode = multicore; level = mlr.tuneParams; cpus = 2; elements = 5.
#> [Tune] Result: C=2; sigma=1.5 : mmce.test.mean=0.0466667
parallelStop()
#> Stopped parallelization. All cleaned up.
Created on 2019-12-26 by the reprex package (v0.3.0)