In mlr you can combine filter feature selection with hyperparameter tuning inside nested cross-validation, for example with the following code:
lrn = makeFilterWrapper(learner = "regr.kknn", fw.method = "chi.squared")
ps = makeParamSet(
  makeDiscreteParam("fw.abs", values = 10:13),
  makeDiscreteParam("k", values = c(2, 3, 4))
)
ctrl = makeTuneControlGrid()
inner = makeResampleDesc("CV", iters = 2)
outer = makeResampleDesc("Subsample", iters = 3)
lrn = makeTuneWrapper(lrn, resampling = inner, par.set = ps, control = ctrl, show.info = FALSE)
res = resample(lrn, bh.task, outer, mse, extract = getTuneResult)
But as far as I know, it is not possible to do something similar with wrapper feature selection, for example:
lrn = makeFeatSelWrapper(learner = "regr.kknn", ww.method = "random") # imaginary code
ps = makeParamSet(
  makeDiscreteParam("maxit", values = 15),
  makeDiscreteParam("k", values = c(2, 3, 4))
) # imaginary code, no method parameter & no resampling provided
ctrl = makeTuneControlGrid()
inner = makeResampleDesc("CV", iters = 2)
outer = makeResampleDesc("Subsample", iters = 3)
lrn = makeTuneWrapper(lrn, resampling = inner, par.set = ps, control = ctrl, show.info = FALSE)
res = resample(lrn, bh.task, outer, mse, extract = getTuneResult)
Is there a way to achieve something like this, in particular while avoiding nested cross-validation? Or is there a methodological reason why this would be inappropriate? In practice, filter feature selection with a tuned parameter (the number of features) looks very similar to the wrapper approach: the additional hyperparameter is effectively a specific set of features, produced either by a filter (e.g. "chi.squared") plus a threshold (top 90%, 80%, 70%) or by the output of a wrapper algorithm (random, GA, exhaustive, sequential). In both cases the best feature set is chosen based on inner-CV performance.
I believe the two approaches (nested filtering and tuning of the nested extra parameter) are similar in terms of computational complexity, but you might not want to shrink your training dataset further with nested CV, which would be achievable with the first approach.
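For reference, the percentage-threshold variant described above (keeping the top 90% / 80% / 70% of features) can be expressed with the filter wrapper's fw.perc parameter instead of fw.abs; a sketch, assuming regr.kknn and the FSelector backend for the "chi.squared" filter are installed:

```r
library(mlr)

# Filter wrapper whose threshold (fraction of features kept) is tuned
# alongside the learner's k, inside the inner CV loop.
lrn = makeFilterWrapper(learner = "regr.kknn", fw.method = "chi.squared")
ps = makeParamSet(
  makeDiscreteParam("fw.perc", values = c(0.9, 0.8, 0.7)),  # fraction of features kept
  makeDiscreteParam("k", values = c(2, 3, 4))
)
inner = makeResampleDesc("CV", iters = 2)
outer = makeResampleDesc("Subsample", iters = 3)
lrn = makeTuneWrapper(lrn, resampling = inner, par.set = ps,
  control = makeTuneControlGrid(), show.info = FALSE)
res = resample(lrn, bh.task, outer, mse, extract = getTuneResult)
```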
Am I making a methodological mistake here, or is this just a missing (and perhaps not very popular) feature?
This feature has been available in mlr since July. You need to install the git version:
devtools::install_github("mlr-org/mlr")
The TuneWrapper needs to be in the inner resampling loop, and the FeatSelWrapper needs to be in the outer resampling loop. Here is an example of backward selection using iris.task and rpart:
library(mlr)
Tune parameters:
ps <- makeParamSet(
makeNumericParam("cp", lower = 0.01, upper = 0.1),
makeIntegerParam("minsplit", lower = 10, upper = 20)
)
Grid search:
ctrl <- makeTuneControlGrid(resolution = 5L)
Specify the learner:
lrn <- makeLearner("classif.rpart", predict.type = "prob")
Generate a tune wrapper:
lrn <- makeTuneWrapper(lrn, resampling = cv3, par.set = ps, control = ctrl, show.info = FALSE)
Generate a feature selection wrapper:
lrn <- makeFeatSelWrapper(lrn,
  resampling = cv3,
  control = makeFeatSelControlSequential(method = "sbs"), show.info = FALSE)
Perform the resampling:
res <- resample(lrn, task = iris.task, resampling = cv3, show.info = TRUE, models = TRUE)
Note that even this small example takes some time to run.
res
#output
Resample Result
Task: iris_example
Learner: classif.rpart.tuned.featsel
Aggr perf: mmce.test.mean=0.1000000
Runtime: 92.1436
You can do the same without the outermost resampling:
lrn <- makeLearner("classif.rpart", predict.type = "prob")
lrn <- makeTuneWrapper(lrn, resampling = cv3, par.set = ps, control = makeTuneControlGrid(), show.info = TRUE)
res2 <- selectFeatures(learner = lrn , task = iris.task, resampling = cv3,
control = makeFeatSelControlSequential(method = "sbs"), show.info = TRUE)
If I understood you correctly, you are basically asking how to tune a FeatSelWrapper? This is a bit complicated, because feature selection (in mlr) depends on resampling, since it is essentially tuning: we don't tune learner parameters, but we tune the selection of features to optimize a performance measure, and to compute that measure we need resampling.
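To make this concrete, here is a minimal sketch (assuming mlr is installed) showing that selectFeatures is itself a tuning loop: it takes a resampling description purely so it can score each candidate feature subset:

```r
library(mlr)

# Random search over feature subsets; each candidate subset is evaluated
# with the given resampling, exactly like a hyperparameter configuration
# would be evaluated during tuning.
res = selectFeatures(
  learner = makeLearner("classif.rpart"),
  task = iris.task,
  resampling = makeResampleDesc("CV", iters = 2),  # used to score each subset
  control = makeFeatSelControlRandom(maxit = 5),
  show.info = FALSE
)
res$x  # the selected feature names
res$y  # inner-CV performance of that subset
```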
It is debatable, though, whether this is necessary, since the choice of a feature selection method usually depends on your available resources and other circumstances.
What you can do is benchmark different feature selection methods:
inner = makeResampleDesc("CV", iters = 2)
outer = makeResampleDesc("Subsample", iters = 3)
settings = list(
  random1 = makeFeatSelControlRandom(maxit = 15),
  random2 = makeFeatSelControlRandom(maxit = 20)
)
lrns = Map(function(x, xn) {
  lrn = makeFeatSelWrapper(learner = "regr.lm", control = x, resampling = inner)
  lrn$id = paste0(lrn$id, ".", xn)
  lrn
}, x = settings, xn = names(settings))
benchmark(lrns, bh.task, outer, list(mse, timeboth))