Does scikit-learn's RFECV determine the number of features purely from the grid scores?



From the scikit-learn RFE documentation: the algorithm successively considers smaller and smaller sets of features, keeping only the features with the highest weights. Features with low weights are discarded, and the process repeats until the number of remaining features matches the number specified by the user (or, by default, half of the original number of features).
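As a minimal sketch of that behaviour (sizes chosen only for illustration), RFE with an explicit n_features_to_select keeps pruning until exactly that many features remain:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Small synthetic problem (sizes chosen only for illustration).
X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           random_state=0)

# step=1 removes the single lowest-weight feature per iteration, stopping
# once n_features_to_select features remain.
rfe = RFE(estimator=SVC(kernel="linear"), n_features_to_select=4, step=1)
rfe.fit(X, y)
print(rfe.n_features_)     # number of features kept
print(rfe.support_.sum())  # the boolean mask of kept features sums to the same
```

The selected features all get ranking 1 in `rfe.ranking_`; the eliminated ones are ranked in the reverse order of their elimination.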

The RFECV documentation says the features are ranked with RFE and KFCV (k-fold cross-validation).

Here is some code, based on the example shown in the RFECV documentation, using a set of 25 features:

from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_selection import RFECV, RFE
from sklearn.datasets import make_classification
import numpy as np

# Build a classification task using 3 informative features
X, y = make_classification(n_samples=1000, n_features=25, n_informative=3,
                           n_redundant=2, n_repeated=0, n_classes=8,
                           n_clusters_per_class=1, random_state=0)

# Create the RFE object and compute a cross-validated score.
svc = SVC(kernel="linear")
# The "accuracy" scoring is proportional to the number of correct
# classifications
rfecv = RFECV(estimator=svc, step=1, cv=StratifiedKFold(2), scoring='accuracy')
rfecv.fit(X, y)
rfe = RFE(estimator=svc, step=1)
rfe.fit(X, y)

print('Original number of features is %s' % X.shape[1])
print("RFE final number of features : %d" % rfe.n_features_)
print("RFECV final number of features : %d" % rfecv.n_features_)
print('')

g_scores = rfecv.grid_scores_
indices = np.argsort(g_scores)[::-1]
print('Printing RFECV results:')
for f in range(X.shape[1]):
    print("%d. Number of features: %d; Grid_Score: %f"
          % (f + 1, indices[f] + 1, g_scores[indices[f]]))

This is the output I get:

Original number of features is 25
RFE final number of features : 12
RFECV final number of features : 3
Printing RFECV results:
1. Number of features: 3; Grid_Score: 0.818041
2. Number of features: 4; Grid_Score: 0.816065
3. Number of features: 5; Grid_Score: 0.816053
4. Number of features: 6; Grid_Score: 0.799107
5. Number of features: 7; Grid_Score: 0.797047
6. Number of features: 8; Grid_Score: 0.783034
7. Number of features: 10; Grid_Score: 0.783022
8. Number of features: 9; Grid_Score: 0.781992
9. Number of features: 11; Grid_Score: 0.778028
10. Number of features: 12; Grid_Score: 0.774052
11. Number of features: 14; Grid_Score: 0.762015
12. Number of features: 13; Grid_Score: 0.760075
13. Number of features: 15; Grid_Score: 0.752003
14. Number of features: 16; Grid_Score: 0.750015
15. Number of features: 18; Grid_Score: 0.750003
16. Number of features: 22; Grid_Score: 0.748039
17. Number of features: 17; Grid_Score: 0.746003
18. Number of features: 19; Grid_Score: 0.739105
19. Number of features: 20; Grid_Score: 0.739021
20. Number of features: 21; Grid_Score: 0.738003
21. Number of features: 23; Grid_Score: 0.729068
22. Number of features: 25; Grid_Score: 0.725056
23. Number of features: 24; Grid_Score: 0.725044
24. Number of features: 2; Grid_Score: 0.506952
25. Number of features: 1; Grid_Score: 0.272896

In this particular example:

  1. For RFE: the code always returns 12 features (about half of the 25 original features, as expected from the documentation)
  2. For RFECV: the code returns a varying number between 1 and 25 (not half the number of features)

It seems to me that with RFECV the number of features is selected based only on the KFCV scores, i.e. the cross-validation scores override RFE's successive pruning of features.

Is this true? If one wanted to use the native recursive feature elimination algorithm, would RFECV use that algorithm, or a hybrid version of it?

In RFECV, is cross-validation performed on the subset of features remaining after each pruning step? If so, how many features are kept after each pruning step in RFECV?

In the cross-validated version, the features are re-ranked at each step and the lowest-ranked feature is removed; this is what the documentation calls "recursive feature selection".

If you want to compare this with the naive version, you need to compute the cross-validation score for the features selected by RFE. My guess is that the RFECV answer is the right one: judging by the sharp jump in model performance as features are removed, you probably have some highly correlated features that are hurting your model's performance.
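To make that comparison concrete, here is a simplified sketch (on a smaller toy dataset than the question's, just to keep it fast) of the idea behind RFECV: cross-validate every subset size along RFE's pruning path and keep whichever size scores best:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# A smaller toy problem than the question's, just to keep the sketch fast.
X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
svc = SVC(kernel="linear")
cv = StratifiedKFold(2)

# Cross-validate every subset size along RFE's pruning path.
scores = []
for k in range(1, X.shape[1] + 1):
    rfe = RFE(estimator=svc, n_features_to_select=k, step=1)
    X_k = rfe.fit_transform(X, y)  # data restricted to the k kept features
    scores.append(cross_val_score(svc, X_k, y, cv=cv,
                                  scoring='accuracy').mean())

best_k = int(np.argmax(scores)) + 1
print("Best number of features:", best_k)
```

Note that the actual RFECV implementation avoids the leakage in this sketch by refitting RFE inside each cross-validation fold; the point here is only that the final count is whichever subset size scores best, not a fixed fraction of the original features.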
