Finding the best Lasso/L1 regularization strength for logistic regression with cross-validation in scikit-learn



For my logistic regression model, I want to use cross-validation (e.g. 5-fold) to evaluate the best L1 regularization strength, instead of a single train/test split as in my code below:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn import metrics

train_x, test_x, train_y, test_y = train_test_split(X_scaled, y, stratify=y,
                                                    test_size=0.3, random_state=2)

# Evaluate L1 regularization strengths for reducing features in final model
C = [10, 1, .1, 0.05, .01, .001]  # As C decreases, more coefficients go to zero
for c in C:
    clf = LogisticRegression(penalty='l1', C=c, solver='liblinear', class_weight="balanced")
    clf.fit(train_x, train_y)
    pred_y = clf.predict(test_x)
    print("Model performance with Inverse Regularization Parameter, C = 1/λ VALUE: ", c)
    cr = metrics.classification_report(test_y, pred_y)
    print(cr)
    print('')

Can someone show me how to do this with cross-validation, i.e. on 5 different train/test splits, without copying the above code 5 times with different random states?

Actually, classification_report is not defined as a scoring metric inside sklearn.model_selection.cross_val_score. I will therefore use f1_micro in the following code:

from sklearn.model_selection import cross_val_score

# Evaluate L1 regularization strengths for reducing features in final model
C = [10, 1, .1, 0.05, .01, .001]  # As C decreases, more coefficients go to zero
for c in C:
    clf = LogisticRegression(penalty='l1', C=c, solver='liblinear', class_weight="balanced")
    # using data before splitting (X_scaled) and (y)
    scores = cross_val_score(clf, X_scaled, y, cv=5, scoring="f1_micro")  # <-- add this
    print(scores)  # <-- add this

Now the scores variable is an array of five values: the classifier's f1_micro score on each of five different splits of the original data.
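To actually select the best C, the per-fold scores can be averaged for each candidate and compared. A minimal self-contained sketch of this idea, using synthetic data generated with make_classification as a stand-in for the question's X_scaled and y:

```python
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the question's X_scaled / y
X, y = make_classification(n_samples=300, n_features=20, random_state=2)
X_scaled = StandardScaler().fit_transform(X)

C = [10, 1, .1, 0.05, .01, .001]
mean_scores = {}
for c in C:
    clf = LogisticRegression(penalty='l1', C=c, solver='liblinear',
                             class_weight="balanced")
    # Mean f1_micro over 5 folds for this regularization strength
    mean_scores[c] = cross_val_score(clf, X_scaled, y, cv=5,
                                     scoring="f1_micro").mean()

best_c = max(mean_scores, key=mean_scores.get)
print("Best C by mean 5-fold f1_micro:", best_c)
```

The same search can also be done with GridSearchCV over the `C` parameter, which additionally refits the best model on the full data.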

If you want to use a different scoring metric with sklearn.model_selection.cross_val_score, you can list all available ones with:

print(metrics.SCORERS.keys())

(In newer scikit-learn releases metrics.SCORERS has been removed; sklearn.metrics.get_scorer_names() serves the same purpose.)

You can also pass multiple scoring metrics at once; the following uses both f1_micro and f1_macro:

from sklearn.model_selection import cross_validate
cross_validate(clf, X_scaled, y, cv=5, scoring=["f1_micro", "f1_macro"])  
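cross_validate returns a dict with one array of per-fold values under a test_<metric> key for each requested metric, alongside fit and score times. A minimal sketch, again using synthetic data in place of the question's X_scaled and y:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Synthetic stand-in for the question's X_scaled / y
X_scaled, y = make_classification(n_samples=300, n_features=20, random_state=2)

clf = LogisticRegression(penalty='l1', C=0.1, solver='liblinear',
                         class_weight="balanced")
results = cross_validate(clf, X_scaled, y, cv=5,
                         scoring=["f1_micro", "f1_macro"])

# One array of 5 fold scores per metric, plus fit/score times
print(sorted(results.keys()))  # ['fit_time', 'score_time', 'test_f1_macro', 'test_f1_micro']
print(results["test_f1_micro"].mean(), results["test_f1_macro"].mean())
```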
