sklearn: using RandomizedSearchCV with custom metrics and catching exceptions



I am using the RandomizedSearchCV function in sklearn with a random forest classifier. To look at several different metrics, I am using custom scoring:

from sklearn.metrics import make_scorer, roc_auc_score, recall_score, matthews_corrcoef, balanced_accuracy_score, accuracy_score
acc = make_scorer(accuracy_score)
auc_score = make_scorer(roc_auc_score)
recall = make_scorer(recall_score)
mcc = make_scorer(matthews_corrcoef)
bal_acc = make_scorer(balanced_accuracy_score)
scoring = {"roc_auc_score": auc_score, "recall": recall, "MCC": mcc, "Bal_acc": bal_acc, "Accuracy": acc}

These custom scorers are used in the randomized search:

rf_random = RandomizedSearchCV(estimator=rf, param_distributions=random_grid, n_iter=100, cv=split, verbose=2,
                               random_state=42, n_jobs=-1, error_score=np.nan, scoring=scoring,
                               iid=True, refit="roc_auc_score")  # note: iid was removed in sklearn 0.24

The problem now is that, when I use my custom split, the AUC throws an exception, because this exact split has only one class label.

I don't want to change the split, so is it possible to catch these exceptions in RandomizedSearchCV or in the make_scorer function? So that, for example, if one of the metrics cannot be computed (due to an exception), a NaN is simply entered and the search continues with the next model.

Edit: Apparently error_score covers the model training, but not the metric calculation. If I use e.g. Accuracy, everything works and I only get a warning in folds that contain a single class label. If I use e.g. AUC as the metric, the exception is still thrown.
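A minimal sketch of the difference (the single-class fold is simulated directly here):

import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.zeros(5)                    # a fold containing only one class label
y_score = np.array([0.2, 0.7, 0.1, 0.9, 0.4])

accuracy_score(y_true, y_score > 0.5)   # works, returns 0.6
roc_auc_score(y_true, y_score)          # raises ValueError: Only one class present in y_true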

It would be great to get some ideas here!

Solution: define a custom scorer that catches the exception:

import numpy as np

def custom_scorer(y_true, y_pred, actual_scorer):
    score = np.nan
    try:
        score = actual_scorer(y_true, y_pred)
    except ValueError:
        pass
    return score
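A quick sanity check of the wrapper (assuming numpy and roc_auc_score are imported as above): on a single-class fold it now returns NaN instead of raising.

custom_scorer([0, 0, 0], [0.2, 0.7, 0.1], actual_scorer=roc_auc_score)  # nan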

This results in new metrics:

acc = make_scorer(accuracy_score)
recall = make_scorer(custom_scorer, actual_scorer=recall_score)
new_auc = make_scorer(custom_scorer, actual_scorer=roc_auc_score)
mcc = make_scorer(custom_scorer, actual_scorer=matthews_corrcoef)
bal_acc = make_scorer(custom_scorer, actual_scorer=balanced_accuracy_score)
scoring = {"roc_auc_score": new_auc, "recall": recall, "MCC": mcc, "Bal_acc": bal_acc, "Accuracy": acc}

This, in turn, can be passed to the scoring parameter of RandomizedSearchCV.
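For completeness, a sketch of the updated search call, assuming rf, random_grid, and split are defined as in the question (iid is omitted here, since it was removed in sklearn 0.24):

rf_random = RandomizedSearchCV(estimator=rf, param_distributions=random_grid, n_iter=100, cv=split,
                               verbose=2, random_state=42, n_jobs=-1, error_score=np.nan,
                               scoring=scoring, refit="roc_auc_score")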

A second solution I found is:

# a scorer with the (estimator, X, y) signature, so it can call predict_proba itself
def custom_auc(clf, X, y_true):
    score = np.nan
    y_pred = clf.predict_proba(X)
    try:
        score = roc_auc_score(y_true, y_pred[:, 1])
    except Exception:
        pass
    return score

which can also be passed to the scoring argument:

scoring = {"roc_auc_score": custom_auc, "recall": recall, "MCC": mcc, "Bal_acc": bal_acc, "Accuracy": acc}
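The same pattern generalizes. As a sketch, a hypothetical helper (make_safe_proba_scorer is not part of the original answer) that builds such an exception-safe (estimator, X, y) scorer for any metric consuming positive-class probabilities:

def make_safe_proba_scorer(metric):
    # hypothetical helper: returns a scorer with the (estimator, X, y) signature
    def scorer(clf, X, y_true):
        try:
            return metric(y_true, clf.predict_proba(X)[:, 1])
        except Exception:  # e.g. ValueError on single-class folds
            return np.nan
    return scorer

safe_auc = make_safe_proba_scorer(roc_auc_score)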

(The second solution above was adapted from the following answer.)

You can have a generic scorer which takes other scorers as input, checks the results, catches any exceptions they throw, and returns a fixed value in their place.

def custom_scorer(y_true, y_pred, actual_scorer):
    score = np.nan
    try:
        score = actual_scorer(y_true, y_pred)
    except Exception:
        pass
    return score

Then you can call it like this:

acc = make_scorer(custom_scorer, actual_scorer=accuracy_score)
auc_score = make_scorer(custom_scorer, actual_scorer=roc_auc_score,
                        needs_threshold=True)  # <== Added this to get correct roc
recall = make_scorer(custom_scorer, actual_scorer=recall_score)
mcc = make_scorer(custom_scorer, actual_scorer=matthews_corrcoef)
bal_acc = make_scorer(custom_scorer, actual_scorer=balanced_accuracy_score)

A reproducible example:

import numpy as np

def custom_scorer(y_true, y_pred, actual_scorer):
    score = np.nan
    try:
        score = actual_scorer(y_true, y_pred)
    except Exception:
        pass
    return score

from sklearn.metrics import make_scorer, roc_auc_score, accuracy_score
acc = make_scorer(custom_scorer, actual_scorer=accuracy_score)
auc_score = make_scorer(custom_scorer, actual_scorer=roc_auc_score,
                        needs_threshold=True)  # <== Added this to get correct roc

from sklearn.datasets import load_iris
X, y = load_iris().data, load_iris().target

from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV, KFold

cvv = KFold(3)
params = {'criterion': ['gini', 'entropy']}
gc = GridSearchCV(DecisionTreeClassifier(), param_grid=params, cv=cvv,
                  scoring={"roc_auc": auc_score, "accuracy": acc},
                  refit="roc_auc", n_jobs=-1,
                  return_train_score=True, iid=False)  # drop iid on sklearn >= 0.24, where it was removed
gc.fit(X, y)
print(gc.cv_results_)
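Note that with KFold(3) and no shuffling, each test fold of the (ordered) iris targets contains a single class, so the roc_auc entries in gc.cv_results_ should come out as NaN while accuracy is still computed, which is exactly the behaviour asked for.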
