How to get AUC-ROC instead of accuracy when cross-validating



I am performing classification on a dataset and using cross-validation for modelling. Cross-validation reports the accuracy for each fold, but since the classes are imbalanced, accuracy is not the right metric. I would like to get AUC-ROC instead of accuracy.

cross_val_score supports a large number of scoring options.

The exhaustive list is documented here:

["准确性"、"recall_samples"、"f1_macro"、"adjusted_rand_score"、" "recall_weighted"、"precision_weighted"、"recall_macro"、" "homogeneity_score"、"neg_mean_squared_log_error"、"recall_micro"、" "f1"、"neg_log_loss"、"roc_auc"、"average_precision"、"f1_weighted"、" "R2"、"precision_macro"、"explained_variance"、"v_measure_score"、" "neg_mean_absolute_error"、"completeness_score", "fowlkes_mallows_score"、"f1_micro"、"precision_samples"、" "mutual_info_score"、"neg_mean_squared_error"、"balanced_accuracy", "neg_median_absolute_error"、"precision_micro"、" "normalized_mutual_info_score"、"adjusted_mutual_info_score", "精度"、"f1_samples"、"brier_score_loss"、"召回"]

Here is an example showing how to use roc_auc:

>>> from sklearn import datasets, linear_model
>>> from sklearn.model_selection import cross_val_score
>>> X, y = datasets.load_breast_cancer(return_X_y=True)
>>> model = linear_model.SGDClassifier(max_iter=50, random_state=7)
>>> print(cross_val_score(model, X, y, cv=5, scoring='roc_auc'))
[0.96382429 0.96996124 0.95573441 0.96646546 0.91113347]
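
If you need more than one metric per fold (AUC plus, say, F1 for the imbalanced classes), cross_validate accepts a list of scorer names. A minimal sketch reusing the model, X, and y from above:

>>> from sklearn.model_selection import cross_validate
>>> results = cross_validate(model, X, y, cv=5, scoring=['roc_auc', 'f1'])
>>> print(results['test_roc_auc'])  # one AUC per fold
>>> print(results['test_f1'])       # one F1 per fold

Note that 'roc_auc' requires the estimator to expose decision_function or predict_proba; SGDClassifier provides decision_function, so it works here.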
