Confusion about SKlearn's precision-recall curve calculation

Below is a snippet of the scikit-learn PR-curve computation.

>>> import numpy as np
>>> from sklearn.metrics import precision_recall_curve
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> precision, recall, thresholds = precision_recall_curve(
...     y_true, y_scores)
>>> precision  
array([ 0.66...,  0.5       ,  1.        ,  1.        ])
>>> recall
array([ 1. ,  0.5,  0.5,  0. ])
>>> thresholds
array([ 0.35,  0.4 ,  0.8 ])

Doubt:

Why are there only 3 thresholds while 4 precision and 4 recall values are returned? As one can clearly see, the threshold 0.1 is left out; the computation only starts at threshold 0.35 and above.

Thresholds only go as low as is needed to reach 100% recall. The idea is that you would normally not set a lower threshold anyway, since it would only introduce unnecessary false positives. In addition, the return statement in the source below pads precision with a final 1 and recall with a final 0 (an end point that has no matching threshold), which is why precision and recall contain one more entry than thresholds.
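
To see why 0.1 is dropped, here is a minimal sketch (plain NumPy worked by hand, not the library internals) that evaluates every distinct score as a candidate threshold, assuming the usual rule of predicting positive when score >= threshold. The 0.1 threshold reaches the same full recall as 0.35 but at lower precision, so the curve gains nothing from it:

    import numpy as np

    y_true = np.array([0, 0, 1, 1])
    y_scores = np.array([0.1, 0.4, 0.35, 0.8])

    # evaluate every distinct score as a candidate threshold,
    # predicting positive whenever score >= threshold
    for t in sorted(set(y_scores), reverse=True):
        y_pred = y_scores >= t
        tp = np.sum(y_pred & (y_true == 1))
        fp = np.sum(y_pred & (y_true == 0))
        precision = tp / (tp + fp)
        recall = tp / np.sum(y_true == 1)
        print(f"threshold {t:<4}  precision {precision:.2f}  recall {recall:.1f}")

    # threshold 0.8   precision 1.00  recall 0.5
    # threshold 0.4   precision 0.50  recall 0.5
    # threshold 0.35  precision 0.67  recall 1.0   <- full recall first reached here
    # threshold 0.1   precision 0.50  recall 1.0   <- dropped by sklearn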

https://github.com/scikit-learn/scikit-learn/blob/a24c8b46/sklearn/metrics/ranking.py

    # stop when full recall attained
    # and reverse the outputs so recall is decreasing
    last_ind = tps.searchsorted(tps[-1])
    sl = slice(last_ind, None, -1)
    return np.r_[precision[sl], 1], np.r_[recall[sl], 0], thresholds[sl]
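
As a rough illustration of that slicing on the example above (a hand-worked sketch, not the actual library call), assume tps/fps are the cumulative true/false positive counts with thresholds sorted in decreasing order. last_ind is the first index where full recall is attained (threshold 0.35), the slice cuts off everything below it, and the final np.r_[..., 1] / np.r_[..., 0] pad appends the (precision = 1, recall = 0) end point that has no threshold:

    import numpy as np

    # thresholds in decreasing order, with hand-computed cumulative counts
    thresholds = np.array([0.8, 0.4, 0.35, 0.1])
    tps = np.array([1, 1, 2, 2])   # cumulative true positives
    fps = np.array([0, 1, 1, 2])   # cumulative false positives

    precision = tps / (tps + fps)  # 1.0, 0.5, 0.667, 0.5
    recall = tps / tps[-1]         # 0.5, 0.5, 1.0, 1.0

    # first index where full recall (tps[-1]) is attained -> 2, i.e. threshold 0.35
    last_ind = tps.searchsorted(tps[-1])
    sl = slice(last_ind, None, -1)          # take indices 2, 1, 0 in reverse

    # matches the arrays returned by precision_recall_curve above:
    print(np.r_[precision[sl], 1])   # 0.667, 0.5, 1.0, 1.0
    print(np.r_[recall[sl], 0])      # 1.0, 0.5, 0.5, 0.0
    print(thresholds[sl])            # 0.35, 0.4, 0.8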

Latest update