How to change the threshold for precision and recall in Python scikit-learn



I've heard people say that you can adjust the threshold to tune the trade-off between precision and recall, but I can't find an actual example of how to do it.

My code:

for i in mass[k]:
    df = df_temp # reset df before each loop
    if 1==1:
    ###if i == singleEthnic:
        count+=1
        ethnicity_tar = str(i) # fr, en, ir, sc, others, ab, rus, ch, it, jp
        # fn, metis, inuit; algonquian, iroquoian, athapaskan, wakashan, siouan, salish, tsimshian, kootenay
        ############################################
        ############################################
        def ethnicity_target(row):
            try:
                if row[ethnicity_var] == ethnicity_tar:
                    return 1
                else:
                    return 0
            except: return None
        df['ethnicity_scan'] = df.apply(ethnicity_target, axis=1)
        print '1=', ethnicity_tar
        print '0=', 'non-'+ethnicity_tar
        # Random sampling a smaller dataframe for debugging
        rows = df.sample(n=subsample_size, random_state=seed) # Seed gives fixed randomness
        df = DataFrame(rows)
        print 'Class count:'
        print df['ethnicity_scan'].value_counts()
        # Assign X and y variables
        X = df.raw_name.values
        X2 = df.name.values
        X3 = df.gender.values
        X4 = df.location.values
        y = df.ethnicity_scan.values
        # Feature extraction functions
        def feature_full_name(nameString):
            try:
                full_name = nameString
                if len(full_name) > 1: # not accept name with only 1 character
                    return full_name
                else: return '?'
            except: return '?'
        def feature_full_last_name(nameString):
            try:
                last_name = nameString.rsplit(None, 1)[-1]
                if len(last_name) > 1: # not accept name with only 1 character
                    return last_name
                else: return '?'
            except: return '?'
        def feature_full_first_name(nameString):
            try:
                first_name = nameString.rsplit(' ', 1)[0]
                if len(first_name) > 1: # not accept name with only 1 character
                    return first_name
                else: return '?'
            except: return '?'
        # Transform format of X variables, and spit out a numpy array for all features
        my_dict = [{'last-name': feature_full_last_name(i)} for i in X]
        my_dict5 = [{'first-name': feature_full_first_name(i)} for i in X]
        all_dict = []
        for i in range(0, len(my_dict)):
            temp_dict = dict(
                my_dict[i].items() + my_dict5[i].items()
                )
            all_dict.append(temp_dict)
        newX = dv.fit_transform(all_dict)
        # Separate the training and testing data sets
        X_train, X_test, y_train, y_test = cross_validation.train_test_split(newX, y, test_size=testTrainSplit)
        # Fitting X and y into model, using training data
        classifierUsed2.fit(X_train, y_train)
        # Making predictions using trained data
        y_train_predictions = classifierUsed2.predict(X_train)
        y_test_predictions = classifierUsed2.predict(X_test)

I tried replacing the line "y_test_predictions = classifierUsed2.predict(X_test)" with "y_test_predictions = classifierUsed2.predict(X_test) > 0.8" and with "y_test_predictions = classifierUsed2.predict(X_test) > 0.01", but nothing changed much in the results.

classifierUsed2.predict(X_test) only outputs the predicted class for each sample (most likely 0 and 1). What you want instead is classifierUsed2.predict_proba(X_test), which outputs a 2D array with the probability of each class for each sample. To apply a threshold, you can do something like:

y_test_probabilities = classifierUsed2.predict_proba(X_test)
# y_test_probabilities has shape = [n_samples, n_classes]
# Column 1 holds P(class 1); compare it against a custom threshold
y_test_predictions_high_precision = y_test_probabilities[:,1] > 0.8
y_test_predictions_high_recall = y_test_probabilities[:,1] > 0.1

y_test_predictions_high_precision will contain only the samples the model is fairly sure belong to class 1, while y_test_predictions_high_recall will predict class 1 more often (and achieve higher recall), but it will also contain many false positives.
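To quantify that trade-off, one option is to score both thresholded predictions with sklearn.metrics, or to sweep every threshold at once with precision_recall_curve. A minimal sketch, assuming y_test comes from your train/test split and y_test_probabilities from the snippet above:

from sklearn.metrics import precision_score, recall_score, precision_recall_curve

# Cast the boolean masks to 0/1 so they match the labels in y_test
high_precision_preds = y_test_predictions_high_precision.astype(int)
high_recall_preds = y_test_predictions_high_recall.astype(int)

print 'threshold 0.8 -> precision:', precision_score(y_test, high_precision_preds)
print 'threshold 0.8 -> recall:   ', recall_score(y_test, high_precision_preds)
print 'threshold 0.1 -> precision:', precision_score(y_test, high_recall_preds)
print 'threshold 0.1 -> recall:   ', recall_score(y_test, high_recall_preds)

# precision_recall_curve evaluates every distinct probability as a threshold,
# which makes it easy to pick the operating point you want
precisions, recalls, thresholds = precision_recall_curve(y_test, y_test_probabilities[:, 1])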

Both of the classifiers you are using (logistic regression and SVM) support predict_proba.
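One caveat worth noting, assuming the SVM is scikit-learn's SVC: predict_proba is only available when the estimator is constructed with probability=True, which adds an internal calibration step and slows down training; LogisticRegression exposes predict_proba by default. A sketch of how classifierUsed2 might be set up under that assumption:

from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Logistic regression: predict_proba works out of the box
classifierUsed2 = LogisticRegression()

# SVM: probability estimates must be enabled at construction time,
# otherwise calling predict_proba raises an error
# classifierUsed2 = SVC(probability=True)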
