I trained a model with logistic regression to predict whether a profile's name and description fields belong to a male, female, or brand account. My train accuracy is around 99%, while my test accuracy is around 83%. I tried regularization by tuning the C parameter, but noticed almost no improvement. My training set has about 5,000 examples. Is this a case where I simply need more data, or is there something else I can do in scikit-learn to improve my test accuracy?
Overfitting is a many-faceted problem. It could be your train/test/validation split (anything from 50/40/10 to 90/9/1 could change things). You may need to shuffle your input. Try an ensemble method, or reduce the number of features. You might have outliers throwing things off.
Then again, it could be none of these, or all of these, or some combination of these.
For starters, try plotting the test score as a function of the test split size and see what you get.
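The plotting suggestion above can be sketched as follows. This is a minimal example, not the asker's actual pipeline: `make_classification` stands in for the real name/description features, and the split sizes chosen here are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic 3-class data as a stand-in for the real features
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, n_classes=3,
                           random_state=101)

sizes = [0.1, 0.2, 0.3, 0.4, 0.5]
scores = []
for size in sizes:
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=size, random_state=101)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    scores.append(clf.score(X_test, y_test))

for size, score in zip(sizes, scores):
    print(f"test_size={size:.1f}  accuracy={score:.3f}")
```

If test accuracy drops sharply as the test split grows, the model is likely leaning on too little training data; if it stays flat, the gap to the 99% train accuracy points more toward overfitting than data volume.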
# The 'C' value in LogisticRegression works very similarly to the one in the
# Support Vector Machine (SVM) algorithm. When I use SVM I like to use
# GridSearch to find the best possible values for 'C' and 'gamma';
# maybe this can give you some light.
# When tuning only 'C' for SVC you can remove the 'gamma' and 'kernel' keys:
# param_grid = {'C': [0.1, 1, 10, 100, 1000],
#               'gamma': [1, 0.1, 0.01, 0.001, 0.0001],
#               'kernel': ['rbf']}
param_grid = {'C': [0.1, 1, 10, 100, 1000]}
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import classification_report, confusion_matrix
# Train and fit your model to see initial values
X_train, X_test, y_train, y_test = train_test_split(df_feat, np.ravel(df_target), test_size=0.30, random_state=101)
model = SVC()
model.fit(X_train,y_train)
predictions = model.predict(X_test)
print(confusion_matrix(y_test,predictions))
print(classification_report(y_test,predictions))
# Find the best 'C' value
grid = GridSearchCV(SVC(), param_grid, refit=True, verbose=3)
grid.fit(X_train, y_train)  # must fit before best_params_ is available
print(grid.best_params_)
c_val = grid.best_estimator_.C
#Then you can re-run predictions on this grid object just like you would with a normal model.
grid_predictions = grid.predict(X_test)
# Use the best 'C' value found by GridSearch to re-fit LogisticRegression
logmodel = LogisticRegression(C=c_val)
logmodel.fit(X_train, y_train)
log_predictions = logmodel.predict(X_test)
print(confusion_matrix(y_test, log_predictions))
print(classification_report(y_test, log_predictions))
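Since the goal is to tune `C` for logistic regression, you can also grid-search `LogisticRegression` directly instead of going through `SVC`; the optimal `C` for one model is not guaranteed to be optimal for the other. A minimal sketch, using synthetic data in place of the original `df_feat`/`df_target`:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic 3-class data as a stand-in for the real features
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, n_classes=3,
                           random_state=101)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=101)

param_grid = {'C': [0.1, 1, 10, 100, 1000]}
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    param_grid, refit=True, cv=5)
grid.fit(X_train, y_train)

print(grid.best_params_)               # best 'C' found by cross-validation
print(grid.score(X_test, y_test))      # accuracy of the refitted model
```

With `refit=True` (the default), the grid object is itself the refitted best model, so there is no need to reload `LogisticRegression` by hand afterwards.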