I know sklearn has a nice method to get cross-validation scores:
from sklearn.model_selection import cross_val_score
clf = svm.SVC(kernel='linear', C=1)
scores = cross_val_score(clf, iris.data, iris.target, cv=5)
scores
What I would like is the score for specific training and test sets:
train_list = [train1, train2, train3]  # train1, train2, train3 are the training data sets
test_list = [test1, test2, test3]      # test1, test2, test3 are the test data sets
clf = svm.SVC(kernel='linear', C=1)
scores = some_nice_method(clf, train_list, test_list)
Is there such a method in Python that returns the score for each specific pair of split data sets?
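There is no built-in with exactly that signature, but a minimal sketch of such a helper is easy to write. This assumes each entry of train_list and test_list is an (X, y) tuple; the name score_on_splits is hypothetical, not an sklearn API:

from sklearn.base import clone

def score_on_splits(clf, train_list, test_list):
    # Fit a fresh, unfitted copy of clf on each training set and score it
    # on the matching test set.
    scores = []
    for (X_tr, y_tr), (X_te, y_te) in zip(train_list, test_list):
        model = clone(clf)  # new estimator per split, so fits don't interfere
        model.fit(X_tr, y_tr)
        scores.append(model.score(X_te, y_te))
    return scores

scores = score_on_splits(clf, train_list, test_list)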
It is just two lines of code:
for tr, te in zip(train_list, test_list):
    # here tr and te are taken to be index arrays into the same X and y
    svm.SVC(kernel='linear', C=1).fit(X[tr, :], y[tr]).score(X[te, :], y[te])
See sklearn.svm.SVC.score:
score(X, y, sample_weight=None)
Returns the mean accuracy on the given test data and labels.
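To actually keep the per-split results, a small variant of the same idea (still assuming tr and te are index arrays into X and y) collects them into a list:

scores = [
    svm.SVC(kernel='linear', C=1).fit(X[tr], y[tr]).score(X[te], y[te])
    for tr, te in zip(train_list, test_list)
]
print(scores)  # one accuracy value per (train, test) pair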
My suggestion is to use KFold cross-validation as below. This way you get the train and test indices for each particular split as well as the accuracy score. Note that in newer versions of sklearn, KFold lives in sklearn.model_selection and the splits come from kf.split(X).
from sklearn import svm
from sklearn import datasets
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score
iris = datasets.load_iris()
X = iris.data
y = iris.target
clf = svm.SVC(kernel='linear', C=1)
kf = KFold(n_splits=5)
for train_index, test_index in kf.split(X):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    score = accuracy_score(y_test, y_pred)
    print(score)
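If you already have the index arrays for your specific splits, you can also hand them straight to cross_val_score: its cv parameter accepts an iterable of (train_index, test_index) pairs. A short sketch under that assumption (custom_splits is a hypothetical name for your own list of index pairs):

from sklearn.model_selection import cross_val_score

custom_splits = list(kf.split(X))  # or any list of (train_index, test_index) pairs you built yourself
scores = cross_val_score(clf, X, y, cv=custom_splits)
print(scores)  # one score per predefined split, using the estimator's default metric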