Wrong predictions using an SVC classifier in scikit-learn



I generated my own corpus and split it, so I have a training text file that looks like this:

POS|This film was awesome, highly recommended
NEG|I did not like this film
NEU|I went to the movies
POS|this film is very interesting, i liked a lot
NEG|the film was very boring i did not like it
NEU|the cinema is big
NEU|the cinema was dark

For testing, I have another text file with a single unlabeled review:

I did not like this film

Then I do the following:

import pandas as pd
from sklearn.feature_extraction.text import HashingVectorizer
trainingdata = pd.read_csv('/Users/user/Desktop/training.txt',
                 header=None, sep='|', names=['labels', 'movies_reviews'])

vect = HashingVectorizer(analyzer='word', ngram_range=(2,2), lowercase=True, n_features=7)
X = vect.fit_transform(trainingdata['movies_reviews'])
y = trainingdata['labels']
TestText= pd.read_csv('/Users/user/Desktop/testing.txt',
                     header=None, names=['test_opinions'])
test = vect.transform(TestText['test_opinions'])
from sklearn.svm import SVC
svm = SVC()
svm.fit(X, y)
prediction = svm.predict(test)
print(prediction)

The prediction is:

['NEU']

Then I wondered: why is this prediction wrong? Is it a problem with the code, with the features, or with the classification algorithm? I played around with it, and when I removed the last review from the training text file I realized that the model always predicts the label of the file's last element. Any ideas on how to fix this?
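One thing worth checking in the code above: `n_features=7` gives `HashingVectorizer` only 7 hash buckets, so every distinct bigram is forced into one of 7 columns and unrelated bigrams inevitably collide. A minimal sketch (using two of the reviews from the training file as a hypothetical `docs` list):

```python
from sklearn.feature_extraction.text import HashingVectorizer

docs = ['This film was awesome, highly recommended',
        'I did not like this film']
vect = HashingVectorizer(analyzer='word', ngram_range=(2, 2),
                         lowercase=True, n_features=7)
X = vect.fit_transform(docs)
# With only 7 buckets, all bigrams in the corpus share 7 feature columns,
# so the representation can barely distinguish the documents.
print(X.shape)  # (2, 7)
```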

SVMs are very sensitive to parameter settings. You need to perform a grid search to find the right values. I tried training two kinds of Naive Bayes on your dataset and got perfect accuracy on the training set:

from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.feature_extraction.text import HashingVectorizer, CountVectorizer

# first option - GaussianNB
vect = HashingVectorizer(analyzer='word', ngram_range=(2,2), lowercase=True)
X = vect.fit_transform(trainingdata['movies_reviews'])
y = trainingdata['labels']
nb = GaussianNB().fit(X.A, y)  # GaussianNB needs a dense input
nb.predict(X.A) == y

# second option - MultinomialNB (input must be non-negative, so use CountVectorizer instead)
vect = CountVectorizer(analyzer='word', ngram_range=(2,2), lowercase=True)
X = vect.fit_transform(trainingdata['movies_reviews'])
y = trainingdata['labels']
nb = MultinomialNB().fit(X, y)
nb.predict(X) == y

In both cases the output is

Out[33]: 
0    True
1    True
2    True
3    True
4    True
5    True
6    True
Name: labels, dtype: bool
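For the grid search itself, a minimal sketch with `GridSearchCV` over `C` and the kernel. It rebuilds the training file from the question as a hypothetical in-memory `data` frame, and `cv=2` is forced by the tiny dataset (the search ranges are illustrative, not tuned):

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# In-memory version of the training file from the question
data = pd.DataFrame({
    'labels': ['POS', 'NEG', 'NEU', 'POS', 'NEG', 'NEU', 'NEU'],
    'movies_reviews': [
        'This film was awesome, highly recommended',
        'I did not like this film',
        'I went to the movies',
        'this film is very interesting, i liked a lot',
        'the film was very boring i did not like it',
        'the cinema is big',
        'the cinema was dark',
    ],
})

vect = CountVectorizer(analyzer='word', ngram_range=(1, 2), lowercase=True)
X = vect.fit_transform(data['movies_reviews'])
y = data['labels']

# Search over C and kernel; cv must be small because there are only 7 samples
grid = GridSearchCV(SVC(),
                    param_grid={'C': [0.1, 1, 10, 100],
                                'kernel': ['linear', 'rbf']},
                    cv=2)
grid.fit(X, y)
print(grid.best_params_)
print(grid.predict(vect.transform(['I did not like this film'])))
```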
