How can I improve my text topic classifier?



I am building a text topic classifier that labels sentences or short questions. So far it can label about 30 known topics.

It works well, but it has started to confuse similar questions with each other.

For example, take these 3 labels:

1) Label - backup_proxy_intranet: How do I set up a backup proxy for the intranet app? ... plus 140 similar questions containing "backup proxy for the intranet app"...

2) Label - smartphone_intranet: How do I use the intranet app on my smartphone? ... plus 140 similar questions containing "the intranet app on my smartphone"...

3) Label - ticket_intranet: How do I link a ticket to the intranet app? ... plus 140 similar questions containing "order a ticket using the intranet app"...

After training, all 3 always come back with the label backup_proxy_intranet. What can I do to separate them?

# Imports assumed by this snippet (not shown in the original post)
import pickle
from nltk.tokenize import word_tokenize
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline  # imblearn's pipeline is required to chain SMOTE

# 'series' is a DataFrame with 'phrase' and 'target' columns;
# 'stemmer' and 'stops' (a stemmer and a stop-word list) are defined elsewhere.
series = series.dropna()
series = shuffle(series)

# Stem every token of every phrase
X_stemmed = []
for x_t in series['phrase']:
    stemmed_text = [stemmer.stem(i) for i in word_tokenize(x_t)]
    X_stemmed.append(' '.join(stemmed_text))

# Remove stop words
x_normalized = []
for x_t in X_stemmed:
    temp_corpus = x_t.split(' ')
    corpus = [token for token in temp_corpus if token not in stops]
    x_normalized.append(' '.join(corpus))

X_train, X_test, y_train, y_test = train_test_split(x_normalized, series['target'], random_state=0, test_size=0.20)

# Bag-of-words features with 1- to 3-grams
vect = CountVectorizer(ngram_range=(1, 3)).fit(X_train)
X_train_vectorized = vect.transform(X_train)

# Oversample minority classes, then fit a logistic regression
sampler = SMOTE()
model = make_pipeline(sampler, LogisticRegression())
print()
print("-->Model: ")
print(model)
print()
print("-->Training... ")
model.fit(X_train_vectorized, y_train)

# Persist the trained model and the fitted vectorizer
filename = '/var/www/html/python/intraope_bot/lib/textTopicClassifier.model'
pickle.dump(model, open(filename, 'wb'))
filename2 = '/var/www/html/python/intraope_bot/lib/textTopicClassifier.vector'
pickle.dump(vect, open(filename2, 'wb'))

Best regards!

I think you may want to use sklearn's TfidfVectorizer: it should help you improve your scores!

>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> corpus = [
...     "Label - backup_proxy_intranet: How to set up a backup proxy for intranet app? ... and 140 similar questions containing 'backup proxy for intranet app'",
...     "Label - smartphone_intranet: How to use intranet app in my smartphone? and ... and 140 similar questions containing 'intranet app in my smartphone'...",
... ]
>>> vectorizer = TfidfVectorizer()
>>> X = vectorizer.fit_transform(corpus)
>>> print(vectorizer.get_feature_names())
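
In your pipeline the swap could look roughly like the sketch below. It is only a minimal sketch: it assumes the X_train/X_test/y_train/y_test split from your code above, and the ngram_range and sublinear_tf settings are suggestions, not required values.

# Minimal sketch: replace CountVectorizer with TfidfVectorizer,
# keeping the rest of the pipeline from the question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline

# TF-IDF down-weights terms that appear in every class (e.g. "intranet app"),
# so the class-specific terms (backup proxy / smartphone / ticket) carry more weight.
vect = TfidfVectorizer(ngram_range=(1, 3), sublinear_tf=True).fit(X_train)
X_train_vectorized = vect.transform(X_train)
X_test_vectorized = vect.transform(X_test)

model = make_pipeline(SMOTE(), LogisticRegression(max_iter=1000))
model.fit(X_train_vectorized, y_train)
print("Test accuracy:", model.score(X_test_vectorized, y_test))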

I also got my best results with Multinomial Naive Bayes from scikit-learn.
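
For example, a minimal sketch assuming the same x_normalized and series['target'] data from the question; pairing it with TF-IDF and the alpha value are just assumptions, not settings from the original post:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# x_normalized and series['target'] come from the question's preprocessing
X_train, X_test, y_train, y_test = train_test_split(
    x_normalized, series['target'], random_state=0, test_size=0.20)

nb = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB(alpha=0.1))
nb.fit(X_train, y_train)
print("Test accuracy:", nb.score(X_test, y_test))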
