I am working with sklearn's Multinomial Naive Bayes classifier to classify the 20 Newsgroups data. The code is as follows:
import numpy as np
import operator
from sklearn import datasets, naive_bayes, metrics, feature_extraction
data_train = datasets.fetch_20newsgroups(subset = 'train', shuffle = True, random_state = 2016, remove = ('headers', 'footers', 'quotes'))
data_test = datasets.fetch_20newsgroups(subset = 'test', shuffle = True, random_state = 2016, remove = ('headers', 'footers', 'quotes'))
categories = data_train.target_names
target_map = {}
for i in range(len(categories)):
    if 'comp.' in categories[i]:
        target_map[i] = 0
    elif 'rec.' in categories[i]:
        target_map[i] = 1
    elif 'sci.' in categories[i]:
        target_map[i] = 2
    elif 'misc.forsale' in categories[i]:
        target_map[i] = 3
    elif 'talk.politics' in categories[i]:
        target_map[i] = 4
    else:
        target_map[i] = 5
y_temp = data_train.target
y_train = []
for y in y_temp:
    y_train.append(target_map[y])
y_temp = data_test.target
y_test = []
for y in y_temp:
    y_test.append(target_map[y])
count_vectorizer = feature_extraction.text.CountVectorizer(min_df = 0.01, max_df = 0.5, stop_words = 'english')
x_train = count_vectorizer.fit_transform(data_train.data)
x_test = count_vectorizer.transform(data_test.data)
feature_names = count_vectorizer.get_feature_names()
mnb_alpha_001 = naive_bayes.MultinomialNB(alpha = 0.01)
mnb_alpha_001.fit(x_train, y_train)
y_pred_001 = mnb_alpha_001.predict(x_test)
print('Accuracy Of MNB With Alpha = 0.01 : ', metrics.accuracy_score(y_test,y_pred_001))
The code above works and performs the classification. In addition, I would like to list, for each class (class 0 through class 5), the 10 most distinguishing words, i.e. the words that best separate that class from all the others.
If I had only 2 classes (class 0 and class 1), I could use feature_log_prob_ to compare the log probabilities, like this:
diff = mnb_alpha_001.feature_log_prob_[1,:] - mnb_alpha_001.feature_log_prob_[0,:]
name_diff = {}
for i in range(len(feature_names)):
    name_diff[feature_names[i]] = diff[i]
names_diff_sorted = sorted(name_diff.items(), key = operator.itemgetter(1), reverse = True)
for i in range(10):
    print(names_diff_sorted[i])
The code above lists the 10 words that most strongly distinguish class 1 from class 0. The problem is that with more than 2 classes, a simple subtraction of log probabilities no longer works.
I would appreciate advice on how to perform this task so that I get the 10 most distinguishing words for each class.
Thank you.
acc = []
rr = [0.001, 0.01, 0.1, 1, 10]
for alp in [0, 1, 2, 3, 4]:
    # alpha must be the grid value rr[alp], not the loop index alp
    mnb = naive_bayes.MultinomialNB(alpha = rr[alp])
    mnb.fit(x_train, y_train)
    y_pred = mnb.predict(x_test)
    print('accuracy of Multinomial Naive Bayes for alpha ', rr[alp], '=', metrics.accuracy_score(y_test, y_pred))
    acc.append(metrics.accuracy_score(y_test, y_pred))
import operator
pos,m = max(enumerate(acc), key=operator.itemgetter(1))
print("Max accuracy=",m," for alpha=",rr[pos])
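As a side note, the manual loop above selects alpha by scoring on the test set; sklearn's GridSearchCV can run the same search with cross-validation on the training data instead. A minimal sketch on synthetic count data (the toy arrays here are made up for illustration, not the vectorized 20 Newsgroups corpus):

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import GridSearchCV

# Fake non-negative count features: 200 documents, 30 words, 6 classes
rng = np.random.RandomState(0)
X = rng.randint(0, 5, size=(200, 30))
y = rng.randint(0, 6, size=200)

# Cross-validated search over the same alpha grid as the loop above
grid = GridSearchCV(MultinomialNB(),
                    param_grid={'alpha': [0.001, 0.01, 0.1, 1, 10]},
                    cv=3)
grid.fit(X, y)
print('best alpha:', grid.best_params_['alpha'])
print('best CV accuracy:', round(grid.best_score_, 3))
```

On the random labels used here the accuracies are close to chance; on the real x_train/y_train, the selected alpha can then be reused exactly as rr[pos] is below.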
# Refit once with the best alpha, then for each class ss compare its
# feature log probabilities against the maximum over all other classes
mnb = naive_bayes.MultinomialNB(alpha = rr[pos])
mnb.fit(x_train, y_train)
feature_names = count_vectorizer.get_feature_names()
for ss in [0, 1, 2, 3, 4, 5]:
    # one-vs-rest: drop row ss, take the column-wise max of the remaining rows
    rest = np.delete(mnb.feature_log_prob_, ss, axis = 0)
    diff = mnb.feature_log_prob_[ss, :] - np.max(rest, axis = 0)
    name_diff = {}
    for i in range(len(feature_names)):
        name_diff[feature_names[i]] = diff[i]
    names_diff_sorted = sorted(name_diff.items(), key = operator.itemgetter(1), reverse = True)
    for i in range(10):
        print(ss, names_diff_sorted[i])