How to test an XGB model with a single new sentence



We trained an XGB model on a file with two columns: Summary contains text and Security_Flag contains 0 or 1. Training and testing work fine. Now we want to classify a new sentence that was not contained in the original file. As long as the sentence only uses words already known from the original file, it still works. But as soon as it contains a completely new word, we get an error message.

Everything runs - only the last line of code throws the error.

Please advise, thanks.

We have already tried feeding the new sentence in several different ways.

import matplotlib.pyplot as plt
from xgboost import plot_tree
import xgboost as xgb
import pandas as pd
import numpy as np
import pickle
import string
import nltk
import csv
import os
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.datasets import dump_svmlight_file
from sklearn.metrics import precision_score
from sklearn.externals import joblib
from sklearn.metrics import confusion_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
def pp(text):
    # tokenize into words
    # remove stopwords
    stop = stopwords.words('german')
    tokens = [word for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
    tokens = [token for token in tokens if token not in stop]
    # remove words less than three letters
    tokens = [word for word in tokens if len(word) >= 3]
    # lower capitalization
    tokens = [word.lower() for word in tokens]
    # lemmatize
    lmtzr = nltk.WordNetLemmatizer()
    tokens = [lmtzr.lemmatize(word) for word in tokens]
    preprocessed_text= ' '.join(tokens)
    return preprocessed_text
df = pd.read_csv("file03.csv", sep=",", usecols=["Security_Flag","Summary"])
y = df["Security_Flag"]
# from dataframe to array for train test splitting
y = y.values
Z = []
for row in df['Summary']:
    l = pp(row)
    Z.append(l)
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(Z)
X = X.toarray()
#X = pd.DataFrame(data=X[0:,0:])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=41)
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)
param = {
'max_depth': 3, # the maximum depth of each tree
'eta': 0.3, # the training step for each iteration
'silent': 1, # logging mode - quiet
'objective': 'multi:softprob', # error evaluation for multiclass training
# 'objective': 'binary:logistic', # error evaluation for multiclass training
'num_class': 2} # the number of classes that exist in this datset
num_round = 20 # the number of training iterations
bst = xgb.train(param, dtrain, num_round)
preds = bst.predict(dtest)
best_preds = np.asarray([np.argmax(line) for line in preds])
stest = xgb.DMatrix([X_test[0]])
spred = bst.predict(stest)
print(confusion_matrix(y_test, best_preds))
while True:
    ts = input("Enter a sentence: ")
    ts = pp(ts)
    Z.append(ts)
    Y = vectorizer.fit_transform(Z)
    Y = Y.toarray()
    test = xgb.DMatrix([Y[-1]])
    spred = bst.predict(test)

"预期的结果将是一个或零。输出是一个错误消息。"培训数据没有以下字段:F1354,F1355,F1352,F1353

Try the following instead:

Y = vectorizer.transform(Z)

because you already called fit_transform() at the beginning.
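For completeness, a minimal sketch of the input loop with that change applied, assuming vectorizer, pp and the trained booster bst from the code above are still in scope. transform() reuses the vocabulary learned by the original fit_transform(), so unseen words are simply dropped and the feature count stays consistent with the model:

while True:
    ts = input("Enter a sentence: ")
    ts = pp(ts)
    # transform() keeps the training-time vocabulary, so the new sentence
    # is encoded with exactly the columns the booster was trained on
    row = vectorizer.transform([ts]).toarray()
    dnew = xgb.DMatrix(row)
    spred = bst.predict(dnew)            # shape (1, num_class) with multi:softprob
    print(int(np.argmax(spred[0])))      # predicted class, 0 or 1

Alternatively you can keep appending to Z and call vectorizer.transform(Z) as in your original loop and then take Y[-1]; the key point is only that the vectorizer must not be refitted after the model has been trained.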
