Modeling data with DataFrames



I am trying to train on a dataset to predict whether input text comes from a science fiction novel or not. I am relatively new to Python, so I don't know what I'm doing wrong.

Code:

#class17.py
"""
Created on Fri Nov 17 14:07:36 2017
@author: twaters
Read three science fiction novels
Predict a sentence or paragraph
see whether sentence/phrase/book is from a science fiction novel or not
"""
import nltk
import pandas as pd
import csv
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn import model_selection
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from nltk.corpus import stopwords
#nltk.download()

irobot = "C:/Users/twaters/Desktop/Assignments/SQL/Python/DA Project/irobot.txt"
enders_game = "C:/Users/twaters/Desktop/Assignments/SQL/Python/DA Project/endersgame.txt"
space_odyssey ="C:/Users/twaters/Desktop/Assignments/SQL/Python/DA Project/spaceodyssey.txt"
to_kill_a_mockingbird = "C:/Users/twaters/Desktop/Assignments/SQL/Python/DA Project/tokillamockingbird.txt"
sr = set(stopwords.words('english'))
freq = {}
def main():
    #read_novels()
    model_novels()

def read_novel(b, is_scifi):
    read_file = open(b)
    text = read_file.read()
    words = text.split()
    clean_tokens = words[:]
    filtered_list = []
    for word in clean_tokens:
        word = word.lower()
        if word not in sr:
            filtered_list.append(word)
    freq = nltk.FreqDist(clean_tokens)
    #print(filtered_list)
    for word in clean_tokens:
       count = freq.get(word,0)
       freq[word] = count + 1

    frequency_list = freq.keys()
    with open('C:/Users/twaters/Desktop/Assignments/SQL/Python/DA Project/novels_data.txt', 'w', encoding='utf-8') as csvfile:
        fieldnames = ['word','frequency','is_scifi']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames, lineterminator='\n')
        writer.writeheader()
        for words in frequency_list:
            writer.writerow({'word': words,'frequency': freq[words],'is_scifi':is_scifi})
    print("List compiled.")
def read_novels(): 
    read_novel(enders_game, 0)
    read_novel(space_odyssey, 0)
    read_novel(irobot, 0)
    read_novel(to_kill_a_mockingbird, 1)
def model_novels():
    df = pd.read_csv('C:/Users/twaters/Desktop/Assignments/SQL/Python/DA Project/novels_data.txt', 'rb', delimiter='\t', encoding='utf-8')
    print(df)
    #for index in range(2, df.shape[0], 100):
    df_subset = df.loc[1:]
    #print(df_subset)
    X = df_subset.loc[:, 'frequency':'is_scifi']
    Y = df_subset.loc[:, 'frequency':'is_scifi']
    testing_size = 0.2
    seed = 7
    X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=testing_size, random_state=seed)
    selectedModel = LogisticRegression()
    selectedModel.fit(X_train, Y_train)  
    predictions = selectedModel.predict(X_validation)
#%%
#print("Accuracy Score:n", accuracy_score(Y_validation, predictions))
#print("Confusion Matrix:n",confusion_matrix(predictions, Y_validation))
#print("Class report:n", classification_report(Y_validation, predictions))
#df_test = pd.read_csv('C:/Users/twaters/Desktop/Assignments/SQL/Python/DA Project/novels_data.txt', delimiter='t')
#predictions_test = selectedModel.predict(df_test)
#test_frame = pd.DataFrame(predictions_test)
#test_frame.to_csv('C:/Users/twaters/Desktop/Assignments/SQL/Python/DA Project/novels_data_result.txt', sep='t')

Error:

Traceback (most recent call last):

  File "", line 1, in <module>
    main()

  File "C:/Users/twaters/Desktop/Assignments/SQL/Python/DA Project/class17.py", line 36, in main
    model_novels()

  File "C:/Users/twaters/Desktop/Assignments/SQL/Python/DA Project/class17.py", line 95, in model_novels
    selectedModel.fit(X_train, Y_train)

  File "D:\Program Files (x86)\Anaconda\lib\site-packages\sklearn\linear_model\logistic.py", line 1216, in fit
    order="C")

  File "D:\Program Files (x86)\Anaconda\lib\site-packages\sklearn\utils\validation.py", line 573, in check_X_y
    ensure_min_features, warn_on_dtype, estimator)

  File "D:\Program Files (x86)\Anaconda\lib\site-packages\sklearn\utils\validation.py", line 453, in check_array
    _assert_all_finite(array)

  File "D:\Program Files (x86)\Anaconda\lib\site-packages\sklearn\utils\validation.py", line 44, in _assert_all_finite
    " or a value too large for %r." % X.dtype)

ValueError: Input contains NaN, infinity or a value too large for dtype('float64').

If you need access to the files I am reading, I can link to them.

Thank you for your help!

These are the points in the stack trace you should pay attention to:

file" c:/user/user/user/desktop/sistments/sql/python/da project/class17.py",第95行,在model_novels selectionmodel.fit(x_train,y_train)

file" d: program文件(x86) anaconda lib lib site-packages sklearn sklearn utils utils utils veration.py",第44行,in _assert_all_finite中的第44行或一个太大的值。%x.dtype)

This tells us there is a problem with the format of X, such that logistic regression will not accept it.

You should inspect X_train and X to see whether they contain bad values.

This answer will give you some pointers on how to do that:

Python pandas: check if any value is NaN in a DataFrame
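
For example, a minimal sketch of that check (assuming the df built in model_novels above and the same tab-separated novels_data.txt file) might look like:

# Sketch only: check the DataFrame you feed into fit() for NaN values.
# Assumes the same file path and tab-separated layout used in the question.
import pandas as pd

df = pd.read_csv('C:/Users/twaters/Desktop/Assignments/SQL/Python/DA Project/novels_data.txt',
                 delimiter='\t', encoding='utf-8')

print(df.isnull().values.any())   # True if any cell anywhere in the frame is NaN
print(df.isnull().sum())          # per-column NaN counts, to see which column is the problem

# One option is to drop the offending rows before building X and Y:
df_clean = df.dropna()

Note that isnull() only flags NaN/None values; it will not catch non-numeric strings, so it is also worth checking the dtypes of the columns you pass to fit().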
