I'm working on a small NLP project on authorship attribution: I have texts by two authors and I want to tell who wrote which.
I have some preprocessed text (tokens, POS tags, etc.) that I'd like to load into scikit-learn.
The files look like this:
Testo - SPN Testo testare+v+indic+pres+nil+1+sing testo+n+m+sing O
: - XPS colon colon+punc O
" - XPO " quotation_mark+punc O
Buongiorno - I buongiorno buongiorno+inter buongiorno+n+m+_ O
a - E a a+prep O
tutti - PP tutto tutto+adj+m+plur+pst+ind tutto+pron+_+m+_+plur+ind O
. <eos> XPS full_stop full_stop+punc O
Ci - PP pro loc+pron+loc+_+3+_+clit pro+pron+accdat+_+1+plur+clit O
sarebbe - VI essere essere+v+cond+pres+nil+2+sing O
molto - B molto molto+adj+m+sing+pst+ind
So each file is a tab-separated text file with six columns (word, end-of-sentence marker, part of speech, lemma, morphological information, and named-entity tag).
Each file represents one document to classify.
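For reference, a minimal sketch of how one of these files can be read back into its surface words; the read_tokens name and the Latin-1 encoding are just my assumptions:

def read_tokens(path):
    # Keep only the surface word (first tab-separated column) of each line;
    # adjust the encoding to match how the files were actually written.
    with open(path, encoding='latin1') as f:
        return [line.split('\t')[0] for line in f if line.strip()]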
What's the best way to shape them for scikit-learn?
Should I use the structure from the scikit-learn text-analytics example (https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html), described here: http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_files.html?
Replace this:
# Load some categories from the training set
if opts.all_categories:
    categories = None
else:
    categories = [
        'alt.atheism',
        'talk.religion.misc',
        'comp.graphics',
        'sci.space',
    ]

if opts.filtered:
    remove = ('headers', 'footers', 'quotes')
else:
    remove = ()

print("Loading 20 newsgroups dataset for categories:")
print(categories if categories else "all")

data_train = fetch_20newsgroups(subset='train', categories=categories,
                                shuffle=True, random_state=42,
                                remove=remove)
data_test = fetch_20newsgroups(subset='test', categories=categories,
                               shuffle=True, random_state=42,
                               remove=remove)
with your own data-loading statements, for example:
from sklearn.datasets import load_files

# Load the two authorship categories
categories = [
    'high',
    'low',
]
print("loading dataset for categories:")
print(categories if categories else "all")

train_path = 'c:/Users/username/Documents/SciKit/train'
data_train = load_files(train_path, encoding='latin1')
test_path = 'c:/Users/username/Documents/SciKit/test'
data_test = load_files(test_path, encoding='latin1')
and create "high" and "low" subdirectories inside each of the train and test directories, one per category, holding that category's files.
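Once load_files has read them in, data_train.data holds each file's raw contents as one string, so a vectorizer can consume it directly. A minimal sketch of the rest of the pipeline, assuming you only want the surface word (first tab-separated column) as a feature; the words_only helper is my own, and MultinomialNB is just one reasonable baseline classifier:

from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics

def words_only(doc):
    # Analyzer: turn one file's raw contents back into its surface words
    # (first tab-separated column), ignoring the annotation columns.
    return [line.split('\t')[0] for line in doc.splitlines() if line.strip()]

data_train = load_files('c:/Users/username/Documents/SciKit/train', encoding='latin1')
data_test = load_files('c:/Users/username/Documents/SciKit/test', encoding='latin1')

vectorizer = CountVectorizer(analyzer=words_only)
X_train = vectorizer.fit_transform(data_train.data)
X_test = vectorizer.transform(data_test.data)

clf = MultinomialNB().fit(X_train, data_train.target)
predicted = clf.predict(X_test)
print(metrics.classification_report(data_test.target, predicted,
                                    target_names=data_test.target_names))

If you later want stylometric features instead, a variant of words_only that returns the POS column (third field of each line) plugs into the same vectorizer without re-tokenizing.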