I am trying to replicate this project (Binary Classification: Twitter sentiment analysis) in Python.
The steps are as follows:
Step 1: Get data
Step 2: Text preprocessing using R
Step 3: Feature engineering
Step 4: Split the data into train and test
Step 5: Train prediction model
Step 6: Evaluate model performance
Step 7: Publish prediction web service
I am currently at Step 4, but I don't think I can continue.
import pandas
import re
from sklearn.feature_extraction import FeatureHasher
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
from sklearn import cross_validation
#read the dataset of tweets
header_row=['sentiment','tweetid','date','query', 'user', 'text']
train = pandas.read_csv("training.1600000.processed.noemoticon.csv",names=header_row)
#keep only the right columns
train = train[["sentiment","text"]]
#remove punctuation, special characters and numbers, and lowercase the text
def remove_spch(text):
    return re.sub("[^a-z]", ' ', text.lower())
train['text'] = train['text'].apply(remove_spch)
#Feature Hashing
def tokens(doc):
    """Extract tokens from doc.
    This uses a simple regex to break strings into tokens.
    """
    return (tok.lower() for tok in re.findall(r"\w+", doc))
n_features = 2**18
hasher = FeatureHasher(n_features=n_features, input_type="string", non_negative=True)
X = hasher.transform(tokens(d) for d in train['text'])
#Feature selection: keep the best 20,000 features using chi-square
X_new = SelectKBest(chi2, k=20000).fit_transform(X, train['sentiment'])
#Using StratifiedKFold, split the data into train and test sets
skf = cross_validation.StratifiedKFold(X_new, n_folds=2)
I am confident that the last line is wrong, because it passes only the 20,000 features and not the sentiment column from the pandas dataframe. How can I "join" the sparse matrix X_new with the dataframe train, include it in the cross_validation, and then use it with a classifier?
You should pass your class labels to StratifiedKFold and then use skf as an iterator: on each iteration it yields the indices of the test and train sets, which you can use to split your dataset.
See the code example in the official scikit-learn documentation: StratifiedKFold
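Below is a minimal sketch of that approach, keeping the old sklearn.cross_validation API that your code already imports (newer versions use sklearn.model_selection.StratifiedKFold). The classifier here (MultinomialNB) is just an illustrative choice, not something prescribed by your pipeline:

from sklearn.naive_bayes import MultinomialNB

y = train['sentiment'].values          # class labels as a numpy array

#pass the labels, not the feature matrix, to StratifiedKFold
skf = cross_validation.StratifiedKFold(y, n_folds=2)

for train_index, test_index in skf:
    #row-index the sparse matrix and the label array with the generated indices
    X_train, X_test = X_new[train_index], X_new[test_index]
    y_train, y_test = y[train_index], y[test_index]

    clf = MultinomialNB()
    clf.fit(X_train, y_train)
    print(clf.score(X_test, y_test))   #accuracy on the held-out fold

Because the indices refer to rows, you never need to "join" X_new back into the dataframe; you just slice the sparse matrix and the label array in parallel.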