How to select multiple (numeric and text) columns with sklearn Pipeline and FeatureUnion for text classification?



I have developed a text model for multilabel classification. A OneVsRestClassifier LinearSVC model uses sklearn's Pipeline and FeatureUnion for model preparation.

The main input features consist of a text column called response, plus 5 topic probabilities named "t1_prob"–"t5_prob" (generated from a previous LDA topic model), used to predict 5 possible labels. There are additional feature-creation steps in the pipeline for generating the TfidfVectorizer features.

In the end I call ItemSelector on each column and run ArrayCaster (see the code below for the function definitions) 5 separate times, once per topic-probability column. Is there a better way to use FeatureUnion to select multiple columns in the pipeline? (So that I don't have to do this 5 times.)

I'd like to know whether duplicating the topic1_feature–topic5_feature code is necessary, or whether multiple columns can be selected in a more concise way.

The data I feed in is a Pandas DataFrame:

id response label_1 label_2 label3  label_4 label_5     t1_prob t2_prob t3_prob t4_prob t5_prob
1   Text from response...   0.0 0.0 0.0 0.0 0.0 0.0     0.0625  0.0625  0.1875  0.0625  0.1250
2   Text to model with...   0.0 0.0 0.0 0.0 0.0 0.0     0.1333  0.1333  0.0667  0.0667  0.0667  
3   Text to work with ...   0.0 0.0 0.0 0.0 0.0 0.0     0.1111  0.0938  0.0393  0.0198  0.2759  
4   Free text comments ...  0.0 0.0 1.0 1.0 0.0 0.0     0.2162  0.1104  0.0341  0.0847  0.0559  

x_train is the response column plus the 5 topic-probability columns (t1_prob through t5_prob).

y_train is the 5 label columns (label_1, label_2, label_3, label_4, label_5), on which I call .values to get the NumPy representation of the DataFrame.

Sample DataFrame:

import pandas as pd

column_headers = ["id", "response",
                  "label_1", "label_2", "label3", "label_4", "label_5",
                  "t1_prob", "t2_prob", "t3_prob", "t4_prob", "t5_prob"]
input_data = [
    [1, "Text from response", 0.0, 0.0, 1.0, 0.0, 0.0, 0.0625, 0.0625, 0.1875, 0.0625, 0.1250],
    [2, "Text to model with", 0.0, 0.0, 0.0, 0.0, 0.0, 0.1333, 0.1333, 0.0667, 0.0667, 0.0667],
    [3, "Text to work with",  0.0, 0.0, 0.0, 0.0, 0.0, 0.1111, 0.0938, 0.0393, 0.0198, 0.2759],
    [4, "Free text comments", 0.0, 0.0, 1.0, 1.0, 1.0, 0.2162, 0.1104, 0.0341, 0.0847, 0.0559]
]
df = pd.DataFrame(input_data, columns=column_headers)
df = df.set_index('id')
df

I think my implementation is somewhat roundabout, because FeatureUnion will only handle 2-D arrays when combining them, so any other type, like a DataFrame column, has been problematic for me. This example works, however; I'm just looking for ways to improve it and make it more DRY.
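To see why the casting is needed at all: selecting a single DataFrame column yields a 1-D pandas Series, while FeatureUnion horizontally stacks its branches and therefore needs a 2-D (n_samples, n_features) array from each one. A minimal illustration with made-up values:

```python
import numpy as np
import pandas as pd

# A single selected column comes back 1-D
s = pd.Series([0.0625, 0.1333, 0.1111], name="t1_prob")

flat = s.values            # shape (3,): 1-D, FeatureUnion cannot hstack this
col = flat.reshape(-1, 1)  # shape (3, 1): a 2-D column vector, stackable

print(flat.shape, col.shape)
```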

import numpy as np
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin

class ItemSelector(BaseEstimator, TransformerMixin):
    """Select a single column from the DataFrame."""
    def __init__(self, column):
        self.column = column

    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        return X[self.column]

class ArrayCaster(BaseEstimator, TransformerMixin):
    """Cast a 1-D Series to a 2-D column vector so FeatureUnion can stack it."""
    def fit(self, x, y=None):
        return self

    def transform(self, data):
        return np.transpose(np.matrix(data))
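For reference, here is what ArrayCaster actually produces: it turns a 1-D Series into an (n, 1) column vector. The class is redefined below (copied from the question) so the snippet runs stand-alone:

```python
import numpy as np
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin

# Redefined from the question so this snippet is self-contained.
class ArrayCaster(BaseEstimator, TransformerMixin):
    def fit(self, x, y=None):
        return self

    def transform(self, data):
        # np.matrix(series) is (1, n); transposing gives the (n, 1) shape
        return np.transpose(np.matrix(data))

s = pd.Series([0.0625, 0.1333, 0.1111], name="t1_prob")
out = ArrayCaster().transform(s)
print(out.shape)  # (3, 1): the column vector each topic*_feature branch contributes
```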

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multiclass import OneVsRestClassifier
from sklearn import svm

def basic_text_model(trainX, testX, trainY, testY, classLabels, plotPath):
    '''OneVsRestClassifier for multi-label prediction'''
    pipeline = Pipeline([
        ('features', FeatureUnion([
            ('topic1_feature', Pipeline([
                ('selector', ItemSelector(column='t1_prob')),
                ('caster', ArrayCaster())
            ])),
            ('topic2_feature', Pipeline([
                ('selector', ItemSelector(column='t2_prob')),
                ('caster', ArrayCaster())
            ])),
            ('topic3_feature', Pipeline([
                ('selector', ItemSelector(column='t3_prob')),
                ('caster', ArrayCaster())
            ])),
            ('topic4_feature', Pipeline([
                ('selector', ItemSelector(column='t4_prob')),
                ('caster', ArrayCaster())
            ])),
            ('topic5_feature', Pipeline([
                ('selector', ItemSelector(column='t5_prob')),
                ('caster', ArrayCaster())
            ])),
            ('word_features', Pipeline([
                ('vect', CountVectorizer(analyzer="word", stop_words='english')),
                ('tfidf', TfidfTransformer(use_idf=True)),
            ])),
        ])),
        # random_state is assumed to be defined by the caller
        ('clf', OneVsRestClassifier(svm.LinearSVC(random_state=random_state)))
    ])
    # Fit the model
    pipeline.fit(trainX, trainY)
    predicted = pipeline.predict(testX)

The reason I incorporated ArrayCaster into this process comes from this answer.

Inspired by @Marcus V's solution, I found an answer to this question using FunctionTransformer. The revised pipeline is much more concise.

from sklearn.preprocessing import FunctionTransformer

get_numeric_data = FunctionTransformer(
    lambda x: x[['t1_prob', 't2_prob', 't3_prob', 't4_prob', 't5_prob']],
    validate=False
)

pipeline = Pipeline([
    ("features", FeatureUnion([
        ("numeric_features", Pipeline([("selector", get_numeric_data)])),
        ("word_features", Pipeline([
            ("vect", CountVectorizer(analyzer="word", stop_words="english")),
            ("tfidf", TfidfTransformer(use_idf=True)),
        ])),
    ])),
    ("clf", OneVsRestClassifier(svm.LinearSVC(random_state=10))),
])
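As a side note, on scikit-learn 0.20+ the same column selection can be done without any custom selector or FunctionTransformer via ColumnTransformer, which accepts column names directly. This is a sketch of that alternative, not the code used in the answer above; TfidfVectorizer here stands in for the CountVectorizer + TfidfTransformer pair:

```python
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

numeric_cols = ["t1_prob", "t2_prob", "t3_prob", "t4_prob", "t5_prob"]

preprocess = ColumnTransformer([
    # "passthrough" selects all five probability columns in one step
    ("topics", "passthrough", numeric_cols),
    # a scalar (not list) column name hands TfidfVectorizer the 1-D text it expects
    ("words", TfidfVectorizer(stop_words="english"), "response"),
])

pipeline = Pipeline([
    ("features", preprocess),
    ("clf", OneVsRestClassifier(LinearSVC(random_state=10))),
])
```

The advantage over FeatureUnion is that each transformer is paired with the columns it should see, so nothing downstream ever receives the whole DataFrame by accident.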
