PySpark VectorAssembler on Ngram/Tokenizer Transformed Dataframe

If I have a dataframe with the fields ['did','doc'], e.g.

data = (sc.parallelize(['This is a test',
                        'This is also a test',
                        'These sentence are tests',
                        'This tests these sentences'])
          .zipWithIndex()
          .map(lambda x: (x[1], x[0]))
          .toDF(['did', 'doc']))
data.show()
+---+--------------------+
|did|                 doc|
+---+--------------------+
|  0|      This is a test|
|  1| This is also a test|
|  2|These sentence ar...|
|  3|This tests these ...|
+---+--------------------+

and I do a few transformations on the doc column, such as tokenizing it and finding 2-grams:

from pyspark.ml.feature import Tokenizer, NGram

data = Tokenizer(inputCol='doc', outputCol='words').transform(data)
data = NGram(n=2, inputCol='words', outputCol='grams').transform(data)
data.show()
+---+--------------------+--------------------+--------------------+
|did|                 doc|               words|               grams|
+---+--------------------+--------------------+--------------------+
|  0|      This is a test| [this, is, a, test]|[this is, is a, a...|
|  1| This is also a test|[this, is, also, ...|[this is, is also...|
|  2|These sentence ar...|[these, sentence,...|[these sentence, ...|
|  3|This tests these ...|[this, tests, the...|[this tests, test...|
+---+--------------------+--------------------+--------------------+

then, at the end, I want to combine the 2-grams and the words into a single feature column with VectorAssembler:

data = VectorAssembler(inputCols=['words','grams'],
                       outputCol='features').transform(data)

I then get the following error:

Py4JJavaError: An error occurred while calling o504.transform.
: java.lang.IllegalArgumentException: Data type ArrayType(StringType,true) is not supported.
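
For context, VectorAssembler only accepts numeric, boolean, and Vector type columns, which is why the array&lt;string&gt; columns coming out of Tokenizer and NGram are rejected. As a sketch of what it would accept: hashing each array into a term-frequency vector with HashingTF does work, though it replaces the raw token lists with hashed counts, which is not what I want here (the *_tf column names are made up for illustration):

from pyspark.ml.feature import HashingTF

# Hash each array<string> column into a fixed-size term-frequency vector;
# VectorAssembler is happy to combine Vector columns.
tf_words = HashingTF(inputCol='words', outputCol='words_tf', numFeatures=1024)
tf_grams = HashingTF(inputCol='grams', outputCol='grams_tf', numFeatures=1024)
vectorized = tf_grams.transform(tf_words.transform(data))
data_tf = VectorAssembler(inputCols=['words_tf', 'grams_tf'],
                          outputCol='features').transform(vectorized)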

Since VectorAssembler doesn't like working with lists of strings, I can get around this by dropping the dataframe down to an RDD, mapping the RDD to the appropriate rows, and pulling it back up to a dataframe, à la

from pyspark.sql import Row

data = data.rdd.map(lambda x: Row(did=x['did'],
                                  features=x['words'] + x['grams'])).toDF(['did', 'features'])

That's not a problem for this tiny dataset, but for a large dataset it is very expensive, since every row has to be serialized out to Python and back.

Is there any way to achieve this more efficiently than the above?

You can create the features column with a UDF, e.g. like this:

import pyspark.sql.functions as f
import pyspark.sql.types as t

# concatenate the two array<string> columns into a single list per row
udf_add = f.udf(lambda x, y: x + y, t.ArrayType(t.StringType()))
data.withColumn('features', udf_add('words', 'grams')).select('features').collect()
[Row(features=['this', 'is', 'a', 'test', 'this is', 'is a', 'a test']),
Row(features=['this', 'is', 'also', 'a', 'test', 'this is', 'is also', 'also a', 'a test']),
Row(features=['these', 'sentence', 'are', 'tests', 'these sentence', 'sentence are', 'are tests']),
Row(features=['this', 'tests', 'these', 'sentences', 'this tests', 'tests these', 'these sentences'])]
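
An alternative worth noting, assuming Spark 2.4 or later: the built-in concat function also accepts array columns, which produces the same result without the per-row Python serialization a UDF incurs:

# In Spark 2.4+, f.concat works on array<string> columns natively
data.withColumn('features', f.concat('words', 'grams')).select('features').collect()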
