Reading the contents of Column<COLUMN-NAME> in pyspark



I am using Spark 1.5.0.

I have a DataFrame like the one below:

>>> words = tokenizer.transform(sentenceData)
>>> words
DataFrame[label: bigint, sentence: string, words: array<string>]
>>> words['words']
Column<words>

I want to read all the words (the vocabulary) out of the sentences. How can I read this column?

Edit 1: the error still persists

I am now running this on Spark 2.0.0 and hitting the error below:

>>> wordsData.show()
+--------------------+--------------------+
|                desc|               words|
+--------------------+--------------------+
|Virat is good bat...|[virat, is, good,...|
|     sachin was good| [sachin, was, good]|
|but modi sucks bi...|[but, modi, sucks...|
| I love the formulas|[i, love, the, fo...|
+--------------------+--------------------+
>>> wordsData
DataFrame[desc: string, words: array<string>]

>>> vocab = wordsData.select(explode('words')).rdd.flatMap(lambda x: x)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/BIG-DATA/spark-2.0.0-bin-hadoop2.7/python/pyspark/rdd.py", line 305, in flatMap
    return self.mapPartitionsWithIndex(func, preservesPartitioning)
  File "/opt/BIG-DATA/spark-2.0.0-bin-hadoop2.7/python/pyspark/rdd.py", line 330, in mapPartitionsWithIndex
    return PipelinedRDD(self, f, preservesPartitioning)
  File "/opt/BIG-DATA/spark-2.0.0-bin-hadoop2.7/python/pyspark/rdd.py", line 2383, in __init__
    self._jrdd_deserializer = self.ctx.serializer
AttributeError: 'SparkSession' object has no attribute 'serializer'

Edit: resolution for Edit 1 - link

You can do:

from pyspark.sql.functions import explode
words.select(explode('words')).rdd.flatMap(lambda x: x)
