Creating a NumPy matrix from a PySpark DataFrame



I have a PySpark DataFrame `child` with columns such as:

lat1 lon1
80    70
65    75

I am trying to convert it into a NumPy matrix using IndexedRowMatrix:

from pyspark.mllib.linalg.distributed import IndexedRow, IndexedRowMatrix
mat = IndexedRowMatrix(child.select('lat','lon').rdd.map(lambda row: IndexedRow(row[0], Vectors.dense(row[1:]))))

but this gives me an error. I would like to avoid converting to a pandas DataFrame just to get the matrix.

The error:

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 33.0 failed 4 times, most recent failure: Lost task 0.3 in stage 33.0 (TID 733, ebdp-avdc-d281p.sys.comcast.net, executor 16): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/data/02/yarn/nm/usercache/mbansa001c/appcache/application_1506130884691_56333/container_e48_1506130884691_56333_01_000017/pyspark.zip/pyspark/worker.py", line 174, in main
    process()
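The traceback is truncated, but the failure is consistent with issues visible in the snippet itself (this diagnosis is an assumption, not stated in the original post): the columns are named `lat1`/`lon1` rather than `lat`/`lon`, `Vectors` is never imported, and `row[0]` (a latitude value) is passed as the row index. A common pattern is to let `zipWithIndex` generate the indices instead. Below is a pure-Python sketch of that pairing logic, with the assumed PySpark equivalent shown in comments:

```python
# Pure-Python sketch of the zipWithIndex pattern (no Spark session started here).
# The assumed PySpark equivalent (not from the original post) would look like:
#   from pyspark.mllib.linalg import Vectors
#   from pyspark.mllib.linalg.distributed import IndexedRow, IndexedRowMatrix
#   rows = child.select('lat1', 'lon1').rdd.zipWithIndex() \
#               .map(lambda ri: IndexedRow(ri[1], Vectors.dense(ri[0])))
#   mat = IndexedRowMatrix(rows)
rows = [(80, 70), (65, 75)]          # stand-in for the RDD's Row objects
# zipWithIndex pairs each row with a sequential index, which becomes the
# IndexedRow index instead of the first column's value.
indexed = [(i, list(r)) for r, i in zip(rows, range(len(rows)))]
print(indexed)  # [(0, [80, 70]), (1, [65, 75])]
```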

You want to avoid pandas, yet you try converting to an RDD, which is arguably even worse...

In any case, assuming you can collect the selected columns of the `child` DataFrame (a reasonable assumption, since your goal is to put them into a NumPy array), this can be done with plain NumPy:
import numpy as np
np.array(child.select('lat1', 'lon1').collect())
# array([[80, 70], 
#        [65, 75]])
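If the values should end up as floats (e.g. for downstream linear algebra), the dtype can be set explicitly. A small sketch with the same toy values; the nested list stands in for the collected `Row` objects, which `np.array` also accepts:

```python
import numpy as np

# Stand-in for child.select('lat1', 'lon1').collect()
collected = [[80, 70], [65, 75]]

# dtype=float avoids an integer array when the source columns are integral.
mat = np.array(collected, dtype=float)
print(mat.shape)         # (2, 2)
print(mat.mean(axis=0))  # [72.5 72.5]
```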
