Error in PageRank with GraphFrames



I am new to PySpark and trying to understand how PageRank works. I am using Spark 1.6 in Jupyter on Cloudera. Screenshots of my vertices and edges (and their schemas) are at these links: vertices RDD and edges RDD.

So far I have the following code:

#import relevant libraries for Graph Frames
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.functions import desc
from graphframes import *
#Note: sc and sqlContext are assumed to be pre-created by the Spark 1.6 Jupyter kernel on Cloudera
#Read the csv files 
verticesRDD = sqlContext.read.format("com.databricks.spark.csv").options(header='true', inferschema='true').load("filepath/station.csv")
edgesRDD = sqlContext.read.format("com.databricks.spark.csv").options(header='true', inferschema='true').load("filepath/trip.csv")
#Renaming the id columns to enable GraphFrame 
verticesRDD = verticesRDD.withColumnRenamed("station_ID", "id")
edgesRDD = edgesRDD.withColumnRenamed("Trip ID", "id")
edgesRDD = edgesRDD.withColumnRenamed("Start Station", "src")
edgesRDD = edgesRDD.withColumnRenamed("End Station", "dst")
#Register as temporary tables for running the analysis
verticesRDD.registerTempTable("verticesRDD")
edgesRDD.registerTempTable("edgesRDD")
#Note: whether I register the DataFrames as temp tables or not, I get the same results... so I'm not sure if this step is really needed
#Make the GraphFrame
g = GraphFrame(verticesRDD, edgesRDD)
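
Before running any algorithm, it can help to check which columns GraphFrames will treat as vertex ids and edge endpoints. A minimal sanity-check sketch, assuming the DataFrames built above:

#Sketch: inspect the graph that was just built (column types and a few rows)
g.vertices.printSchema()
g.edges.printSchema()
g.vertices.select("id").show(5)
g.edges.select("src", "dst").show(5)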

Now, when I run the pageRank function:

g.pageRank(resetProbability=0.15, maxIter=10)

Py4JJavaError: An error occurred while calling o98.run: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 79.0 failed 1 times, most recent failure: Lost task 0.0 in stage 79.0 (TID 2637, localhost): scala.MatchError: [null,null,[913460,765,8/31/2015 23:26,Harry Bridges Plaza (Ferry Building),50,8/31/2015 23:39,San Francisco Caltrain (Townsend at 4th),70,288,Subscriber,2139]] (class org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema)

results = g.pageRank(resetProbability=0.15, maxIter=10, sourceId="id")

Py4JJavaError: An error occurred while calling o166.run: org.graphframes.NoSuchVertexException: GraphFrame algorithm given vertex ID which does not exist in Graph. Vertex ID id not contained in GraphFrame(v:[id: int, name: string, lat: double, long: double, dockcount: int, landmark: string, installation: string], e:[src: string, dst: string, id: int, duration: int, start date: string, start terminal: int, end date: string, end terminal: int, bike #: int, subscriber type: string, zip code: string])

ranks = g.pageRank.resetProbability(0.15).maxIter(10).run()

AttributeError: 'function' object has no attribute 'resetProbability'

ranks = g.pageRank(resetProbability=0.15, maxIter=10).run()

Py4JJavaError: An error occurred while calling o188.run: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 90.0 failed 1 times, most recent failure: Lost task 0.0 in stage 90.0 (TID 2641, localhost): scala.MatchError: [null,null,[913460,765,8/31/2015 23:26,Harry Bridges Plaza (Ferry Building),50,8/31/2015 23:39,San Francisco Caltrain (Townsend at 4th),70,288,Subscriber,2139]] (class org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema)

I have been reading up on PageRank but cannot figure out where I am going wrong. Any help would be appreciated.
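
One way to narrow down a scala.MatchError like the one above is to check whether every edge src/dst value actually matches a vertex id; the leading [null, null, ...] in the failed row suggests the join against the vertices found no match. A diagnostic sketch, assuming the DataFrames defined above (a left-outer join plus a null filter is used because Spark 1.6 has no left-anti join):

#Diagnostic sketch: list edge src values that do not match any vertex id.
#A non-empty result would mean the edges point at vertices that do not exist in the graph.
unmatched_src = (edgesRDD
    .join(verticesRDD, edgesRDD.src == verticesRDD.id, "left_outer")
    .where(verticesRDD.id.isNull())
    .select(edgesRDD.src)
    .distinct())
unmatched_src.show(5)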

The problem was how I defined my vertices. I renamed "station_ID" to "id", when in fact it had to be "name". So this line

verticesRDD = verticesRDD.withColumnRenamed("station_ID", "id")

needs to be

verticesRDD = verticesRDD.withColumnRenamed("name", "id")

PageRank works fine with this change!
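
Putting it together, a sketch of the corrected setup (same CSVs as above, with the vertex id taken from the station name so it matches the src/dst strings in the trips); note that in the GraphFrames Python API, g.pageRank(...) returns the result graph directly, so there is no .run():

#Sketch of the corrected setup (assumes the same station.csv and trip.csv as above)
verticesRDD = sqlContext.read.format("com.databricks.spark.csv").options(header='true', inferschema='true').load("filepath/station.csv")
edgesRDD = sqlContext.read.format("com.databricks.spark.csv").options(header='true', inferschema='true').load("filepath/trip.csv")
#Use the station name as the vertex id so it matches the src/dst strings in the trips
verticesRDD = verticesRDD.withColumnRenamed("name", "id")
edgesRDD = edgesRDD.withColumnRenamed("Start Station", "src")
edgesRDD = edgesRDD.withColumnRenamed("End Station", "dst")
g = GraphFrame(verticesRDD, edgesRDD)
#In the Python API, pageRank returns a GraphFrame directly (no .run())
results = g.pageRank(resetProbability=0.15, maxIter=10)
#The result vertices carry a "pagerank" column
results.vertices.select("id", "pagerank").orderBy(desc("pagerank")).show(10)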
