Writing timestamps to Postgres with PySpark



I'm working on a Spark script in Python (using PySpark). I have a function that returns a Row with several fields, including

timestamp=datetime.strptime(processed_data[1], DATI_REGEX)

where processed_data[1] is a valid datetime string.

Edited to show the full code:

DATI_REGEX = "%Y-%m-%dT%H:%M:%S"
class UserActivity(object):
    def __init__(self, user, rows):
        self.user = int(user)
        self.rows = sorted(rows, key=operator.attrgetter('timestamp'))
    def write(self):
        return Row(
            user=self.user,
            timestamp=self.rows[-1].timestamp,
        )
def parse_log_line(logline):
    try:
        entries = logline.split('\t')
        processed_data = entries[0].split('t') + entries[1:]
        return Row(
            ip_address=processed_data[9],
            user=int(processed_data[10]),
            timestamp=datetime.strptime(processed_data[1], DATI_REGEX),
        )
    except (IndexError, ValueError):
        return None

log_file = sc.textFile(...)
rows = (log_file.map(parse_log_line).filter(None)
        .filter(lambda x: current_day <= x.timestamp < next_day))
user_rows = rows.map(lambda x: (x.user, x)).groupByKey()
user_dailies = user_rows.map(lambda x: UserActivity(current_day, x[0], x[1]).write())

The problem comes when I try to write this to a PostgreSQL database by doing the following:

fields = [
    StructField("user_id", IntegerType(), False),
    StructField("timestamp", TimestampType(), False),
]
schema = StructType(fields)
user_dailies_schema = SQLContext(sc).createDataFrame(user_dailies, schema)
user_dailies_schema.write.jdbc(
    "jdbc:postgresql:.......",
    "tablename")

I get the following error:

Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/Users/pau/Downloads/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 172, in main
    process()
  File "/Users/pau/Downloads/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 167, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/Users/pau/Downloads/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "/Users/pau/Downloads/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/types.py", line 576, in toInternal
  File "/Users/pau/Downloads/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/types.py", line 576, in <genexpr>
  File "/Users/pau/Downloads/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/types.py", line 436, in toInternal
    return self.dataType.toInternal(obj)
  File "/Users/pau/Downloads/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/types.py", line 190, in toInternal
    seconds = (calendar.timegm(dt.utctimetuple()) if dt.tzinfo
AttributeError: 'int' object has no attribute 'tzinfo'
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:86)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more

Any ideas on how to fix this?

The problem is relatively simple: a PySpark Row is a tuple whose fields are sorted by name. That means that when you create

Row(user=self.user, timestamp=self.rows[-1].timestamp)

the output structure is ordered as follows:

Row(timestamp, user)
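You can check this quickly in a Spark 2.x shell (the version shown in your traceback); the values below are made up just to illustrate the ordering:

from datetime import datetime
from pyspark.sql import Row

# In Spark 2.x, Row(**kwargs) sorts its fields alphabetically by name,
# so `timestamp` ends up at index 0 and `user` at index 1.
r = Row(user=1, timestamp=datetime(2016, 12, 5, 12, 0, 0))
print(r)     # Row(timestamp=datetime.datetime(2016, 12, 5, 12, 0), user=1)
print(r[0])  # the timestamp, not the user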

A StructType, on the other hand, keeps its fields in the order you declare them, so your code ends up feeding the user ID where the timestamp is expected. You should either return a plain tuple:

class UserActivity(object):
    ...
    def write(self):
        return (self.user, self.rows[-1].timestamp)

or use a schema sorted lexicographically by field name:

schema = StructType(sorted(fields, key=operator.attrgetter("name")))

Finally, you can use a namedtuple to get both attribute access and a predefined field order.
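For example, a minimal sketch of that approach (UserDaily is a name introduced here purely for illustration):

import operator
from collections import namedtuple

# Declaration order defines tuple order, so keep it aligned with the
# StructType: user_id first, then timestamp.
UserDaily = namedtuple("UserDaily", ["user_id", "timestamp"])

class UserActivity(object):
    def __init__(self, user, rows):
        self.user = int(user)
        self.rows = sorted(rows, key=operator.attrgetter("timestamp"))

    def write(self):
        return UserDaily(user_id=self.user, timestamp=self.rows[-1].timestamp)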

As a side note, don't use groupByKey like this; it is a typical case for reduceByKey:

(log_file.map(parse_log_line)
    .map(operator.attrgetter("user", "timestamp"))
    .reduceByKey(max))

or, with multiple fields:

from functools import partial
(log_file.map(parse_log_line)
    .map(lambda x: (x.user, x))
    .reduceByKey(partial(max, key=operator.itemgetter("timestamp")))
    .values())

or a DataFrame aggregation:

from pyspark.sql import functions as f
(sqlContext
    .createDataFrame(
        log_file.map(parse_log_line)
          # Another way to handle ordering is to choose fields
          # before you call createDataFrame
          .map(operator.attrgetter("user", "timestamp")),
        schema)
    .groupBy("user_id")
    .agg(f.max("timestamp").alias("timestamp")))

Moreover, if you want to retrieve the SQLContext, you should use the factory method:

SQLContext.getOrCreate(sc)

Creating a new context the way you do can have unexpected side effects.
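Putting it together, a minimal sketch of the corrected write path could look like the one below; the JDBC URL, table name, credentials, and write mode are placeholders to adapt to your setup:

from pyspark.sql import SQLContext
from pyspark.sql.types import StructType, StructField, IntegerType, TimestampType

schema = StructType([
    StructField("user_id", IntegerType(), False),
    StructField("timestamp", TimestampType(), False),
])

# user_dailies is assumed to be an RDD of (user_id, timestamp) tuples,
# in the same order as the schema above.
sql_context = SQLContext.getOrCreate(sc)
user_dailies_df = sql_context.createDataFrame(user_dailies, schema)

user_dailies_df.write.jdbc(
    url="jdbc:postgresql://localhost:5432/mydb",  # placeholder URL
    table="tablename",
    mode="append",
    properties={"user": "...", "password": "...", "driver": "org.postgresql.Driver"},
)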
