Group a dataset by a derived column value

I want to group the following dataset by a value derived from the timestamp column, namely the year, which is a predictable substring of the timestamp.

doi                         timestamp
10.1515/cipms-2015-0089     2016-06-09T18:29:46.000046+01:00
10.1515/cipms-2015-0089     2016-06-09T18:29:46.000046+01:00
10.1007/s13595-016-0548-3   2015-06-08T17:01:10.000010+01:00

I realize I could add my own derived column and filter on it, but is there a way to specify this in a single groupBy statement, without adding an extra column purely for grouping purposes?

If I understand your question correctly, you want to extract the year inside the groupBy clause itself:

import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions.{unix_timestamp, year}
val sc: SparkContext = ??? // assumes you can create your own SparkContext and SQLContext
val sqlContext: SQLContext = ???
import sqlContext.implicits._ // needed to use implicits like .toDF
val data = Seq(
  "10.1515/cipms-2015-0089     2016-06-09T18:29:46.000046+01:00",
  "10.1515/cipms-2015-0089     2016-06-09T18:29:46.000046+01:00",
  "10.1007/s13595-016-0548-3   2015-06-08T17:01:10.000010+01:00")
// data: Seq[String] = List(10.1515/cipms-2015-0089     2016-06-09T18:29:46.000046+01:00, 10.1515/cipms-2015-0089     2016-06-09T18:29:46.000046+01:00, 10.1007/s13595-016-0548-3   2015-06-08T17:01:10.000010+01:00)
val df = sc.parallelize(data).map(_.split("\\s+") match {
  case Array(doi, time) => (doi, time)
}).toDF("doi", "timestamp")
  .withColumn("timestamp", unix_timestamp($"timestamp", "yyyy-MM-dd'T'HH:mm:ss").cast("timestamp"))
// df: org.apache.spark.sql.DataFrame = [doi: string, timestamp: timestamp]
df.groupBy(year($"timestamp").as("year")).count.show
// +----+-----+
// |year|count|
// +----+-----+
// |2015|    1|
// |2016|    2|
// +----+-----+
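
On newer Spark versions (2.x and later) the same single-statement grouping can be written against a SparkSession instead of a SparkContext/SQLContext pair. The sketch below is an adaptation, not part of the original answer; the session name `spark` and the local master are illustrative assumptions only:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.year

// Hypothetical local session for illustration; in a real job you would reuse the existing one.
val spark = SparkSession.builder().master("local[*]").appName("group-by-year").getOrCreate()
import spark.implicits._

val df = Seq(
  ("10.1515/cipms-2015-0089", "2016-06-09T18:29:46.000046+01:00"),
  ("10.1515/cipms-2015-0089", "2016-06-09T18:29:46.000046+01:00"),
  ("10.1007/s13595-016-0548-3", "2015-06-08T17:01:10.000010+01:00")
).toDF("doi", "timestamp")
  .withColumn("timestamp", $"timestamp".cast("timestamp")) // ISO 8601 strings cast directly to timestamp

// The year is computed inside groupBy itself; no helper column is added to df.
df.groupBy(year($"timestamp").as("year")).count.show

Either way, the derived year exists only within the groupBy expression, which avoids adding a column purely for grouping purposes.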
