How to extract the hour of a timestamp into a new column in a Spark DataFrame



I want to extract the hour from the current-timestamp column and store the hour value in a new column of the DataFrame. Please help.

This should work:

val DF2 = DF1.withColumn("col_1", trim(DF1("col_1")))

Hope this helps:

val df = Seq((" Virat ",18,"RCB"),("Rohit ",45,"MI "),(" DK",67,"KKR ")).toDF("captains","jersey_number","teams")
scala> df.show
+--------+-------------+-----+
|captains|jersey_number|teams|
+--------+-------------+-----+
|  Virat |           18|  RCB|
|  Rohit |           45|  MI |
|      DK|           67| KKR |
+--------+-------------+-----+
scala> val trimmedDF = df.withColumn("captains",trim(df("captains"))).withColumn("teams",trim(df("teams")))
scala> trimmedDF.show
+--------+-------------+-----+
|captains|jersey_number|teams|
+--------+-------------+-----+
|   Virat|           18|  RCB|
|   Rohit|           45|   MI|
|      DK|           67|  KKR|
+--------+-------------+-----+

You can use one of the functions available for column operations:

For Scala:

import org.apache.spark.sql.functions._
val df2 = df.withColumn("hour", hour(col("timestamp_column")))

For Python:

from pyspark.sql.functions import hour, col
df2 = df.withColumn('hour', hour(col('timestamp_column')))
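
For a quick end-to-end check in spark-shell, here is a minimal sketch that builds a small DataFrame with a current-timestamp column and adds the extracted hour as a new column. The column name event_time is only an illustrative assumption, not from the question:

import org.apache.spark.sql.functions._

// Sample data; "event_time" is an assumed column name for illustration.
// (In spark-shell, toDF is available directly; a standalone app would also
// need `import spark.implicits._`.)
val df = Seq("a", "b").toDF("id").withColumn("event_time", current_timestamp())

// hour() returns the hour of the timestamp as an integer in the range 0-23.
val df2 = df.withColumn("hour", hour(col("event_time")))

df2.show(false)  // shows id, event_time, and the extracted hour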

References:

  • org.apache.spark.sql.functions
  • pyspark.sql.functions
