Runtime error with a udf in Scala Spark



> I'm trying to create a new column in a DataFrame. The new column will contain a formatted date string built from a Long timestamp (in milliseconds).

I keep getting this error:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.spark.sql.DataFrameReader.jdbc(Ljava/lang/String;Ljava/lang/String;Ljava/util/Properties;)Lorg/apache/spark/sql/Dataset;

It happens in the following code:

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.spark.sql.{DataFrame, SQLContext}
import joptsimple.OptionParser
import org.apache.spark.sql.functions._
import java.text.SimpleDateFormat
import org.apache.spark.sql.functions.udf
    .
    .
    .
    // UDF that formats an epoch-millisecond timestamp as a date string
    val formatDateUDF = udf((ts: Long) => {
      new SimpleDateFormat("yyyy.MM.dd.HH.mm.ss").format(ts)
    })
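(For context, a udf defined this way is normally attached with withColumn or inside a select. A minimal sketch of such a call, using hypothetical names df and ts, since that part of the code is elided above:)

// Hypothetical usage; the actual DataFrame and column names are not shown in the question
val withDate = df.withColumn("formatted_date", formatDateUDF(col("ts")))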

I'm using the following dependencies in build.sbt:

scalaVersion := "2.11.11"
libraryDependencies ++= Seq(
  // Spark dependencies
  "org.apache.spark" % "spark-hive_2.11" % "2.1.1" % "provided",
  "org.apache.spark" % "spark-mllib_2.11" % "2.1.1" % "provided",
  // Third-party libraries
  "postgresql" % "postgresql" % "9.1-901-1.jdbc4",
  "net.sf.jopt-simple" % "jopt-simple" % "5.0.3",
  "org.scalactic" %% "scalactic" % "3.0.1",
  "org.scalatest" %% "scalatest" % "3.0.1" % "test",
  "joda-time" % "joda-time" % "2.9.9"
)

I'm open to other approaches that might be easier (or at least work).

I think the from_unixtime function should work better here:

import spark.implicits._ // for .toDF and the 'colName syntax; spark is the active SparkSession
import org.apache.spark.sql.functions.from_unixtime

val input = List(
  ("a", 1497348453L),
  ("b", 1497345453L),
  ("c", 1497341453L),
  ("d", 1497340453L)
).toDF("name", "timestamp")

input.select(
  'name,
  from_unixtime('timestamp, "yyyy.MM.dd.HH.mm.ss").alias("timestamp_formatted")
).show()

Output:

+----+-------------------+
|name|timestamp_formatted|
+----+-------------------+
|   a|2017.06.13.12.07.33|
|   b|2017.06.13.11.17.33|
|   c|2017.06.13.10.10.53|
|   d|2017.06.13.09.54.13|
+----+-------------------+
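One caveat, since the question's timestamps are in milliseconds: from_unixtime expects epoch seconds, so a millisecond value has to be divided by 1000 first. A minimal sketch, assuming a hypothetical timestamp_ms column holding epoch milliseconds:

val inputMs = List(("a", 1497348453000L)).toDF("name", "timestamp_ms")

inputMs.select(
  'name,
  // from_unixtime takes epoch *seconds*, so scale the millisecond value down
  from_unixtime(('timestamp_ms / 1000).cast("long"), "yyyy.MM.dd.HH.mm.ss")
    .alias("timestamp_formatted")
).show()

(The SimpleDateFormat UDF in the question, by contrast, formats milliseconds directly, since DateFormat.format accepts a Number of epoch milliseconds. The built-in function is still preferable: it stays visible to the Catalyst optimizer, whereas a UDF is a black box.)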
