Casting from String to Int in a DF produces null instead of a number



Here is my sample code:

import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.IntegerType
val marketingproj5DF2 = marketingproj5DF.withColumn("ageTmp", 'age.cast(IntegerType)).drop("age").withColumnRenamed("ageTmp","age")

Here is what the DF looks like afterwards:

scala> marketingproj5DF2.show(5)
+--------+----------------+-----------+-------------+-----------+-----------+-----------+--------+-----------+-------+---------+------------+------------+---------+------------+------------+-------+------+
|     age|             job|    marital|    education|    default|    balance|    housing|    loan|    contact|    day|    month|    duration|    campaign|    pdays|    previous|    poutcome|      y|ageTmp|
+--------+----------------+-----------+-------------+-----------+-----------+-----------+--------+-----------+-------+---------+------------+------------+---------+------------+------------+-------+------+
|"""age""|         ""job""|""marital""|""education""|""default""|""balance""|""housing""|""loan""|""contact""|""day""|""month""|""duration""|""campaign""|""pdays""|""previous""|""poutcome""| ""y"""|  null|
|     "58|  ""management""|""married""| ""tertiary""|     ""no""|       2143|    ""yes""|  ""no""|""unknown""|      5|  ""may""|         261|           1|       -1|           0| ""unknown""|""no"""|  null|
|     "44|  ""technician""| ""single""|""secondary""|     ""no""|         29|    ""yes""|  ""no""|""unknown""|      5|  ""may""|         151|           1|       -1|           0| ""unknown""|""no"""|  null|
|     "33|""entrepreneur""|""married""|""secondary""|     ""no""|          2|    ""yes""| ""yes""|""unknown""|      5|  ""may""|          76|           1|       -1|           0| ""unknown""|""no"""|  null|
|     "47| ""blue-collar""|""married""|  ""unknown""|     ""no""|       1506|    ""yes""|  ""no""|""unknown""|      5|  ""may""|          92|           1|       -1|           0| ""unknown""|""no"""|  null|
+--------+----------------+-----------+-------------+-----------+-----------+-----------+--------+-----------+-------+---------+------------+------------+---------+------------+------------+-------+------+
only showing top 5 rows

I am using Spark 1.6 with Scala 2.10.5. The first column is my original "age" column. The data was imported from a .csv, and unless I kept every field as a String I could not get all the data into the DF. Now that I have the "age" column as a String, I am trying to cast/convert the field so I can query against it.
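The nulls themselves are expected Spark behavior: casting a string that cannot be parsed as an integer (here, values like `"58` with a stray quote) silently yields null rather than raising an error. A minimal spark-shell sketch of this, assuming `sqlContext.implicits._` is available for `toDF` and `$`:

```scala
import org.apache.spark.sql.types.IntegerType
import sqlContext.implicits._

// "\"58" still contains a quote character, so it cannot be parsed as an
// Int and the cast returns null; the clean "58" parses normally.
val demo = Seq("\"58", "58").toDF("age")
demo.withColumn("ageInt", $"age".cast(IntegerType)).show()
// +----+------+
// | age|ageInt|
// +----+------+
// | "58|  null|
// |  58|    58|
// +----+------+
```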

The problem is the extra `"` characters in the age column. They need to be removed before the column is cast to Int. Also, you don't need a temporary column followed by dropping the original and renaming the temp column back: calling withColumn() with the existing column name simply overwrites the original column in place.

regexp_replace can strip the extra `"`:

import org.apache.spark.sql.functions.regexp_replace
import org.apache.spark.sql.types.IntegerType
import sqlContext.implicits._

val df = Seq("\"58", "\"44", "\"33", "\"47").toDF("age")
val df2 = df.withColumn("age", regexp_replace($"age", "\"", "").cast(IntegerType))

This gives the desired result:

+---+
|age|
+---+
| 58|
| 44|
| 33|
| 47|
+---+

Applied to your DataFrame (once the stray `"` characters have been removed), the cast can also be written with the fully qualified type name:

import org.apache.spark.sql

val marketingproj5DF2 = marketingproj5DF.withColumn("age", $"age".cast(sql.types.IntegerType))
