I created a dataframe using HiveContext, where one column contains records like:
text1    text2
We want to replace the multiple spaces between the two texts with a single space and get the final output:
text1 text2
Can we achieve this in Spark SQL? Note that we are using a Hive context, registering a temp table and writing SQL queries against it.
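Whatever Spark mechanism you pick, the heavy lifting is a plain regex substitution applied to each string. A minimal plain-Scala sketch of that core step (no Spark required; the helper name `collapseSpaces` is my own):

```scala
// Collapse any run of whitespace into a single space and trim the ends.
def collapseSpaces(s: String): String =
  s.trim.replaceAll("\\s+", " ")

println(collapseSpaces("text1    text2"))  // → text1 text2
```

This is exactly the function body you would hand to a UDF or express with `regexp_replace`.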
Better still, I have since been inspired by a real expert here. It is actually simpler:
import org.apache.spark.sql.functions._
// val myUDf = udf((s:String) => Array(s.trim.replaceAll(" +", " ")))
val myUDf = udf((s:String) => s.trim.replaceAll("\\s+", " ")) // <-- no Array(...)
// Then there is no need to play with columns excessively:
val data = List("i like cheese", " the dog runs ", "text111111 text2222222")
val df = data.toDF("val")
df.show()
val new_df = df.withColumn("new_val", myUDf(col("val")))
new_df.show
import org.apache.spark.sql.functions._
val myUDf = udf((s:String) => Array(s.trim.replaceAll(" +", " ")))
//error: object java.lang.String is not a value --> use Array
val data = List("i like cheese", " the dog runs ", "text111111 text2222222")
val df = data.toDF("val")
df.show()
val new_df = df
.withColumn("udfResult",myUDf(col("val")))
.withColumn("new_val", col("udfResult")(0))
.drop("udfResult")
new_df.show
Output on Databricks:
+--------------------+
| val|
+--------------------+
| i like cheese|
| the dog runs |
|text111111 text...|
+--------------------+
+--------------------+--------------------+
| val| new_val|
+--------------------+--------------------+
| i like cheese| i like cheese|
| the dog runs | the dog runs|
|text111111 text...|text111111 text22...|
+--------------------+--------------------+
Just do it in spark.sql:
regexp_replace(column, ' +', ' ')
https://spark.apache.org/docs/latest/api/sql/index.html#regexp_replace
Check it:
spark.sql("""
select regexp_replace(col1, ' +', ' ') as col2
from (
select 'text1 text2 text3' as col1
)
""").show(20,False)
Output:
+-----------------+
|col2 |
+-----------------+
|text1 text2 text3|
+-----------------+
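One detail worth noting: the SQL answer's pattern `' +'` collapses runs of space characters only, while the UDF above uses `\\s+`, which also collapses tabs and newlines. A quick plain-Scala comparison of the two patterns (no Spark needed):

```scala
val s = "a  \t  b"

// ' +' matches runs of spaces only; the tab survives.
println(s.replaceAll(" +", " "))    // → a \t b  (tab kept)

// '\s+' matches any whitespace run, tab included.
println(s.replaceAll("\\s+", " "))  // → a b
```

If your data can contain tabs or newlines, prefer `\\s+` in `regexp_replace` as well.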