Serializing a table to nested JSON using Apache Spark

I have a set of records like the example below:

+---------+-------------+----------+
|ACCOUNTNO|VEHICLENUMBER|CUSTOMERID|
+---------+-------------+----------+
| 10003014|    MH43AJ411|  20000000|
| 10003014|    MH43AJ411|  20000001|
| 10003015|   MH12GZ3392|  20000002|
+---------+-------------+----------+

I want to parse it into JSON, which should look like this:

{
  "ACCOUNTNO": 10003014,
  "VEHICLE": [
    { "VEHICLENUMBER": "MH43AJ411", "CUSTOMERID": 20000000 },
    { "VEHICLENUMBER": "MH43AJ411", "CUSTOMERID": 20000001 }
  ],
  "ACCOUNTNO": 10003015,
  "VEHICLE": [
    { "VEHICLENUMBER": "MH12GZ3392", "CUSTOMERID": 20000002 }
  ]
}

I have written a program, but it does not produce this output.

package com.report.pack1.spark

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql._

object sqltojson {
  def main(args: Array[String]) {
    System.setProperty("hadoop.home.dir", "C:/winutil/")
    val conf = new SparkConf().setAppName("SQLtoJSON").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._
    val jdbcSqlConnStr = "jdbc:sqlserver://192.168.70.88;databaseName=ISSUER;user=bhaskar;password=welcome123;"
    val jdbcDbTable = "[HISTORY].[TP_CUSTOMER_PREPAIDACCOUNTS]"
    val jdbcDF = sqlContext.read.format("jdbc").options(Map("url" -> jdbcSqlConnStr, "dbtable" -> jdbcDbTable)).load()
    jdbcDF.registerTempTable("tp_customer_account")
    val res01 = sqlContext.sql("SELECT ACCOUNTNO, VEHICLENUMBER, CUSTOMERID FROM tp_customer_account GROUP BY ACCOUNTNO, VEHICLENUMBER, CUSTOMERID ORDER BY ACCOUNTNO")
    res01.coalesce(1).write.json("D:/res01.json")
  }
}

How can I serialize it in the given format? Thanks in advance!

You can use struct and groupBy to get the desired result. Here is the code for the same; I have commented it wherever needed.

//struct and collect_list come from org.apache.spark.sql.functions;
//spark-shell imports them (and the implicits needed for toDF) automatically,
//in a standalone application add them explicitly
import org.apache.spark.sql.functions.{collect_list, struct}

val df = Seq(
  (10003014, "MH43AJ411", 20000000),
  (10003014, "MH43AJ411", 20000001),
  (10003015, "MH12GZ3392", 20000002)
).toDF("ACCOUNTNO", "VEHICLENUMBER", "CUSTOMERID")
df.show
//output
//+---------+-------------+----------+
//|ACCOUNTNO|VEHICLENUMBER|CUSTOMERID|
//+---------+-------------+----------+
//| 10003014|    MH43AJ411|  20000000|
//| 10003014|    MH43AJ411|  20000001|
//| 10003015|   MH12GZ3392|  20000002|
//+---------+-------------+----------+
//create a struct column, then group by ACCOUNTNO and finally convert the DF to JSON
df.withColumn("VEHICLE", struct("VEHICLENUMBER", "CUSTOMERID")).
  select("VEHICLE", "ACCOUNTNO").             //only select required columns
  groupBy("ACCOUNTNO").
  agg(collect_list("VEHICLE").as("VEHICLE")). //for the same group create a list of vehicles
  toJSON.                                     //convert to json
  show(false)
//output
//+------------------------------------------------------------------------------------------------------------------------------------------+
//|value                                                                                                                                     |
//+------------------------------------------------------------------------------------------------------------------------------------------+
//|{"ACCOUNTNO":10003014,"VEHICLE":[{"VEHICLENUMBER":"MH43AJ411","CUSTOMERID":20000000},{"VEHICLENUMBER":"MH43AJ411","CUSTOMERID":20000001}]}|
//|{"ACCOUNTNO":10003015,"VEHICLE":[{"VEHICLENUMBER":"MH12GZ3392","CUSTOMERID":20000002}]}                                                   |
//+------------------------------------------------------------------------------------------------------------------------------------------+
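
If you prefer to stay in SQL, as in your question, the same aggregation can also be expressed there. This is a minimal sketch, assuming the tp_customer_account temp table registered in your code and a Spark version where collect_list works from a plain SQLContext (older 1.x releases need a HiveContext for it):

//same grouping expressed in Spark SQL against the registered temp table;
//struct(...) builds the nested VEHICLE object, collect_list(...) gathers one array per account
val res02 = sqlContext.sql("""
  SELECT ACCOUNTNO,
         collect_list(struct(VEHICLENUMBER, CUSTOMERID)) AS VEHICLE
  FROM tp_customer_account
  GROUP BY ACCOUNTNO
  ORDER BY ACCOUNTNO
""")
res02.toJSON.show(false)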

You can also write this dataframe to a file using the same write statement you mentioned in the question.
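
For example (a minimal sketch, reusing the df built above; the output path D:/res01_nested.json is just an illustrative choice):

//write the grouped result directly as JSON instead of showing it;
//coalesce(1) keeps a single part file, as in the question's code
df.withColumn("VEHICLE", struct("VEHICLENUMBER", "CUSTOMERID")).
  select("VEHICLE", "ACCOUNTNO").
  groupBy("ACCOUNTNO").
  agg(collect_list("VEHICLE").as("VEHICLE")).
  coalesce(1).
  write.json("D:/res01_nested.json")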
