How to handle variable JSON field names in Spark



I have a JSON log file (newline-delimited JSON) and I need a Spark struct type for it, but in my txt file the first field name of every JSON object is different. What can I do?

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, struct, to_json}
import org.apache.spark.sql.types._

// Schema for one element of the catalog_objects array.
val elementSchema = new StructType()
  .add("name", StringType, true)
  .add("object_type", StringType, true)
  .add("privilege", StringType, true)

// Schema for the inner audit record. catalog_objects is an array of
// elementSchema structs in the data, so it needs an ArrayType here.
val simpleSchema = new StructType()
  .add("authorization_failure", StringType, true)
  .add("catalog_objects", ArrayType(elementSchema), true)
  .add("impersonator", StringType, true)
  .add("network_address", StringType, true)
  .add("query_id", StringType, true)
  .add("session_id", StringType, true)
  .add("sql_statement", StringType, true)
  .add("start_time", StringType, true)
  .add("statement_type", StringType, true)
  .add("status", StringType, true)
  .add("user", StringType, true)

val anaSchema = new StructType()
  .add("saasd", StringType, true) // unused below

val config = new SparkConf()
config.set("spark.sql.shuffle.partitions", "300")

val spark = SparkSession.builder().config(config).master("local[2]")
  .appName("Example")
  .getOrCreate()

val dataframe = spark.read
  .json("/home/ogn/denemeler/big_data/impala_audit_spark/file/testa.txt")
dataframe.printSchema()

// Repack all inferred columns into a single JSON string column.
val df = dataframe.select(to_json(struct(dataframe.columns.map(col): _*)).alias("all"))

Expected

One column per field:

authorization_failure|catalog_objects|impersonator|network_address|query_id|session_id|sql_statement|start_time|statement_type|status|user|

testa.txt holds nearly 3M JSON entries like these in a single file:

{"1648039261379":{"query_id":"x","session_id":"da40931781b4b8ed:978bb8edb9177dbd","start_time":"2022-03-23 15:41:01.234826","authorization_failure":false,"status":"","user":"x","impersonator":null,"statement_type":"QUERY","network_address":"x","sql_statement":"y","catalog_objects":[{"name":"_impala_builtins","object_type":"DATABASE","privilege":"VIEW_METADATA"},{"name":"s","object_type":"TABLE","privilege":"SELECT"}]}}
{"1648039261510":{"query_id":"x","session_id":"344247956fada236:7d9c0930b7c51b9a","start_time":"2022-03-23 15:41:01.507023","authorization_failure":false,"status":"","user":"x","impersonator":null,"statement_type":"USE","network_address":"x","sql_statement":"t","catalog_objects":[{"name":"g","object_type":"DATABASE","privilege":"ANY"}]}}

Because the top-level field name changes on every line, a fixed schema cannot be applied directly; strip that outer level first.

Step 1: Read the JSON file as plain text with textFile:

val ds: Dataset[String] = spark.read.textFile("testa.txt")

Step 2: Remove the first JSON level with regexp_extract. You could also parse the JSON string properly (see the sketch after this step), but I think this approach is faster.

import spark.implicits._
import org.apache.spark.sql.functions.regexp_extract

val ds2: Dataset[String] = ds
  .withColumn("value", regexp_extract('value, """\{.*:(\{.*\})\}""", 1))
  .as[String]
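
If you would rather parse the JSON properly instead of using the regex (and keep the timestamp key), here is a minimal sketch of that alternative. It assumes Spark 2.2+, where from_json accepts a MapType, and reuses simpleSchema from the question (with catalog_objects as an array); the event_time/event column names are just illustrative:

import org.apache.spark.sql.functions.{explode, from_json}
import org.apache.spark.sql.types.{MapType, StringType}

// Parse each line as a one-entry map: varying timestamp key -> audit record.
val parsed = ds.select(from_json('value, MapType(StringType, simpleSchema)).alias("m"))

// Exploding a map column yields one row per entry, with the key and
// value aliased here as event_time and event.
val flat = parsed
  .select(explode('m).as(Seq("event_time", "event")))
  .select($"event_time", $"event.*")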

Step 3: Parse the strings into a DataFrame:

val df3: DataFrame = spark.read.json(ds2)

df3 now has this schema:

root
|-- authorization_failure: boolean (nullable = true)
|-- catalog_objects: array (nullable = true)
|    |-- element: struct (containsNull = true)
|    |    |-- name: string (nullable = true)
|    |    |-- object_type: string (nullable = true)
|    |    |-- privilege: string (nullable = true)
|-- impersonator: string (nullable = true)
|-- network_address: string (nullable = true)
|-- query_id: string (nullable = true)
|-- session_id: string (nullable = true)
|-- sql_statement: string (nullable = true)
|-- start_time: string (nullable = true)
|-- statement_type: string (nullable = true)
|-- status: string (nullable = true)
|-- user: string (nullable = true)
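
From there, the one-column-per-field layout shown under "Expected" is just a select on df3 (a sketch; the column list is taken from the schema above):

val result = df3.select(
  "authorization_failure", "catalog_objects", "impersonator",
  "network_address", "query_id", "session_id", "sql_statement",
  "start_time", "statement_type", "status", "user")
result.show(truncate = false)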
