I'm facing a problem and would appreciate your help. My task is to convert a JSON file into a Dataset so that it can be loaded into HIVE.
SparkSession spark1 = SparkSession
.builder()
.appName("File_Validation")
.config("spark.some.config.option", "some-value")
.getOrCreate();
Dataset<Row> df = spark1.read().json("input/sample.json");
df.show();
The code above throws a NullPointerException. I tried another approach (code 2), creating an RDD and passing it to spark1 (the SparkSession):
JavaRDD<String> jsonFile = context.textFile("input/sample.json");
Dataset<Row> df2 = spark1.read().json(jsonFile);
df2.show();
This code reads the JSON into a different shape, with the header:
+--------------------+
| _corrupt_record|
+--------------------+
with the schema: |-- _corrupt_record: string (nullable = true)
Please help me fix this.
Sample JSON:
{
"user": "gT35Hhhre9m",
"dates": ["2016-01-29", "2016-01-28"],
"status": "OK",
"reason": "some reason",
"content": [{
"foo": 123,
"bar": "val1"
}, {
"foo": 456,
"bar": "val2"
}, {
"foo": 789,
"bar": "val3"
}, {
"foo": 124,
"bar": "val4"
}, {
"foo": 126,
"bar": "val5"
}]
}
Your JSON should be on a single line per object: one self-contained JSON object per line. For example:
{ "property1": 1 }
{ "property1": 2 }
This will be read as a Dataset with 2 objects and one column.
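Before handing a file to Spark, you can sanity-check that each line looks like a self-contained JSON object. This is a minimal sketch with a hypothetical helper (not part of any Spark API); it only balances braces outside of string literals, so it is a quick check rather than a full JSON parser:

```java
public class JsonLinesCheck {
    // Rough check that a line looks like a self-contained JSON object:
    // it must start with '{' and its braces must balance outside of strings.
    public static boolean looksSelfContained(String line) {
        String s = line.trim();
        if (!s.startsWith("{")) return false;
        int depth = 0;
        boolean inString = false;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (inString) {
                if (c == '\\') i++;            // skip the escaped character
                else if (c == '"') inString = false;
            } else if (c == '"') {
                inString = true;
            } else if (c == '{') {
                depth++;
            } else if (c == '}') {
                depth--;
            }
        }
        return depth == 0 && !inString;
    }
}
```

A line from a pretty-printed file, such as `"status": "OK",`, fails this check, which is exactly why Spark puts it into `_corrupt_record`.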
From the documentation: Note that the file that is offered as a json file is not a typical JSON file. Each line must contain a separate, self-contained valid JSON object. As a consequence, a regular multi-line JSON file will most often fail.
And yes, reading the data through SparkSession is the right way, since it infers the schema for you.
You cannot read pretty-printed (formatted) JSON in Spark; your JSON should be on a single line, like this:
{"user": "gT35Hhhre9m","dates": ["2016-01-29", "2016-01-28"],"status": "OK","reason": "some reason","content": [{"foo": 123,"bar": "val1"}, {"foo": 456,"bar": "val2"}, {"foo": 789,"bar": "val3"}, {"foo": 124,"bar": "val4"}, {"foo": 126,"bar": "val5"}]}
Or it can be multiple JSON objects, one per line (JSON Lines):
{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}
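If the source file is a single pretty-printed object like the sample in the question, one workaround is to collapse it to one line before pointing Spark at it. This is a minimal sketch (the class name, method, and output path are hypothetical); it naively joins trimmed lines, so it assumes no string value contains an embedded newline, and it must not be applied to a file that already holds one object per line:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Collectors;

public class JsonFlattener {
    // Collapse a pretty-printed JSON document into a single line
    // by trimming each line and concatenating the results.
    public static String flatten(String multiLineJson) {
        return multiLineJson.lines()
                .map(String::trim)
                .collect(Collectors.joining(""));
    }

    public static void main(String[] args) throws IOException {
        String pretty = Files.readString(Paths.get("input/sample.json"));
        // Write the one-line version next to the original, then read it
        // with spark1.read().json("input/sample-oneline.json").
        Files.writeString(Paths.get("input/sample-oneline.json"), flatten(pretty));
    }
}
```

After this step the file matches the one-object-per-line format Spark's JSON reader expects, and schema inference works as shown above.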