java.lang.RuntimeException: Error while encoding: java.lang.ArrayIndexOutOfBoundsException



I get an error when I try to join two datasets, one loaded from a database and the other from a CSV file. The error message looks like this:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 14.0 failed 1 times, most recent failure: Lost task 0.0 in stage 14.0 (TID 14, localhost, executor driver): java.lang.RuntimeException: Error while encoding: java.lang.ArrayIndexOutOfBoundsException: 1
staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, targetString), StringType), true, false) AS targetString#205
staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, deviceName), StringType), true, false) AS deviceName#206
staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 2, alarmDetectionCode), StringType), true, false) AS alarmDetectionCode#207
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:292)
at org.apache.spark.sql.SparkSession$$anonfun$4.apply(SparkSession.scala:593)
at org.apache.spark.sql.SparkSession$$anonfun$4.apply(SparkSession.scala:593)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write

It looks like some kind of schema mismatch occurs when the Spark application joins the two datasets, but I don't understand how it happens. My Java code looks like this:

Dataset<Row> result = deviceInfoDataset.join(searchInfo,
        deviceInfoDataset.col("deviceName").equalTo(searchInfo.col("deviceName")));
result.show();

Dataset schemas:

device
+--------+----------+----------+
|ctgry_cd|deviceInfo|deviceName|
+--------+----------+----------+
searchinfo
+------------+----------+------------------+
|targetString|deviceName|alarmDetectionCode|
+------------+----------+------------------+
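
A quick way to double-check the column names and types on both sides before the join (my own addition, assuming both datasets are already built as above) is to print their schemas; both should report plain string columns here:

// Minimal sketch: verify the declared schema of each side before joining.
deviceInfoDataset.printSchema();
searchInfo.printSchema();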

This problem turned out to be more complicated than I expected. In my case there were two causes. 1. My dataset contained an empty row that came from the CSV file. Even then, I could create and show the dataset with the following code:

SparkSession ss = sparkContextManager.createThreadLocalSparkSession(functionId);
JavaSparkContext jsc = new JavaSparkContext(ss.sparkContext());
// Build an all-string, nullable schema from the CSV column names.
List<StructField> fields = new ArrayList<>();
for (String fieldName : columns) {
    StructField field = DataTypes
            .createStructField(fieldName, DataTypes.StringType, true);
    fields.add(field);
}
StructType schema = DataTypes.createStructType(fields);
// Read the raw CSV rows and turn each String[] into a Row as-is.
List<String[]> tmpContent = LocalFileUtilsCustomize.readCsv(tempPath);
List<Row> content = jsc.parallelize(tmpContent)
        .map(l -> RowFactory.create((Object[]) l))
        .collect();
Dataset<Row> searchInfo = ss.createDataFrame(content, schema);
searchInfo.show();
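
As a side note (my own addition, reusing the tmpContent and columns variables from the snippet above), a simple check like the following would already reveal the rows that do not match the schema width, which is what later breaks the row encoder:

// Hypothetical check, not in my original code: report CSV rows whose column
// count differs from the number of schema columns.
for (int i = 0; i < tmpContent.size(); i++) {
    String[] row = tmpContent.get(i);
    if (row.length != columns.size()) {
        System.out.println("CSV row " + i + " has " + row.length
                + " columns, expected " + columns.size());
    }
}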

But when I tried to join the two datasets and show the result, I got this error. I then tried removing the blank row, but the error remained. At the very least, I realized that even with "nullable=true" set, I have to make sure every CSV row has the same number of columns as the schema. So the solution to this problem looks like this:

SparkSession ss = sparkContextManager.createThreadLocalSparkSession(functionId);
JavaSparkContext jsc = new JavaSparkContext(ss.sparkContext());
// Build an all-string, nullable schema from the CSV column names.
List<StructField> fields = new ArrayList<>();
for (String fieldName : columns) {
    StructField field = DataTypes
            .createStructField(fieldName, DataTypes.StringType, true);
    fields.add(field);
}
StructType schema = DataTypes.createStructType(fields);
List<String[]> tmpContent = LocalFileUtilsCustomize.readCsv(tempPath);
List<Row> content = new ArrayList<>();
for (String[] s : tmpContent) {
    // Skip blank lines (they parse to a single empty cell).
    if (s.length == 0 || s[0].isEmpty()) {
        continue;
    }
    Row r;
    if (s.length < columns.size()) {
        // Pad short rows with nulls so every Row matches the schema width.
        String[] tmpS = new String[columns.size()];
        System.arraycopy(s, 0, tmpS, 0, s.length);
        r = RowFactory.create((Object[]) tmpS);
    } else {
        r = RowFactory.create((Object[]) s);
    }
    content.add(r);
}
Dataset<Row> searchInfo = ss.createDataFrame(content, schema);
searchInfo.show();
Dataset<Row> result = deviceInfoDataset.join(searchInfo,
        deviceInfoDataset.col("deviceName").equalTo(searchInfo.col("deviceName")));
result.show();
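
As a possible alternative (not part of my original fix, and depending on your Spark version and CSV layout), Spark's built-in CSV reader can apply the same schema and handle malformed lines itself, which avoids building the rows by hand:

// Sketch only: read the same file with Spark's CSV source, reusing the schema
// built above. "DROPMALFORMED" discards rows that do not fit the schema;
// "PERMISSIVE" (the default) keeps them and fills missing columns with null.
Dataset<Row> searchInfoCsv = ss.read()
        .schema(schema)
        .option("header", "false")
        .option("mode", "DROPMALFORMED")
        .csv(tempPath);
searchInfoCsv.show();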
