Spark Hive Context - Avro tables with partitions and uppercase field names



For a partitioned Avro Hive table, any field whose name contains uppercase characters in the Avro schema comes back as null. I would like to know whether I am missing a setting/workaround, or whether this is simply a bug in the Hive context.

I have tried adding the following to the DDL:

 WITH SERDEPROPERTIES ('casesensitive'='FieldName')

...and setting spark.sql.caseSensitive to true/false.
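
For reference, this is roughly how the caseSensitive flag was toggled from spark-shell (a minimal sketch; sqlContext here is the HiveContext that spark-shell provides, and neither value changed the result):

// toggle Spark SQL identifier case sensitivity and re-run the query
sqlContext.setConf("spark.sql.caseSensitive", "true")
sqlContext.sql("select * from default.avro_partitions").show
sqlContext.setConf("spark.sql.caseSensitive", "false")
sqlContext.sql("select * from default.avro_partitions").show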

Spark version 1.5.0, Hive version 1.1.0

The problem can be reproduced by running the following DDL in Hive:

-- Hive DDL using partitions
CREATE TABLE avro_partitions (Field string)
PARTITIONED BY (part string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES ('avro.schema.literal'=
  '{ "type":"record", "name":"avro_partitions", "namespace":"default", "fields":[ {"name":"Field", "type":"string"} ] }');
INSERT INTO avro_partitions PARTITION (part='01') VALUES('test');
-- Hive DDL without partitions
CREATE TABLE avro_no_partitions (Field string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES ('avro.schema.literal'=
  '{ "type":"record", "name":"avro_no_partitions", "namespace":"default", "fields":[ {"name":"Field", "type":"string"} ] }');
INSERT INTO avro_no_partitions VALUES('test');

...and then selecting from the tables with Spark SQL (spark-shell):

sqlContext.sql("select * from default.avro_partitions").show
+-----+----+
|field|part|
+-----+----+
| null|  01|
+-----+----+
sqlContext.sql("select * from default.avro_no_partitions").show
+-----+
|field|
+-----+
| test|
+-----+

The problem was that avro.schema.literal was specified in TBLPROPERTIES - it should be specified in SERDEPROPERTIES instead:

CREATE TABLE avro_partitions (Field string)
PARTITIONED BY (part string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
WITH SERDEPROPERTIES ('avro.schema.literal'='{ "type":"record", "name":"avro_partitions", "namespace":"default", "fields":[ {"name":"Field", "type":"string"} ] }')
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat';
INSERT INTO avro_partitions PARTITION (part='01') VALUES('test');
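
To confirm the fix, re-running the original query from spark-shell should now return the inserted value in the Field column instead of null (a sketch of the check, not captured output):

sqlContext.sql("select * from default.avro_partitions").show
// expect Field = 'test' for partition part='01', rather than null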

Spark version 1.6.0
