Left join of two DataFrames fails: overloaded method join cannot be applied to (org.apache.spark.sql.Dataset, org.apache.spark.sql.Column, String)



I can read both DataFrames, but joining them gives me an error. The same join works when I run it in a notebook.

val s3Reader = new S3Reader(new S3Configuration, sparkSession, "mece_gaia_gaia_property_mapping")
val geoFeaturesPropertyDF = s3Reader.get(StorageFormat.PARQUET, "s3n:" + giNewBucket + geoInsightsPath + "/properties.parquet")
val meceGaiaGaia = s3Reader.get(StorageFormat.PARQUET, "s3:" + outputBucket + gaiaMeceGaiaPropertiesMappingPath)
val meceGaiaGaiaProperties = geoFeaturesPropertyDF.join(meceGaiaGaia, meceGaiaGaia("gaia_id") === geoFeaturesPropertyDF("gaia_id"), "left")

But when I join them, I get this error:

error: overloaded method value join with alternatives:
[ERROR]   (right: org.apache.spark.sql.Dataset[_],joinExprs: org.apache.spark.sql.Column,joinType: String)org.apache.spark.sql.DataFrame <and>
[ERROR]   (right: org.apache.spark.sql.Dataset[_],usingColumns: Seq[String],joinType: String)org.apache.spark.sql.DataFrame
[ERROR]  cannot be applied to (org.apache.spark.sql.Dataset, org.apache.spark.sql.Column, String)
[ERROR]             .join(meceGaiaGaia, meceGaiaGaia("gaia_id") === geoFeaturesPropertyDF("gaia_id"), "left")
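The notable detail in the message is that the first argument is reported as a bare `org.apache.spark.sql.Dataset` with no type parameter, while both `join` overloads expect `Dataset[_]`. A raw (unparameterized) `Dataset` typically leaks out of Java-declared code, which suggests `S3Reader.get` (an in-house helper, not part of Spark) is the culprit. A hedged sketch of a workaround, assuming that diagnosis, is to pin the result down to `Dataset[Row]` (i.e. `DataFrame`) before joining:

```scala
// Sketch, assuming S3Reader.get is Java-declared and returns a raw Dataset.
// Casting to Dataset[Row] (aliased as DataFrame) lets the three-argument
// join overload resolve. S3Reader/StorageFormat and the path variables are
// from the original question, not Spark APIs.
import org.apache.spark.sql.{DataFrame, Dataset, Row}

val geoFeaturesPropertyDF: DataFrame =
  s3Reader.get(StorageFormat.PARQUET,
               "s3n:" + giNewBucket + geoInsightsPath + "/properties.parquet")
    .asInstanceOf[Dataset[Row]]

val meceGaiaGaia: DataFrame =
  s3Reader.get(StorageFormat.PARQUET,
               "s3:" + outputBucket + gaiaMeceGaiaPropertiesMappingPath)
    .asInstanceOf[Dataset[Row]]

// Both sides are now statically DataFrames, so this compiles.
val meceGaiaGaiaProperties = geoFeaturesPropertyDF.join(
  meceGaiaGaia,
  meceGaiaGaia("gaia_id") === geoFeaturesPropertyDF("gaia_id"),
  "left")
```

This only changes the static type the Scala compiler sees; the underlying data is unaffected.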

Their schemas:

meceGaiaGaia schema -

org.apache.spark.sql.types.StructType = StructType(StructField(gaia_id,StringType,true), StructField(short_name,StringType,true), StructField(long_name,StringType,true), StructField(category,StringType,true), StructField(expe_property_id,IntegerType,true), StructField(airport_code,StringType,true), StructField(mece_gaia_id,StringType,true), StructField(mece_short_name,StringType,true), StructField(mece_long_name,StringType,true), StructField(mece_category,StringType,true), StructField(province_id,StringType,true), StructField(province,StringType,true), StructField(country_id,StringType,true), StructField(country,StringType,true), StructField(continent,StringType,true), StructField(super_region,StringType,true))

geoFeaturesPropertyDF schema -

org.apache.spark.sql.types.StructType = StructType(StructField(gaia_id,StringType,true), StructField(source_id,StringType,true), StructField(type,StringType,true), StructField(status,StringType,true), StructField(creation_time,StringType,true), StructField(update_time,StringType,true), StructField(attributes,MapType(StringType,StringType,true),true), StructField(ancestors_id,StringType,true), StructField(hierarchy,ArrayType(MapType(StringType,StringType,true),true),true), StructField(categories,ArrayType(StringType,true),true), StructField(classifiers_set,MapType(StringType,ArrayType(MapType(StringType,StringType,true),true),true),true), StructField(short_name,StringType,true), StructField(long_name,StringType,true), StructField(ancestors,ArrayType(StringType,true),true), StructFi

Thanks for your help.

val meceGaiaGaiaProperties =
  geoFeaturesPropertyDF.join(meceGaiaGaia,
    geoFeaturesPropertyDF("gaia_id") === meceGaiaGaia("gaia_id"),
    "left")

Updated the code to use sparkSession.read.parquet instead of S3Reader, and this works:

val geoFeaturesPropertyDF = sparkSession.read.parquet("s3n:" + giNewBucket + geoInsightsPath + "/properties.parquet")
val meceGaiaGaia = sparkSession.read.parquet("s3:" + outputBucket + gaiaMeceGaiaPropertiesMappingPath)
val meceGaiaGaiaProperties = geoFeaturesPropertyDF.join(meceGaiaGaia, meceGaiaGaia("gaia_id") === geoFeaturesPropertyDF("gaia_id"), "left")
