I am using R version 4.1.1 and sparklyr version 1.7.2.
I connect to my Databricks cluster with databricks-connect and am trying to read an Avro file with the following code:
library(sparklyr)
library(dplyr)
sc <- spark_connect(
  method = "databricks",
  spark_home = "my_spark_home_path",
  version = "3.1.1",
  packages = c("avro")
)
df_path <- "s3a://my_s3_path"
df <- spark_read_avro(sc, path = df_path, memory = FALSE)
I also tried adding the package coordinate explicitly:
library(sparklyr)
library(dplyr)
sc <- spark_connect(
  method = "databricks",
  spark_home = "my_spark_home_path",
  version = "3.1.1",
  packages = "org.apache.spark:spark-avro_2.12:3.1.1"
)
df_path <- "s3a://my_s3_path"
df <- spark_read_avro(sc, path = df_path, memory = FALSE)
The Spark connection itself works and I can read Parquet files without issue, but whenever I try to read an Avro file I get:
Error in validate_spark_avro_pkg_version(sc) :
Avro support must be enabled with `spark_connect(..., version = <version>, packages = c("avro", <other package(s)>), ...)` or by explicitly including 'org.apache.spark:spark-avro_2.12:3.1.1-SNAPSHOT' for Spark version 3.1.1-SNAPSHOT in list of packages
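For what it's worth, sparklyr appears to validate the spark-avro coordinate against the version string the cluster itself reports, and under databricks-connect that string can carry a "-SNAPSHOT" suffix. A small diagnostic sketch of this idea (the coordinate in the last comment is copied verbatim from the error message; SNAPSHOT artifacts are generally not published to Maven Central, so it may well not resolve):

library(sparklyr)
sc <- spark_connect(
  method = "databricks",
  spark_home = "my_spark_home_path"
)
# Ask the JVM for the raw version string; sparklyr's avro check is derived
# from this, so a "-SNAPSHOT" suffix here would explain the error above.
invoke(spark_context(sc), "version")
# If it reports "3.1.1-SNAPSHOT", the matching coordinate would be:
# packages = "org.apache.spark:spark-avro_2.12:3.1.1-SNAPSHOT"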
Does anyone know how to solve this?
I found a solution using the sparkavro package:
library(sparklyr)
library(dplyr)
library(sparkavro)
sc <- spark_connect(
  method = "databricks",
  spark_home = "my_spark_home_path"
)
df_path <- "s3a://my_s3_path"
df <- spark_read_avro(
  sc,
  path = df_path,
  name = "my_table_name",
  memory = FALSE
)
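Since memory = FALSE keeps the data in Spark, df comes back as a tbl_spark, so regular dplyr verbs run on the cluster and collect() brings results into R. A short usage sketch (my_table_name is the placeholder view name from the call above, which sparkavro should also register as a temporary table):

library(dplyr)
# dplyr verbs are pushed down to Spark; collect() pulls the result locally
df %>%
  head(10) %>%
  collect()

# the registered view can be reached by name as well
tbl(sc, "my_table_name") %>%
  count() %>%
  collect()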