I have about 30 GB of data in Cloud Storage that I would like to query with Apache Hive from a Dataproc cluster. What is the best strategy for accessing this data? Is the best approach to copy the data to my master node via gsutil and access it from there, or can I access it directly in Cloud Storage? If the latter, how do I specify its location in the Spark CLI? Can I specify
location 'gs://<bucketname>'
when I run
CREATE EXTERNAL TABLE
?
You should be able to create an external table that points directly at your data in Cloud Storage. This works for both Hive and Spark SQL, and in many cases it is probably the best strategy.
Here is an example based on a public dataset in Cloud Storage:
CREATE EXTERNAL TABLE natality_csv (
source_year BIGINT, year BIGINT, month BIGINT, day BIGINT, wday BIGINT,
state STRING, is_male BOOLEAN, child_race BIGINT, weight_pounds FLOAT,
plurality BIGINT, apgar_1min BIGINT, apgar_5min BIGINT,
mother_residence_state STRING, mother_race BIGINT, mother_age BIGINT,
gestation_weeks BIGINT, lmp STRING, mother_married BOOLEAN,
mother_birth_state STRING, cigarette_use BOOLEAN, cigarettes_per_day BIGINT,
alcohol_use BOOLEAN, drinks_per_week BIGINT, weight_gain_pounds BIGINT,
born_alive_alive BIGINT, born_alive_dead BIGINT, born_dead BIGINT,
ever_born BIGINT, father_race BIGINT, father_age BIGINT,
record_weight BIGINT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 'gs://public-datasets/natality/csv';
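
To address the Spark CLI part of your question: once the external table exists in the Hive metastore, you can query it from spark-shell on the master node. A minimal sketch, assuming Hive support is enabled in your Spark build (it is on Dataproc by default):

// In spark-shell on the Dataproc master; natality_csv is the table defined above
spark.sql("SELECT COUNT(*) FROM natality_csv").show()

The same SELECT also works unchanged from the hive CLI or beeline, since both resolve natality_csv through the shared metastore and read the files directly from gs://.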
Admittedly, based on the comments on your question, I'm not sure whether I'm missing another part of what you're asking.