I am using Spark 2.4.5 running on AWS EMR 5.30.0 with r5.4xlarge instances (16 vCores, 128 GiB memory, EBS-only storage, EBS storage: 256 GiB): 1 master, 1 core, and 30 task nodes.
I start the Spark Thrift Server on the master node, and it is the only job running on the cluster:
sudo /usr/lib/spark/sbin/start-thriftserver.sh \
  --conf spark.blacklist.enabled=true \
  --conf spark.blacklist.stage.maxFailedExecutorsPerNode=4 \
  --conf spark.blacklist.task.maxTaskAttemptsPerNode=3 \
  --conf spark.driver.cores=12 \
  --conf spark.driver.maxResultSize=10g \
  --conf spark.driver.memory=86000M \
  --conf spark.driver.memoryOverhead=10240 \
  --conf spark.kryoserializer.buffer.max=768m \
  --conf spark.rpc.askTimeout=700 \
  --conf spark.sql.broadcastTimeout=800 \
  --conf spark.sql.sources.partitionOverwriteMode=dynamic \
  --conf spark.task.maxFailures=20
Then I launch SQL queries over JDBC, but when many queries are running the UI becomes very slow. I thought that setting spark.driver.cores=12 (the master node has 16) and spark.driver.memory=86000M (it has 128 GB of memory), leaving some headroom so the master node can run other processes (such as the history server), would help, but it is still slow.
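For context, this is roughly how the queries are submitted (the hostname is a placeholder, and I am assuming EMR's default Spark Thrift Server port, 10001):

# connect to the Thrift Server over JDBC and run a query
beeline -u 'jdbc:hive2://<master-public-dns>:10001/default' -e 'SELECT ...'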
So I assume I could tune other settings to make the UI responsive, but I am not sure which ones.
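For example, I was considering bounding how much completed-job state the UI keeps in driver memory. These are standard Spark settings, but the values below are only illustrative, not something I have verified:

# defaults in Spark 2.4 are 1000 / 1000 / 100000 / 1000
spark.ui.retainedJobs 200
spark.ui.retainedStages 200
spark.ui.retainedTasks 20000
spark.sql.ui.retainedExecutions 50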
These are the settings in the cluster's spark-defaults.conf, just for reference:
spark.driver.extraClassPath /usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/goodies/lib/emr-spark-goodies.jar:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/usr/share/aws/emr/s3select/lib/emr-s3-select-spark-connector.jar
spark.driver.extraLibraryPath /usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native
spark.executor.extraClassPath /usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/goodies/lib/emr-spark-goodies.jar:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/usr/share/aws/emr/s3select/lib/emr-s3-select-spark-connector.jar
spark.executor.extraLibraryPath /usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native
spark.eventLog.enabled true
spark.eventLog.dir hdfs:///var/log/spark/apps
spark.history.fs.logDirectory hdfs:///var/log/spark/apps
spark.sql.warehouse.dir hdfs:///user/spark/warehouse
spark.sql.hive.metastore.sharedPrefixes com.amazonaws.services.dynamodbv2
spark.yarn.historyServer.address :18080
spark.history.ui.port 18080
spark.shuffle.service.enabled true
spark.yarn.dist.files /etc/spark/conf/hive-site.xml
spark.driver.extraJavaOptions -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError='kill -9 %p'
spark.dynamicAllocation.enabled true
spark.blacklist.decommissioning.enabled true
spark.blacklist.decommissioning.timeout 1h
spark.resourceManager.cleanupExpiredHost true
spark.stage.attempt.ignoreOnDecommissionFetchFailure true
spark.decommissioning.timeout.threshold 20
spark.executor.extraJavaOptions -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError='kill -9 %p'
spark.hadoop.yarn.timeline-service.enabled false
spark.yarn.appMasterEnv.SPARK_PUBLIC_DNS $(hostname -f)
spark.files.fetchFailure.unRegisterOutputOnHost true
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version.emr_internal_use_only.EmrFileSystem 2
spark.hadoop.mapreduce.fileoutputcommitter.cleanup-failures.ignored.emr_internal_use_only.EmrFileSystem true
spark.hadoop.fs.s3.getObject.initialSocketTimeoutMilliseconds 2000
spark.sql.parquet.output.committer.class com.amazon.emr.committer.EmrOptimizedSparkSqlParquetOutputCommitter
spark.sql.parquet.fs.optimized.committer.optimization-enabled true
spark.sql.emr.internal.extensions com.amazonaws.emr.spark.EmrSparkSessionExtensions
spark.sql.sources.partitionOverwriteMode dynamic
spark.executor.instances 1
spark.executor.cores 16
spark.driver.memory 2048M
spark.executor.memory 109498M
spark.default.parallelism 32
spark.emr.maximizeResourceAllocation true
The problem was having only 1 core instance: since the logs are stored in HDFS, that single instance became a bottleneck. I added another core instance and things are much better now.
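For reference, a sketch of how the resize can be done from the AWS CLI (the cluster and instance group IDs below are placeholders):

# grow the CORE instance group from 1 to 2 nodes
aws emr modify-instance-groups \
  --cluster-id j-XXXXXXXXXXXXX \
  --instance-groups InstanceGroupId=ig-XXXXXXXXXXXXX,InstanceCount=2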
Another solution could be to save the logs to S3/S3A instead of HDFS, changing these parameters in spark-defaults.conf (making sure they are also changed in the UI's configuration), though some JAR files may need to be added for that to work.
spark.eventLog.dir hdfs:///var/log/spark/apps
spark.history.fs.logDirectory hdfs:///var/log/spark/apps
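For example, pointing both at a bucket instead (the bucket name here is made up; with the s3a:// scheme the hadoop-aws JAR and its AWS SDK dependency must be on the classpath, whereas EMR's native s3:// scheme through EMRFS is available out of the box):

spark.eventLog.dir s3a://my-spark-logs/spark/apps
spark.history.fs.logDirectory s3a://my-spark-logs/spark/apps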