amazon s3 - Spark 2.0 S3 metadata loading hangs on multiple DataFrame reads



We are currently evaluating an upgrade from Spark 1.6 to Spark 2.0, but a very strange bug is blocking the migration.

One of our requirements is to read multiple datasets from S3 and union them together. Loading 50 datasets works fine, but on the 51st load everything hangs while looking up metadata. The hang is not intermittent; it happens consistently.
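A minimal sketch of the load-and-union loop that triggers the hang (the bucket, path layout, and dataset count are illustrative, not our exact job):

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder().appName("s3-avro-union").getOrCreate()

// Hypothetical S3 prefixes; each dataset lives under its own key.
val paths: Seq[String] = (1 to 51).map(i => s"s3://our-bucket/dataset-$i/")

// Each load() resolves the relation, which stats the S3 path first
// (the getFileStatus call visible in the thread dump below).
// The 51st load() never returns from that metadata lookup.
val combined: DataFrame = paths
  .map(p => spark.read.format("com.databricks.spark.avro").load(p))
  .reduce(_ union _)
```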

The data is stored in Avro container files, and we are reading it with spark-avro 3.0.0.

Is there a known answer to this problem?

  • This is unrelated to socket timeouts; none of the socket threads are blocked.

<<main thread dump>>
java.lang.Thread.sleep(Native Method)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.doPauseBeforeRetry(AmazonHttpClient.java:1475)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.pauseBeforeRetry(AmazonHttpClient.java:1439)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:794)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3826)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1015)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:991)
com.amazon.ws.emr.hadoop.fs.s3n.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:212)
sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
com.sun.proxy.$Proxy36.retrieveMetadata(Unknown Source)
com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:780)
org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1428)
com.amazon.ws.emr.hadoop.fs.EmrFileSystem.exists(EmrFileSystem.java:313)
org.apache.spark.sql.execution.datasources.DataSource.hasMetadata(DataSource.scala:289)
org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:324)
org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:132)

It appears that spark-avro exhausts the S3 connection pool by not releasing connections.

https://github.com/databricks/spark-avro/issues/156
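A possible workaround while the leak is unfixed is to raise the S3 connection pool above the number of datasets being read. This sketch assumes the cluster uses EMRFS (as the shaded `com.amazon.ws.emr` classes in the dump suggest) and that its pool size is governed by `fs.s3.maxConnections` with a default of 50, which would line up with the hang starting on the 51st read; verify both against your EMR release before relying on this:

```shell
# fs.s3.maxConnections and its default of 50 are assumptions about EMRFS;
# this only masks the leak, it does not fix spark-avro itself.
spark-submit \
  --conf spark.hadoop.fs.s3.maxConnections=500 \
  --packages com.databricks:spark-avro_2.11:3.0.0 \
  our-job.jar
```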

Latest update