I'm developing an ETL application with PySpark. The implementation is complete, and it works fine when I run it on a subset of my data. However, when I tried it against the whole dataset (2.5 GB of text), I got errors like this:
[Stage 137:============>(793 + 7) / 800][Stage 139:> (0 + 1) / 800]Traceback (most recent call last):
File "c:\spark\python\lib\pyspark.zip\pyspark\java_gateway.py", line 169, in local_connect_and_auth
File "c:\spark\python\lib\pyspark.zip\pyspark\java_gateway.py", line 144, in _do_server_auth
File "c:\spark\python\lib\pyspark.zip\pyspark\serializers.py", line 653, in loads
File "c:\spark\python\lib\pyspark.zip\pyspark\serializers.py", line 690, in read_int
File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\socket.py", line 586, in readinto
return self._sock.recv_into(b)
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "c:\spark\python\lib\pyspark.zip\pyspark\worker.py", line 290, in <module>
File "c:\spark\python\lib\pyspark.zip\pyspark\java_gateway.py", line 172, in local_connect_and_auth
NameError: name '_exception_message' is not defined
19/01/05 10:53:28 ERROR Utils: Uncaught exception in thread stdout writer for C:\Users\username\AppData\Local\Continuum\miniconda3\python.exe
java.net.SocketException: socket already closed
at java.net.TwoStacksPlainSocketImpl.socketShutdown(Native Method)
at java.net.AbstractPlainSocketImpl.shutdownOutput(AbstractPlainSocketImpl.java:580)
at java.net.PlainSocketImpl.shutdownOutput(PlainSocketImpl.java:258)
at java.net.Socket.shutdownOutput(Socket.java:1556)
at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1$$anonfun$apply$2.apply$mcV$sp(PythonRunner.scala:263)
at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1$$anonfun$apply$2.apply(PythonRunner.scala:263)
at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1$$anonfun$apply$2.apply(PythonRunner.scala:263)
at org.apache.spark.util.Utils$.tryLog(Utils.scala:2005)
at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:263)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1992)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:170)
19/01/05 10:53:28 ERROR Executor: Exception in task 797.0 in stage 137.0 (TID 24032)
java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:211)
at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223)
at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:223)
at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:439)
at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:247)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1992)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:170)
19/01/05 10:53:28 ERROR Executor: Exception in task 796.0 in stage 137.0 (TID 24031)
org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:148)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:76)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:86)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:67)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Software caused connection abort: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:134)
at java.io.DataOutputStream.writeInt(DataOutputStream.java:198)
at org.apache.spark.security.SocketAuthHelper.writeUtf8(SocketAuthHelper.scala:96)
at org.apache.spark.security.SocketAuthHelper.authClient(SocketAuthHelper.scala:57)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:143)
... 31 more
19/01/05 10:53:29 ERROR TaskSetManager: Task 797 in stage 137.0 failed 1 times; aborting job
Traceback (most recent call last):
File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 476, in <module>
Main(sys.argv[1:])
File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 471, in __init__
for reportName, report in dataObj.generateReports(sqlContext):
File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 443, in generateReports
report = reportGenerator(sqlContext, commonSchema)
File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 378, in generateByCycleReport
**self.generateStats(contributionsByCycle[cycle])})
File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 424, in generateStats
stats[columnName] = aggregator(self.dataFrames['demographics'][demographicId])
File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 282, in totalContributed
return df.agg({"amount": "sum"}).collect()[0]['sum(amount)'] or 0
File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\site-packages\pyspark\sql\dataframe.py", line 466, in collect
sock_info = self._jdf.collectToPython()
File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\site-packages\py4j\java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\site-packages\pyspark\sql\utils.py", line 63, in deco
return f(*a, **kw)
File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\site-packages\py4j\protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o273.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 797 in stage 137.0 failed 1 times, most recent failure: Lost task 797.0 in stage 137.0 (TID 24032, localhost, executor driver): java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:211)
at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223)
at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:223)
at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:439)
at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:247)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1992)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:170)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3200)
at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3197)
at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:211)
at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223)
at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:223)
at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:439)
at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:247)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1992)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:170)
[Stage 137:============>(793 + 5) / 800][Stage 139:> (0 + 2) / 800]
Note that this is only one instance of the error; the error itself, and where and when it fails, are not consistent. I believe it is related to my project setup rather than the implementation itself. The only thing these errors seem to have in common is ERROR Utils: Uncaught exception in thread stdout writer for C:\Users\username\AppData\Local\Continuum\miniconda3\python.exe.
I don't know why this is happening: there are almost no references to my implementation, and the single stack trace that does touch my code gives the message java.net.SocketException: Connection reset by peer: socket write error, which I don't understand.
I've looked through other StackOverflow questions about PySpark, and while I haven't found one that matches my problem, scalability issues seem to keep coming back to configuration. This is the configuration I see on every run:
spark.driver.memory: 12g
spark.driver.port: 51126
spark.executor.id: driver
spark.driver.maxResultSize: 12g
spark.memory.offHeap.size: 12g
spark.memory.offHeap.enabled: true
spark.executor.memory: 12g
spark.executor.heartbeatInterval: 36000000s
spark.executor.cores: 4
spark.driver.host: <redacted>
spark.rdd.compress: True
spark.network.timeout: 60000000s
spark.serializer.objectStreamReset: 100
spark.app.name: <redacted>
spark.master: local[*]
spark.submit.deployMode: client
spark.app.id: local-1546685579638
spark.memory.fraction: 0
spark.ui.showConsoleProgress: true
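(For reference, the listing above can be reproduced with something like the following minimal sketch; it assumes an already-created SparkContext named sc, which my application has at this point.)

```python
from pyspark import SparkContext

# Dump the effective Spark configuration, sorted for readability.
sc = SparkContext.getOrCreate()
for key, value in sorted(sc.getConf().getAll()):
    print(f"{key}: {value}")
```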
Any help with this issue would be appreciated. Details of my system:
- Python 3.6 (via Anaconda)
- PySpark 2.3.2 (using the bundled Java classes; no local Hadoop)
- PyCharm CE 2018.3.1
- Windows 10 (16 GB RAM, 8 cores)
I see a socket timeout error at the root of this. Try increasing spark.executor.heartbeatInterval to 3600s.
Include the line below in your code right after you define your conf variable, and give it a try. It should work:
conf.set("spark.executor.heartbeatInterval","3600s")
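Put together, the change would look something like this (a sketch assuming a SparkConf-based setup; the app name is a placeholder, and the settings must be applied before the SparkContext is created, since changes after that point are ignored):

```python
from pyspark import SparkConf, SparkContext

# Build the configuration before creating the SparkContext.
conf = SparkConf().setMaster("local[*]").setAppName("etl-app")

# The suggested fix: give the Python workers a longer heartbeat window.
conf.set("spark.executor.heartbeatInterval", "3600s")
# Spark requires spark.network.timeout to be larger than the
# heartbeat interval, so raise it accordingly.
conf.set("spark.network.timeout", "7200s")

sc = SparkContext(conf=conf)
```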