pyspark: ship jar dependency with spark-submit

I wrote a pyspark script that reads two json files, coGroups them, and sends the result to an elasticsearch cluster; everything works (mostly) as expected when I run it locally. I downloaded the elasticsearch-hadoop jar file for the org.elasticsearch.hadoop.mr.EsOutputFormat and org.elasticsearch.hadoop.mr.LinkedMapWritable classes, then ran my job with pyspark using the --jars argument, and I could see documents appearing in my elasticsearch cluster.
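
For reference, the write path in such a script looks roughly like this (a minimal sketch; the node address, index name, and sample RDD are placeholder assumptions, not the original script):

from pyspark import SparkContext

sc = SparkContext(appName="es-write-sketch")

# Connection settings for elasticsearch-hadoop; host and index are placeholders.
es_write_conf = {
    "es.nodes": "localhost",
    "es.port": "9200",
    "es.resource": "myindex/mytype",  # target index/type
}

# EsOutputFormat expects (key, value) pairs where the value is a dict.
rdd = sc.parallelize([("doc1", {"field": "value"})])

rdd.saveAsNewAPIHadoopFile(
    path="-",  # ignored by EsOutputFormat, but the API requires a path
    outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
    keyClass="org.apache.hadoop.io.NullWritable",
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
    conf=es_write_conf,
)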

However, when I try to run it on a spark cluster, I get this error:

Traceback (most recent call last):
  File "/root/spark/spark_test.py", line 141, in <module>
    conf=es_write_conf
  File "/root/spark/python/pyspark/rdd.py", line 1302, in saveAsNewAPIHadoopFile
    keyConverter, valueConverter, jconf)
  File "/root/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/root/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.saveAsNewAPIHadoopFile.
: java.lang.ClassNotFoundException: org.elasticsearch.hadoop.mr.LinkedMapWritable
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:274)
    at org.apache.spark.util.Utils$.classForName(Utils.scala:157)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1$$anonfun$apply$9.apply(PythonRDD.scala:611)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1$$anonfun$apply$9.apply(PythonRDD.scala:610)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1.apply(PythonRDD.scala:610)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1.apply(PythonRDD.scala:609)
    at scala.Option.flatMap(Option.scala:170)
    at org.apache.spark.api.python.PythonRDD$.getKeyValueTypes(PythonRDD.scala:609)
    at org.apache.spark.api.python.PythonRDD$.saveAsNewAPIHadoopFile(PythonRDD.scala:701)
    at org.apache.spark.api.python.PythonRDD.saveAsNewAPIHadoopFile(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:745)

This seems pretty clear to me: the elasticsearch-hadoop jar is not available on the workers; so the question is: how do I ship it along with my application? I could use sc.addPyFile for a python dependency, but that won't work with jars, and using the --jars parameter of spark-submit didn't help either.
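
One way to rule out a bad jar (a hypothetical sanity check; the jar path is a placeholder) is to confirm that the missing class is actually inside the file being passed to --jars:

import zipfile

# The class file path mirrors the Java package; the jar path is a placeholder.
jar = zipfile.ZipFile("/path/to/elasticsearch-hadoop.jar")
print("org/elasticsearch/hadoop/mr/LinkedMapWritable.class" in jar.namelist())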

As it turns out, --jars just works; the problem was how I was launching the spark-submit job in the first place; the correct way to execute it is:

./bin/spark-submit <options> scriptname

So the --jars option must be placed before the script:

./bin/spark-submit --jars /path/to/my.jar myscript.py

This becomes obvious when you consider that this is the only way to pass arguments to the script itself, since everything after the script name is used as input arguments for the script:

./bin/spark-submit --jars /path/to/my.jar myscript.py --do-magic=true
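
This is easy to confirm from inside the script itself (a minimal sketch, assuming the invocation above): everything following the script name arrives in sys.argv rather than being consumed by spark-submit:

import sys

# For the command above, this prints ['myscript.py', '--do-magic=true'];
# a misplaced --jars would show up here too instead of reaching spark-submit.
print(sys.argv)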
