How to add a JDBC driver to the classpath when using PySpark



How/where do I install the JDBC driver for Spark SQL? I'm running the all-spark-notebook Docker image and trying to pull some data from a SQL database directly into Spark.

From what I can tell, I need to include the driver on my classpath; I'm just not sure how to do that from PySpark.

from pyspark.sql import SparkSession
spark = SparkSession \
    .builder \
    .master("local") \
    .appName("Python Spark SQL basic example") \
    .getOrCreate()
jdbcDF = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:postgresql:dbserver") \
    .option("dbtable", "jdbc:postgresql:dbserver") \
    .load()
---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-2-f3b08ff6d117> in <module>()
      2 spark = SparkSession     .builder     .master("local")     .appName("Python Spark SQL basic example")     .getOrCreate()
      3 
----> 4 jdbcDF = spark.read     .format("jdbc")     .option("url", "jdbc:postgresql:dbserver")     .option("dbtable", "jdbc:postgresql:dbserver")     .load()
/usr/local/spark/python/pyspark/sql/readwriter.py in load(self, path, format, schema, **options)
    163             return self._df(self._jreader.load(self._spark._sc._jvm.PythonUtils.toSeq(path)))
    164         else:
--> 165             return self._df(self._jreader.load())
    166 
    167     @since(1.4)
/usr/local/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1131         answer = self.gateway_client.send_command(command)
   1132         return_value = get_return_value(
-> 1133             answer, self.gateway_client, self.target_id, self.name)
   1134 
   1135         for temp_arg in temp_args:
/usr/local/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()
/usr/local/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    317                 raise Py4JJavaError(
    318                     "An error occurred while calling {0}{1}{2}.\n".
--> 319                     format(target_id, ".", name), value)
    320             else:
    321                 raise Py4JError(
Py4JJavaError: An error occurred while calling o36.load.
: java.sql.SQLException: No suitable driver
    at java.sql.DriverManager.getDriver(DriverManager.java:315)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:306)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)

To include the driver for PostgreSQL you can do the following:

from pyspark.conf import SparkConf
conf = SparkConf()  # create the configuration
conf.set("spark.jars", "/path/to/postgresql-connector-java-someversion-bin.jar")  # set the spark.jars
...
# feed the configuration to the session when building it
spark = SparkSession.builder \
    .config(conf=conf) \
    .master("local") \
    .appName("Python Spark SQL basic example") \
    .getOrCreate()
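
If the jar is on the classpath but you still see "No suitable driver", it can also help to name the driver class explicitly when reading. A minimal sketch; the url, table, and credentials below are placeholders:

# Explicitly naming the driver class avoids the DriverManager lookup failing.
# All connection values below are placeholders.
jdbcDF = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:postgresql://localhost:5432/mydb") \
    .option("dbtable", "public.mytable") \
    .option("user", "myuser") \
    .option("password", "mypassword") \
    .option("driver", "org.postgresql.Driver") \
    .load()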

Now, since you're using Docker, I guess you'll have to mount the folder that contains the driver jar and refer to the mounted folder (e.g., see: How to mount a host directory in a Docker container).
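
For example, something along these lines when starting the container (a sketch; the host path is an assumption, and /home/jovyan is the notebook user's home in the stock jupyter/all-spark-notebook image):

docker run -p 8888:8888 -v /path/on/host/jars:/home/jovyan/jars jupyter/all-spark-notebook

spark.jars would then point at /home/jovyan/jars/postgresql-connector-java-someversion-bin.jar inside the container.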

Hope this helps, good luck!

Edit: A different approach would be to pass the --driver-class-path argument when using spark-submit, like this:

spark-submit --driver-class-path=path/to/postgresql-connector-java-someversion-bin.jar file_to_run.py

but I'm guessing that's not how you're going to run this.
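
In a notebook you can get the same effect by setting PYSPARK_SUBMIT_ARGS before the first SparkContext is created, which PySpark reads when it launches the JVM. A sketch of that approach (not part of the original answer; the jar path is a placeholder):

import os
from pyspark.sql import SparkSession

# Must run before any SparkContext/SparkSession exists in the process.
# The jar path is a placeholder; the value must end with "pyspark-shell".
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--driver-class-path /path/to/postgresql-connector-java-someversion-bin.jar "
    "--jars /path/to/postgresql-connector-java-someversion-bin.jar pyspark-shell"
)

spark = SparkSession.builder.master("local").getOrCreate()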

Putting the driver on the pyspark path works, but the proper way to do it is to add the following lines:

conf = pyspark.SparkConf().setAll([('spark.executor.id', 'driver'), 
                               ('spark.app.id', 'local-1631738601802'), 
                               ('spark.app.name', 'PySparkShell'), 
                               ('spark.driver.port', '32877'), 
                               ('spark.sql.warehouse.dir', 'file:/home/data_analysis_tool/spark-warehouse'), 
                               ('spark.driver.host', 'localhost'), 
                               ('spark.sql.catalogImplementation', 'hive'), 
                               ('spark.rdd.compress', 'True'), 
                               ('spark.driver.bindAddress', 'localhost'), 
                               ('spark.serializer.objectStreamReset', '100'), 
                               ('spark.master', 'local[*]'), 
                               ('spark.submit.pyFiles', ''), 
                               ('spark.app.startTime', '1631738600836'), 
                               ('spark.submit.deployMode', 'client'), 
                               ('spark.ui.showConsoleProgress', 'true'),
                               ('spark.driver.extraClassPath','/tmp/postgresql-42.2.23.jar')])

Note the following line:

('spark.driver.extraClassPath','/tmp/postgresql-42.2.23.jar')

And here is the whole code:

import psycopg2
import pandas as pd
import pyspark
from pyspark.sql import SparkSession
from sqlalchemy import create_engine
import qgrid
#appName = "PySpark PostgreSQL Example - via psycopg2"
#master = "local"
#spark = SparkSession.builder.master(master).appName(appName).getOrCreate()

conf = pyspark.SparkConf().setAll([('spark.executor.id', 'driver'), 
                                   ('spark.app.id', 'local-1631738601802'), 
                                   ('spark.app.name', 'PySparkShell'), 
                                   ('spark.driver.port', '32877'), 
                                   ('spark.sql.warehouse.dir', 'file:/home/data_analysis_tool/spark-warehouse'), 
                                   ('spark.driver.host', 'localhost'), 
                                   ('spark.sql.catalogImplementation', 'hive'), 
                                   ('spark.rdd.compress', 'True'), 
                                   ('spark.driver.bindAddress', 'localhost'), 
                                   ('spark.serializer.objectStreamReset', '100'), 
                                   ('spark.master', 'local[*]'), 
                                   ('spark.submit.pyFiles', ''), 
                                   ('spark.app.startTime', '1631738600836'), 
                                   ('spark.submit.deployMode', 'client'), 
                                   ('spark.ui.showConsoleProgress', 'true'),
                                   ('spark.driver.extraClassPath','/tmp/postgresql-42.2.23.jar')])

sc = pyspark.SparkContext(conf=conf)
sc.getConf().getAll()
sparkSession = SparkSession(sc)
sparkDataFrame = sparkSession.read.format("jdbc") \
    .options(
        url="jdbc:postgresql://localhost:5432/Database",
        dbtable="test_features_3",
        user="database_user",
        password="Pa$$word").load()
print(sparkDataFrame.count())
sc.stop()
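
Note that most of the entries in that setAll call are simply the runtime defaults of an already-running shell (app id, ports, timestamps) and don't need to be copied. A minimal sketch that keeps only what matters for the JDBC driver (the jar path is an example, as above):

import pyspark
from pyspark.sql import SparkSession

# Only the driver classpath needs to be set for the JDBC driver to be found.
# The jar path is an example; point it at your local copy of the driver.
conf = pyspark.SparkConf() \
    .setMaster("local[*]") \
    .set("spark.driver.extraClassPath", "/tmp/postgresql-42.2.23.jar")

spark = SparkSession.builder.config(conf=conf).getOrCreate()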
