Getting PSQLException: ERROR: syntax error at or near "SELECT" when using a query instead of a table name with Spark JDBC against Postgres



For the following generic SQL:

showTablesSql = """SELECT table_catalog, table_schema, table_name
FROM information_schema.tables
ORDER BY table_schema, table_name"""

When it is submitted through Spark JDBC against PostgreSQL, the following exception occurs:

py4j.protocol.Py4JJavaError: An error occurred while calling o34.load.
: org.postgresql.util.PSQLException: ERROR: syntax error at or near "SELECT"
Position: 15
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2578)

Here is the code being used:

url = f"jdbc:postgresql://{c['db.host']}/{c['db.name']}?user={c['db.user']}&password={c['db.password']}"
print(url)
empDF = spark.read \
    .format("jdbc") \
    .option("url", url) \
    .option("dbtable", showTablesSql) \
    .option("user", c['db.user']) \
    .option("password", c['db.password']) \
    .load()

Here are the stack trace details:

Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
jdbc:postgresql://localhost/bluej?user=bluej&password=mypassword
Traceback (most recent call last):
File "/git/bluej/fusion/python/pointr/bluej/util/sparkmgr.py", line 37, in <module>
tab = readTab(db, tname)
File "/git/bluej/fusion/python/pointr/bluej/util/sparkmgr.py", line 23, in readTab
empDF = spark.read 
File "/shared/spark3/python/pyspark/sql/readwriter.py", line 166, in load
return self._df(self._jreader.load())
File "/shared/spark3/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1285, in __call__
File "/shared/spark3/python/pyspark/sql/utils.py", line 98, in deco
return f(*a, **kw)
File "/shared/spark3/python/lib/py4j-0.10.8.1-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o34.load.
: org.postgresql.util.PSQLException: ERROR: syntax error at or near "SELECT"
Position: 15
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2578)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2313)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:331)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:448)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:369)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:159)
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:109)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:61)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:226)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:339)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:240)
at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:229)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:229)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:179)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:834)

In a comment, @BjarniRagnarsson hinted that dbtable is actually treated as a subquery: Spark resolves the schema by running SELECT * FROM <dbtable> WHERE 1=0, which is exactly why the parser chokes at position 15, right after "SELECT * FROM ". I found more information on this from the well-respected @zero323:

https://stackoverflow.com/a/32629170/1056563

Since dbtable is used as the source of a SELECT statement, it has to be in a form valid for an ordinary SQL query. If you want to use a subquery, you should pass the query in parentheses and provide an alias:

USING org.apache.spark.sql.jdbc
OPTIONS (
url "jdbc:postgresql:dbserver",
dbtable "(SELECT * FROM mytable) tmp"
);
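
Applying that advice to the PySpark reader above gives something like this minimal sketch (the alias tmp is arbitrary; any identifier works):

# Wrap the original query in parentheses and alias it, so that the
# "SELECT * FROM <dbtable> WHERE 1=0" Spark generates is valid SQL.
showTablesSql = """(SELECT table_catalog, table_schema, table_name
FROM information_schema.tables
ORDER BY table_schema, table_name) AS tmp"""

empDF = spark.read \
    .format("jdbc") \
    .option("url", url) \
    .option("dbtable", showTablesSql) \
    .option("user", c['db.user']) \
    .option("password", c['db.password']) \
    .load()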

After changing the SQL into a subquery, I can see that it parses correctly. No data has come back yet, but it is most likely on its way.
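
As an aside, Spark 2.4+ also exposes a query option on the JDBC source that accepts a bare SELECT and does the wrapping internally (query and dbtable are mutually exclusive). A sketch, assuming the same connection settings as above:

# With the query option, the bare SELECT works as-is;
# no parentheses or alias are needed.
showTablesQuery = """SELECT table_catalog, table_schema, table_name
FROM information_schema.tables
ORDER BY table_schema, table_name"""

empDF = spark.read \
    .format("jdbc") \
    .option("url", url) \
    .option("query", showTablesQuery) \
    .option("user", c['db.user']) \
    .option("password", c['db.password']) \
    .load()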
