Spark: com.mysql.jdbc.Driver does not allow create table as select



When I try to save a DataFrame to a MySQL database from Spark, I get the following error:

Py4JJavaError: An error occurred while calling o41.saveAsTable.
: java.lang.RuntimeException: com.mysql.jdbc.Driver does not allow create table as select.
    at scala.sys.package$.error(package.scala:27)
    at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:242)
    at org.apache.spark.sql.hive.execution.CreateMetastoreDataSourceAsSelect.run(commands.scala:218)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:54)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:54)
    at org.apache.spark.sql.execution.ExecutedCommand.execute(commands.scala:64)
    at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:1099)
    at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:1099)
    at org.apache.spark.sql.DataFrame.saveAsTable(DataFrame.scala:1121)
    at org.apache.spark.sql.DataFrame.saveAsTable(DataFrame.scala:1091)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:745)
My Python code:
res.saveAsTable(tableName='test.provider_phones', source='com.mysql.jdbc.Driver',driver='com.mysql.jdbc.Driver', mode='append', url='jdbc:mysql://host.amazonaws.com:port/test?user=user&password=pass')

This happens whether or not the table already exists.

I am using Spark 1.3.1.

You can use the DataFrame methods createJDBCTable(url: String, table: String, allowExisting: Boolean) and insertIntoJDBC(url: String, table: String, overwrite: Boolean) instead:

http://www.sparkexpert.com/2015/04/17/save-apache-spark-dataframe-to-database/

Unfortunately, these methods are not available in PySpark 1.3.1. My workaround was to switch to Scala and use DataFrame.insertIntoJDBC.
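A minimal sketch of that Scala workaround, using the Spark 1.3 DataFrame API. The app name, staging query, and table name are placeholders; the JDBC URL mirrors the one in the question. Note that insertIntoJDBC appends to an existing table, while createJDBCTable can create it first:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object SaveToMySql {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SaveToMySql"))
    val sqlContext = new SQLContext(sc)

    // Hypothetical DataFrame to persist; replace with your own source.
    val res = sqlContext.sql("SELECT * FROM provider_phones_staging")

    // Credentials embedded in the URL, as in the original question.
    val url = "jdbc:mysql://host.amazonaws.com:port/test?user=user&password=pass"

    // Create the table from the DataFrame schema if it does not exist yet
    // (allowExisting = true makes this a no-op when it already does).
    res.createJDBCTable(url, "provider_phones", allowExisting = true)

    // Append the rows to the existing table (overwrite = false).
    res.insertIntoJDBC(url, "provider_phones", overwrite = false)

    sc.stop()
  }
}
```

This needs the MySQL Connector/J jar on the driver and executor classpaths (e.g. via --jars). Both methods were deprecated in Spark 1.4 in favor of the DataFrameWriter jdbc API, so this sketch applies specifically to 1.3.x.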
