I am trying to access Cloud SQL from Dataproc through the Cloud SQL Proxy (without using Hive), using Scala 2.11.12. There are similar questions on SO, but none of them solves the problem I am facing.
I have managed to connect Dataproc to Cloud SQL with spark.master in "local" mode, but I get an exception when using "yarn" mode, so I am definitely missing something.
The application crashes when executing the following:
SparkSession
.builder()
.appName("SomeSparkJob")
.getOrCreate()
The exception I get when submitting the job, thrown by the .getOrCreate() call above:
Exception in thread "main" java.lang.NoSuchFieldError: ASCII
at org.apache.hadoop.yarn.api.records.impl.pb.ApplicationSubmissionContextPBImpl.checkTags(ApplicationSubmissionContextPBImpl.java:287)
at org.apache.hadoop.yarn.api.records.impl.pb.ApplicationSubmissionContextPBImpl.setApplicationTags(ApplicationSubmissionContextPBImpl.java:302)
at org.apache.spark.deploy.yarn.Client$$anonfun$createApplicationSubmissionContext$2.apply(Client.scala:245)
at org.apache.spark.deploy.yarn.Client$$anonfun$createApplicationSubmissionContext$2.apply(Client.scala:244)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.deploy.yarn.Client.createApplicationSubmissionContext(Client.scala:244)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:180)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:183)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:501)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:926)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
at dev.ancor.somedataprocsparkjob.SomeSparkJob$.main(SomeSparkJob.scala:13)
at dev.ancor.somedataprocsparkjob.SomeSparkJob.main(SomeSparkJob.scala)
The question is: why do I get this exception when running in "yarn" mode, and how do I fix it? Thanks!
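In similar stack traces, a NoSuchFieldError: ASCII raised from ApplicationSubmissionContextPBImpl.checkTags is commonly a Guava version clash: Hadoop's YARN client references the old com.google.common.base.CharMatcher.ASCII field, which newer Guava releases removed, so a fat jar that bundles a recent Guava (for example, pulled in transitively by the Cloud SQL socket factory) shadows the cluster's older copy. Whether that is the cause here is an assumption; if it is, one common workaround is to shade Guava with sbt-assembly:

```scala
// build.sbt fragment (sketch, not confirmed against this build): relocate the
// Guava classes bundled in the fat jar so Hadoop/YARN on the cluster keeps
// resolving its own, older Guava. Assumes the sbt-assembly plugin is enabled.
assembly / assemblyShadeRules := Seq(
  ShadeRule.rename("com.google.common.**" -> "shaded.com.google.common.@1").inAll
)
```

After reassembling, the job's own Guava lives under the shaded package and no longer conflicts with the version on the Dataproc classpath.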
As confirmed by Gabe Weiss and David Rabinowitz, we can put the Dataproc cluster and the Cloud SQL instance on a VPC network and just use the private IP. There is no need to use the Cloud SQL Proxy.
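Following that approach, a minimal sketch of what the connection setup could look like once the private IP is routable from the cluster. All concrete values (IP address, database, table, credentials) are placeholder assumptions, and the helper names are illustrative, not from the question:

```scala
// Sketch only: assumes the Dataproc cluster and the Cloud SQL (MySQL) instance
// share a VPC, so the instance's private IP is reachable directly, no proxy.
object CloudSqlPrivateIp {
  // JDBC URL for a MySQL Cloud SQL instance addressed by its private IP.
  def jdbcUrl(privateIp: String, database: String): String =
    s"jdbc:mysql://$privateIp:3306/$database"

  // Options map for Spark's JDBC data source, used as:
  //   spark.read.format("jdbc").options(jdbcOptions(...)).load()
  def jdbcOptions(privateIp: String, database: String, table: String,
                  user: String, password: String): Map[String, String] =
    Map(
      "url"      -> jdbcUrl(privateIp, database),
      "dbtable"  -> table,
      "user"     -> user,
      "password" -> password,
      "driver"   -> "com.mysql.cj.jdbc.Driver"
    )
}
```

With this, the Spark job reads the table over plain JDBC against the private IP, and the same options map works in both "local" and "yarn" modes.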