Loading external jars into spark-notebook fails



I am trying to connect to Redshift from a notebook; so far I have done the following:

Configured the notebook's metadata:

"customDeps": [
"com.databricks:spark-redshift_2.10:3.0.0-preview1",
"com.databricks:spark-avro_2.11:3.2.0",
"com.databricks:spark-csv_2.11:1.5.0"
]

Checked the browser console to make sure these libraries were loaded after restarting the kernel:

ui-logs-1422> [Tue Aug 22 2017 09:46:26 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.m2/repository/com/databricks/spark-avro_2.10/3.0.0/spark-avro_2.10-3.0.0.jar
kernel.js:978 ui-logs-1452> [Tue Aug 22 2017 09:46:26 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.coursier/cache/v1/http/repo1.maven.org/maven2/com/databricks/spark-redshift_2.10/3.0.0-preview1/spark-redshift_2.10-3.0.0-preview1.jar
kernel.js:978 ui-logs-1509> [Tue Aug 22 2017 09:46:26 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.coursier/cache/v1/http/repo1.maven.org/maven2/com/databricks/spark-csv_2.11/1.5.0/spark-csv_2.11-1.5.0.jar
kernel.js:978 ui-logs-1526> [Tue Aug 22 2017 09:46:26 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.coursier/cache/v1/http/repo1.maven.org/maven2/com/databricks/spark-avro_2.11/3.2.0/spark-avro_2.11-3.2.0.jar
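For context, the failing call is an ordinary DataFrameReader load against the com.databricks.spark.redshift source. A minimal sketch of the kind of read involved (assuming a SparkSession bound to spark; the JDBC URL, table name, and tempdir are placeholders, not the real values):

// Hypothetical Redshift read; every connection detail below is a placeholder.
val df = spark.read
  .format("com.databricks.spark.redshift")
  .option("url", "jdbc:redshift://examplecluster:5439/dev?user=xxxx&password=xxxx")
  .option("dbtable", "my_table")
  .option("tempdir", "s3n://example-bucket/tmp")
  .load()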
When I try to load a table, I run into a ClassNotFoundException:
java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.redshift. Please find packages at http://spark.apache.org/third-party-projects.html
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:594)
at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:86)
at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:86)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:325)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
... 63 elided
Caused by: java.lang.ClassNotFoundException: com.databricks.spark.redshift.DefaultSource
at scala.reflect.internal.util.AbstractFileClassLoader.findClass(AbstractFileClassLoader.scala:62)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25$$anonfun$apply$13.apply(DataSource.scala:579)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25$$anonfun$apply$13.apply(DataSource.scala:579)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25.apply(DataSource.scala:579)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25.apply(DataSource.scala:579)
at scala.util.Try.orElse(Try.scala:84)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:579)

Has anyone else run into this issue or solved it?

I have also noticed a similar problem with another dependency. Is something missing from the configuration?

While trying the time-series sample notebook (notebooks/timeseries/Spark-Timeseries.snb.ipynb), note the existing entry in the custom dependencies metadata:

"customDeps": [
"com.cloudera.sparkts % sparkts % 0.3.0"
]

Quickly verified the availability of this package at https://spark-packages.org/package/sryza/spark-timeseries (and updated the metadata to include this line):

"com.cloudera.sparkts:sparkts:0.4.1"

After restarting the kernel, verified that the library was loaded:

ui-logs-337> [Wed Aug 23 2017 09:29:25 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Will fetch these customDeps artifacts:Set(Dependency(com.cloudera.sparkts:sparkts,0.3.0,,Set(),Attributes(,),false,true), Dependency(com.cloudera.sparkts:sparkts,0.4.1,,Set(),Attributes(,),false,true))
kernel.js:978 ui-logs-347> [Wed Aug 23 2017 09:29:37 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.coursier/cache/v1/http/repo1.maven.org/maven2/com/cloudera/sparkts/sparkts/0.4.1/sparkts-0.4.1.jar
Error message -
<console>:69: error: object cloudera is not a member of package com
import com.cloudera.sparkts._
^
<console>:70: error: object cloudera is not a member of package com
import com.cloudera.sparkts.stats.TimeSeriesStatisticalTests

Downloaded a different version of spark-notebook (this one is not from the master branch):

spark-notebook-0.7.0-scala-2.11.8-spark-2.1.1-hadoop-2.7.2
instead of
spark-notebook-0.9.0-SNAPSHOT-scala-2.11.8-spark-2.1.1-hadoop-2.7.2

I also had to make sure the Scala, Spark, and Hadoop versions stayed consistent across the dependencies I configured; in particular, each artifact's Scala binary suffix has to match the notebook's Scala build (sketched below). In this specific example I also had to supply the jar for the Amazon Redshift JDBC driver from the command line, since it is not available in the Maven repository:

export EXTRA_CLASSPATH=RedshiftJDBC4-1.2.7.1003.jar
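As a hedged illustration of the version-matching point above (not verified against this exact setup), the earlier Redshift customDeps block would need every artifact on the _2.11 suffix to match this Scala 2.11.8 build; the versions are kept from the original block, and this assumes 2.11 builds of each artifact are published to Maven Central:

"customDeps": [
  "com.databricks:spark-redshift_2.11:3.0.0-preview1",
  "com.databricks:spark-avro_2.11:3.2.0",
  "com.databricks:spark-csv_2.11:1.5.0"
]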

Hope this helps someone else.

If needed, you can add the jar to the kernel's "env" section via EXTRA_CLASSPATH, as shown below:

cat /usr/local/share/jupyter/kernels/apache_toree_scala/kernel.json
{
  "argv": [
    "/usr/local/share/jupyter/kernels/apache_toree_scala/bin/run.sh",
    "--profile",
    "{connection_file}"
  ],
  "interrupt_mode": "signal",
  "env": {
    "__TOREE_SPARK_OPTS__": "",
    "PYTHONPATH": "/opt/cloudera/parcels/SPARK2/lib/spark2/python:/opt/cloudera/parcels/SPARK2/lib/spark2/python/lib/py4j-0.10.7-src.zip",
    "__TOREE_OPTS__": "",
    "PYTHON_EXEC": "python",
    "SPARK_HOME": "/opt/cloudera/parcels/SPARK2/lib/spark2",
    "DEFAULT_INTERPRETER": "Scala",
    "JAVA_HOME": "/usr/java/latest",
    "EXTRA_CLASSPATH": "/opt/cloudera/parcels/SPARK2/lib/spark2/jars/mysql-connector-java-5.1.15.jar"
  },
  "metadata": {},
  "display_name": "SPARK2/Scala",
  "language": "scala"
}
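A quick way to confirm that the EXTRA_CLASSPATH entry actually reached the kernel is to resolve the driver class from a notebook cell. A minimal check (the class name matches the mysql-connector jar configured above):

// Resolves if the jar was picked up; otherwise throws
// ClassNotFoundException, just like the data source lookup earlier.
Class.forName("com.mysql.jdbc.Driver")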
