How to pass a variable to a spark.sql query in PySpark



How do I pass variables to a spark.sql query in PySpark? When I query the table, it fails with an AnalysisException. Why?

>>> spark.sql("select * from student").show()
+-------+--------+
|roll_no|    name|
+-------+--------+
|      1|ravindra|
+-------+--------+
>>> spark.sql("select * from student where roll_no={0} and name={1}".format(id,name)).show()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/spark-2.3.0-bin-hadoop2.6/python/pyspark/sql/session.py", line 767, in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
  File "/usr/local/spark-2.3.0-bin-hadoop2.6/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/usr/local/spark-2.3.0-bin-hadoop2.6/python/pyspark/sql/utils.py", line 69, in deco
    raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: u"cannot resolve '`ravindra`' given input columns: [default.student.id, default.student.roll_no, default.student.name]; line 1 pos 47;
'Project [*]
+- 'Filter ((roll_no#21 = 0) && (name#22 = 'ravindra))
   +- SubqueryAlias `default`.`student`
      +- HiveTableRelation `default`.`student`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [id#20, roll_no#21, name#22]"

I usually use the %s string formatter in the SQL string:

sqlc.sql('select * from students where roll_no=%s and name="%s"' % ('1', 'ravindra')).show()
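The formatting happens in plain Python before Spark ever sees the query, so you can inspect the resulting SQL text directly; a minimal check (no Spark needed) of what the %s version produces:

```python
# The %s formatter, with explicit double quotes around the name placeholder,
# yields a SQL string whose value is a properly quoted string literal.
query = 'select * from students where roll_no=%s and name="%s"' % ('1', 'ravindra')
print(query)
# select * from students where roll_no=1 and name="ravindra"
```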

Looking at your SQL traceback: when ravindra was substituted into the SQL string, you must have missed the quotes around the name= value, so the SQL engine treated it as a column reference.

Your SQL query then becomes

select * from students where roll_no=1 and name=ravindra  -- no quotes

You can adjust your SQL string to

spark.sql("select * from student where roll_no={0} and name='{1}'".format(id,name)).show()

Quoting {1} gives you the result you want.
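The difference is visible purely at the string level; a minimal sketch (no Spark required) comparing the SQL text produced with and without the quoted placeholder:

```python
# Demonstrate why the unquoted placeholder fails: compare the SQL text
# produced by each format before it is handed to spark.sql().
roll_no = 1
name = "ravindra"

# Missing quotes: the value becomes a bare identifier in the SQL text,
# which the parser resolves (and fails) as a column name.
bad = "select * from student where roll_no={0} and name={1}".format(roll_no, name)

# Quoted placeholder: the value is a proper SQL string literal.
good = "select * from student where roll_no={0} and name='{1}'".format(roll_no, name)

print(bad)   # ...and name=ravindra   -> parsed as a column reference
print(good)  # ...and name='ravindra' -> parsed as a string literal
```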
