I am trying to access S3 data from a Spark application, using Spark SQL to retrieve the data:
DataFrame summaryQuery = sql.sql("Select score from summary order by updationDate desc");
summaryQuery.groupBy("sessionId").count().show();
summaryQuery.show();
I have also tried directly:
DataFrame summaryQuery = sql.sql("Select score from summary group by sessionId order by updationDate desc");
summaryQuery.show();
But in both cases I get an AnalysisException:
Exception in thread "main" org.apache.spark.sql.AnalysisException: expression 'score' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;
Please suggest how to query the data.
In Spark SQL, any selected column that does not appear in the GROUP BY clause must be wrapped in first(column_name), last(column_name), or some other aggregate function; first and last return, respectively, the first or last value encountered among the rows of each group. Note that updationDate in your ORDER BY clause has the same problem, since it is not grouped either, so it must be wrapped as well:
DataFrame summaryQuery = sql.sql("Select sessionId, first(score) from summary group by sessionId order by first(updationDate) desc");
summaryQuery.show();
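To see what first() actually does per group, here is a plain-Java sketch of the same semantics, with no Spark dependency (the Row record and the sample data are hypothetical, made up for illustration only):

```java
import java.util.*;

public class FirstPerGroup {
    // Hypothetical row shape (sessionId, score) -- illustrative only.
    public record Row(String sessionId, int score) {}

    // Keep the first score encountered per sessionId, mirroring
    // SELECT first(score) FROM summary GROUP BY sessionId.
    public static Map<String, Integer> firstScores(List<Row> rows) {
        Map<String, Integer> out = new LinkedHashMap<>();
        for (Row r : rows) {
            out.putIfAbsent(r.sessionId(), r.score());
        }
        return out;
    }

    public static void main(String[] args) {
        List<Row> summary = List.of(
            new Row("s1", 90), new Row("s1", 70), new Row("s2", 55));
        System.out.println(firstScores(summary)); // {s1=90, s2=55}
    }
}
```

As with Spark's first(), which value you get depends on the order in which rows arrive; use it only when you do not care which value per group is returned.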