What is the difference between DataFrame.select() and DataFrame.toDF() in Spark SQL?



They both seem to return a new DataFrame.

Source code:

def toDF(self, *cols):
    jdf = self._jdf.toDF(self._jseq(cols))
    return DataFrame(jdf, self.sql_ctx)

def select(self, *cols):
    jdf = self._jdf.select(self._jcols(*cols))
    return DataFrame(jdf, self.sql_ctx)
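As the source suggests, toDF only accepts new column names (one per existing column), while select accepts arbitrary columns to project. A minimal sketch of that contrast in a spark-shell session (assuming the usual implicits are in scope):

```scala
// Assumes a spark-shell with spark.implicits._ available.
val df = Seq(("Piter", 22), ("Gurbe", 27)).toDF("name", "age")

// toDF must be given exactly one new name per existing column;
// a mismatched count fails at runtime:
// df.toDF("person")          // throws IllegalArgumentException

// select, by contrast, picks any subset of the columns:
df.select("name")             // DataFrame with schema [name: string]
```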

The difference is subtle.

For example, you can use .toDF("name", "age") to turn an RDD of unnamed tuples like ("Piter", 22) into a DataFrame, and you can rename the columns later by calling toDF again. For example:

scala> val rdd = sc.parallelize(List(("Piter", 22), ("Gurbe", 27)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[2] at parallelize at <console>:27
scala> val df = rdd.toDF("name", "age")
df: org.apache.spark.sql.DataFrame = [name: string, age: int]
scala> df.show()
+-----+---+
| name|age|
+-----+---+
|Piter| 22|
|Gurbe| 27|
+-----+---+
scala> val df = rdd.toDF("person", "age")
df: org.apache.spark.sql.DataFrame = [person: string, age: int]
scala> df.show()
+------+---+
|person|age|
+------+---+
| Piter| 22|
| Gurbe| 27|
+------+---+

With select you can pick columns, which you can use to project the table or to save only the columns you need:

scala> df.select("age").show()
+---+
|age|
+---+
| 22|
| 27|
+---+
scala> df.select("age").write.save("/tmp/ages.parquet")
Scaling row group sizes to 88.37% for 8 writers.
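To make the contrast concrete: select accepts arbitrary Column expressions (including per-column renames via alias), whereas toDF can only replace the names of all existing columns at once. A sketch, assuming the df defined above:

```scala
import org.apache.spark.sql.functions.col

// select can rename a single column via an alias...
df.select(col("name").alias("person"), col("age")).show()

// ...and can compute derived columns:
df.select(col("age") + 1).show()

// toDF, by contrast, must be given one new name per existing column:
df.toDF("person", "age").show()
```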

Hope this helps!
