I'm using the following Spark code to load specified columns of my HBase/Phoenix table into a Spark DataFrame. I can specify which columns to load, but can I also specify which rows? Or do I have to load all rows?
import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.phoenix.spark._

sc.stop()
val sc = new SparkContext("local", "phoenix-test")
val sqlContext = new SQLContext(sc)

// Holds the HBase/Phoenix connection settings (e.g. hbase.zookeeper.quorum)
val configuration = new Configuration()

val df = sqlContext.phoenixTableAsDataFrame(
  "TABLENAME", Array("ROWKEY", "CF.COL1", "CF.COL2", "CF.COL3"),
  conf = configuration
)
You can pass a predicate in the call to restrict which rows are retrieved, e.g.:
val df = sqlContext.phoenixTableAsDataFrame(
  "TABLENAME", Array("ROWKEY", "CF.COL1", "CF.COL2", "CF.COL3"),
  conf = configuration,
  predicate = Some("ROWKEY IN ('1', '2')")
)
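Alternatively, a sketch using the Spark data source API instead of the `phoenixTableAsDataFrame` helper: phoenix-spark registers the `org.apache.phoenix.spark` format, and row filters applied on the resulting DataFrame are pushed down to Phoenix rather than evaluated in Spark. The `zkUrl` value below is a placeholder; adjust the table and quorum for your cluster.

```scala
// Load the Phoenix table via the DataFrame reader; the filter on ROWKEY
// is pushed down to Phoenix, so only the matching rows are scanned.
val df = sqlContext.read
  .format("org.apache.phoenix.spark")
  .option("table", "TABLENAME")
  .option("zkUrl", "localhost:2181")  // placeholder ZooKeeper quorum
  .load()
  .select("ROWKEY", "CF.COL1", "CF.COL2", "CF.COL3")
  .filter("ROWKEY IN ('1', '2')")
```

This requires a running HBase/Phoenix cluster, so it cannot be executed standalone, but it is functionally equivalent to passing `predicate = Some(...)` above.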