I am using pyspark [spark 2.3.1] with HBase 1.2.1, and I want to know what the best way of accessing HBase through pyspark is.
I did some initial searching and found that there are a few options available, such as using shc-core: 1.1.1-2.1-s_2.11.jar. This can get the job done, but wherever I try to look for examples, at most places the code is written in Scala and the examples are Scala based as well. I tried implementing the basic code in pyspark:
from pyspark import SparkContext
from pyspark.sql import SQLContext

def main():
    sc = SparkContext()
    sqlc = SQLContext(sc)
    data_source_format = 'org.apache.spark.sql.execution.datasources.hbase'
    catalog = ''.join("""{
        "table":{"namespace":"default", "name":"firsttable"},
        "rowkey":"key",
        "columns":{
            "firstcol":{"cf":"rowkey", "col":"key", "type":"string"},
            "secondcol":{"cf":"d", "col":"colname", "type":"string"}
        }
    }""".split())
    df = sqlc.read.options(catalog=catalog).format(data_source_format).load()
    df.select("secondcol").show()

# entry point for PySpark application
if __name__ == '__main__':
    main()
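As an aside, the catalog is just a JSON string, so it can also be built from a plain Python dict and serialized with json.dumps, which avoids the whitespace-stripping ''.join(...split()) trick. A minimal sketch, equivalent to the catalog above (not the code I actually ran):

import json

catalog = json.dumps({
    "table": {"namespace": "default", "name": "firsttable"},
    "rowkey": "key",
    "columns": {
        # rowkey column: the column family must be the literal string "rowkey"
        "firstcol": {"cf": "rowkey", "col": "key", "type": "string"},
        # regular column: column family "d", qualifier "colname"
        "secondcol": {"cf": "d", "col": "colname", "type": "string"}
    }
})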
and ran it using:
spark-submit --master yarn-client --files /opt/hbase-1.1.2/conf/hbase-site.xml --packages com.hortonworks:shc-core:1.1.1-2.1-s_2.11 --jars /home/ubuntu/hbase-spark-2.0.0-alpha4.jar HbaseMain2.py
It is returning me blank output:
+---------+
|secondcol|
+---------+
+---------+
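For what it is worth, a quick sanity check (just a sketch, not something from the run above) would be to print the schema and row count of the loaded DataFrame, to tell apart a catalog that does not map to the table at all from a scan that simply returns zero rows:

# df: the DataFrame loaded from the SHC data source in the code above
# the schema comes from the catalog, so it should list firstcol and secondcol
df.printSchema()
# 0 here means the scan over the HBase table returned no rows
print("row count:", df.count())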
I am not sure what I am doing wrong here? Also not sure what the best approach would be?
Any references would be appreciated.
Regards
Finally, using SHC, I am able to connect to HBase 1.2.1 with pyspark code. Following is my work:
- All my Hadoop [NameNode, DataNode, NodeManager, ResourceManager] and HBase [HMaster, HRegionServer, HQuorumPeer] daemons were up and running on my EC2 instance.
- I placed an emp.csv file at the HDFS location /test/emp.csv, with the following data:
key,empId,empName,empWeight
1,"E007","Bhupesh",115.10
2,"E008","Chauhan",110.23
3,"E009","Prithvi",90.0
4,"E0010","Raj",80.0
5,"E0011","Chauhan",100.0
- I created a readwriteHBase.py file with the following code [it reads the emp.csv file from HDFS, first creates tblEmployee in HBase, pushes the data into tblEmployee, then reads some data back from the same table and displays it on the console]; an alternative that declares the CSV schema explicitly instead of using inferSchema is sketched at the end of this answer:
from pyspark.sql import SparkSession

def main():
    spark = SparkSession.builder.master("yarn-client").appName("HelloSpark").getOrCreate()

    dataSourceFormat = "org.apache.spark.sql.execution.datasources.hbase"
    writeCatalog = ''.join("""{
        "table":{"namespace":"default", "name":"tblEmployee", "tableCoder":"PrimitiveType"},
        "rowkey":"key",
        "columns":{
            "key":{"cf":"rowkey", "col":"key", "type":"int"},
            "empId":{"cf":"personal", "col":"empId", "type":"string"},
            "empName":{"cf":"personal", "col":"empName", "type":"string"},
            "empWeight":{"cf":"personal", "col":"empWeight", "type":"double"}
        }
    }""".split())

    writeDF = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("/test/emp.csv")
    print("csv file read", writeDF.show())
    writeDF.write.options(catalog=writeCatalog, newtable=5).format(dataSourceFormat).save()
    print("csv file written to HBase")

    readCatalog = ''.join("""{
        "table":{"namespace":"default", "name":"tblEmployee"},
        "rowkey":"key",
        "columns":{
            "key":{"cf":"rowkey", "col":"key", "type":"int"},
            "empId":{"cf":"personal", "col":"empId", "type":"string"},
            "empName":{"cf":"personal", "col":"empName", "type":"string"}
        }
    }""".split())

    print("going to read data from Hbase table")
    readDF = spark.read.options(catalog=readCatalog).format(dataSourceFormat).load()
    print("data read from HBase table")
    readDF.select("empId", "empName").show()
    readDF.show()

# entry point for PySpark application
if __name__ == '__main__':
    main()
- I ran this script on the VM console with the command:
spark-submit --master yarn-client --packages com.hortonworks:shc-core:1.1.1-2.1-s_2.11 --repositories http://nexus-private.hortonworks.com/nexus/content/repositories/IN-QA/ readwriteHBase.py
- Intermediate result: after reading the CSV file:
+---+-----+-------+---------+
|key|empId|empName|empWeight|
+---+-----+-------+---------+
|  1| E007|Bhupesh|    115.1|
|  2| E008|Chauhan|   110.23|
|  3| E009|Prithvi|     90.0|
|  4|E0010|    Raj|     80.0|
|  5|E0011|Chauhan|    100.0|
+---+-----+-------+---------+
- Final output: after reading the data back from the HBase table (a small filtered-read sketch follows right after this output):
+-----+-------+
|empId|empName|
+-----+-------+
| E007|Bhupesh|
| E008|Chauhan|
| E009|Prithvi|
|E0010|    Raj|
|E0011|Chauhan|
+-----+-------+
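The DataFrame read back through SHC behaves like any other DataFrame, so ordinary projections and filters can be applied to it. A small sketch, reusing the spark session, readCatalog and dataSourceFormat defined in readwriteHBase.py above (the key > 2 condition is purely for illustration):

# spark, readCatalog, dataSourceFormat: as defined in readwriteHBase.py above
readDF = spark.read.options(catalog=readCatalog).format(dataSourceFormat).load()

# keep only rows whose integer rowkey is greater than 2, then project two columns
readDF.filter(readDF.key > 2).select("empId", "empName").show()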
Note: While creating the HBase table and inserting data into it, SHC expects the number of regions to be greater than 3, hence I added the newtable option while writing the data to HBase:
options(catalog=writeCatalog, newtable=5)
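As mentioned above, readwriteHBase.py relies on inferSchema when reading emp.csv. If the column types should be fixed up front instead, the schema can be declared explicitly; a sketch under the assumption that the types should match the write catalog (int key, string ids/names, double weight):

from pyspark.sql.types import StructType, StructField, IntegerType, StringType, DoubleType

empSchema = StructType([
    StructField("key", IntegerType(), False),
    StructField("empId", StringType(), True),
    StructField("empName", StringType(), True),
    StructField("empWeight", DoubleType(), True),
])

# same CSV read as in readwriteHBase.py, but with an explicit schema instead of inferSchema
writeDF = spark.read.format("csv").option("header", "true").schema(empSchema).load("/test/emp.csv")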