Apache Spark Dataset.foreach with the Aerospike client



I want to retrieve rows from Apache Hive through Apache Spark and put each row into an Aerospike cache.

Here is a simple case:

var dataset = session.sql("select * from employee");
final var aerospikeClient = aerospike;  // to remove binding between lambda and the service class itself
dataset.foreach(row -> {
    var key = new Key("namespace", "set", randomUUID().toString());
    aerospikeClient.add(
        key,
        new Bin("json-repr", row.json())
    );
});

I get this error:

Caused by: java.io.NotSerializableException: com.aerospike.client.reactor.AerospikeReactorClient

Obviously I cannot make AerospikeReactorClient serializable. I also tried adding dataset.collectAsList(), and that worked. However, as I understand it, that approach loads everything onto a single node, and there may be a huge amount of data, so it is not an option.

What is the best practice for handling this?

You can write directly from the DataFrame. There is no need to loop through the Dataset.

Launch the Spark shell and import the com.aerospike.spark.sql._ package:

$ spark-shell
scala> import com.aerospike.spark.sql._
import com.aerospike.spark.sql._

An example of writing data to Aerospike:

import scala.collection.mutable.ArrayBuffer
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

val TEST_COUNT = 100
val simpleSchema: StructType = new StructType(
  Array(
    StructField("one", IntegerType, nullable = false),
    StructField("two", StringType, nullable = false),
    StructField("three", DoubleType, nullable = false)
  ))
val simpleDF = {
  val inputBuf = new ArrayBuffer[Row]()
  for (i <- 1 to TEST_COUNT) {
    val one = i
    val two = "two:" + i
    val three = i.toDouble
    val r = Row(one, two, three)
    inputBuf.append(r)
  }
  val inputRDD = spark.sparkContext.parallelize(inputBuf.toSeq)
  spark.createDataFrame(inputRDD, simpleSchema)
}
// Write the sample data to Aerospike
simpleDF.write
  .format("aerospike")                          // Aerospike-specific data source format
  .option("aerospike.writeset", "spark-test")   // write to this set
  .option("aerospike.updateByKey", "one")       // column used to construct the primary key
  .option("aerospike.write.mode", "update")
  .save()
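
Applied to the original question, the same approach works from Java as well: read the Hive table into a Dataset and hand it straight to the connector, so no executor ever touches a non-serializable client object. This is only a minimal sketch under a few assumptions: the Aerospike Spark connector is on the classpath, the target set name is made up, and the employee table is assumed to have an id column usable as the primary key.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

Dataset<Row> employees = session.sql("select * from employee");

employees.write()
        .format("aerospike")
        .option("aerospike.writeset", "employee-cache")  // assumed target set name
        .option("aerospike.updateByKey", "id")           // assumed primary-key column
        .option("aerospike.write.mode", "update")
        .save();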

I solved this problem by creating the AerospikeClient manually inside the foreach lambda.

var dataset = session.sql("select * from employee");
dataset.foreach(row -> {
    var key = new Key("namespace", "set", randomUUID().toString());
    // newAerospikeClient(...) builds a client on the executor from the (serializable) properties
    newAerospikeClient(aerospikeProperties).add(
        key,
        new Bin("json-repr", row.json())
    );
});

Now I only need to declare AerospikeProperties as Serializable.
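
Since this version opens a new client for every single row, a common refinement of the same idea is foreachPartition: the serializable properties still travel to the executors, but only one client is created per partition and it is closed when the partition finishes. The following is just a sketch under the same assumptions, reusing the hypothetical newAerospikeClient(aerospikeProperties) factory from above and assuming it returns a synchronous com.aerospike.client.AerospikeClient:

import static java.util.UUID.randomUUID;

import org.apache.spark.api.java.function.ForeachPartitionFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;

Dataset<Row> dataset = session.sql("select * from employee");
dataset.foreachPartition((ForeachPartitionFunction<Row>) rows -> {
    // One client per partition, built on the executor from the serializable properties.
    AerospikeClient client = newAerospikeClient(aerospikeProperties);  // hypothetical factory from above
    try {
        while (rows.hasNext()) {
            Row row = rows.next();
            var key = new Key("namespace", "set", randomUUID().toString());
            client.add(null, key, new Bin("json-repr", row.json()));  // null = default write policy
        }
    } finally {
        client.close();  // release connections before the partition task finishes
    }
});

The cast to ForeachPartitionFunction only disambiguates the Java overload of foreachPartition; otherwise the pattern is the same as the per-row workaround.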
