Kafka producer hangs on send



The idea is that a streaming job pulling data from a custom source has to write to both Kafka and HDFS.

I wrote a (very) basic Kafka producer to do this, but the whole streaming job hangs on the send method.

import java.util.Properties

import org.apache.kafka.clients.producer.{ProducerConfig, ProducerRecord, KafkaProducer => ApacheKafkaProducer}

class KafkaProducer(val kafkaBootstrapServers: String, val kafkaTopic: String, val sslCertificatePath: String, val sslCertificatePassword: String) {

  val kafkaProps: Properties = new Properties()
  kafkaProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBootstrapServers)
  kafkaProps.put("acks", "1")
  kafkaProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
  kafkaProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
  kafkaProps.put("ssl.truststore.location", sslCertificatePath)
  kafkaProps.put("ssl.truststore.password", sslCertificatePassword)

  // Renamed import avoids the clash between this wrapper class and the Kafka client class of the same name.
  val kafkaProducer: ApacheKafkaProducer[Long, Array[String]] = new ApacheKafkaProducer(kafkaProps)

  // One ProducerRecord per inner list; the message timestamp is used as the record key.
  def sendKafkaMessage(message: Message): Unit = {
    message.data.foreach(list => {
      val producerRecord: ProducerRecord[Long, Array[String]] = new ProducerRecord[Long, Array[String]](kafkaTopic, message.timeStamp.getTime, list.toArray)
      kafkaProducer.send(producerRecord)
    })
  }
}

And the code that calls the producer:

receiverStream.foreachRDD(rdd => {
  val messageRowRDD: RDD[Row] = rdd.mapPartitions(partition => {
    val parser: Parser = new Parser
    // One producer per partition, created inside mapPartitions so it is instantiated on the executor.
    val kafkaProducer: KafkaProducer = new KafkaProducer(kafkaBootstrapServers, kafkaTopic, kafkaSslCertificatePath, kafkaSslCertificatePass)
    val newPartition = partition.map(message => {
      Logger.getLogger("importer").error("Writing Message to Kafka...")
      kafkaProducer.sendKafkaMessage(message)
      Logger.getLogger("importer").error("Finished writing Message to Kafka")
      // Parse the same message into Rows for the HDFS path.
      message.data.map(singleMessage => parser.parseMessage(message.timeStamp.getTime, singleMessage))
    })
    newPartition.flatten
  })

  val df = sqlContext.createDataFrame(messageRowRDD, Schema.messageSchema)
  Logger.getLogger("importer").info("Entries-count: " + df.count())
  val row = Try(df.first)
  row match {
    case Success(s) => Persister.writeDataframeToDisk(df, outputFolder)
    case Failure(e) => Logger.getLogger("importer").warn("Resulting DataFrame is empty. Nothing can be written")
  }
})

From the logs I can tell that every executor reaches the "Writing Message to Kafka..." point, but no further. All executors hang on the send and no exception is thrown.
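
One way to rule out a silent failure is to make the send observable instead of fire-and-forget, for example by blocking on the Future it returns with a timeout (or by passing a Callback). Below is a sketch of such a variant of sendKafkaMessage inside the wrapper class above; the method name and the 30-second timeout are my own choices, not from the original code:

import java.util.concurrent.TimeUnit
import org.apache.kafka.clients.producer.RecordMetadata

def sendKafkaMessageSync(message: Message): Unit = {
  message.data.foreach(list => {
    val record = new ProducerRecord[Long, Array[String]](kafkaTopic, message.timeStamp.getTime, list.toArray)
    // Block on the Future returned by send() with a timeout, so a stuck request
    // surfaces as a TimeoutException in the executor log instead of hanging silently.
    val metadata: RecordMetadata = kafkaProducer.send(record).get(30, TimeUnit.SECONDS)
    Logger.getLogger("importer").info(
      s"Record acknowledged by ${metadata.topic()}-${metadata.partition()} at offset ${metadata.offset()}")
  })
}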

The Message class is a very simple case class with two fields: a timestamp and an array of strings.
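
Roughly along these lines; the exact field types are an assumption inferred from the calls to message.timeStamp.getTime and message.data in the snippets above, not taken from the original post:

import java.sql.Timestamp

// Assumed shape: timeStamp exposes getTime, and data is iterated as collections of strings
// (each inner collection becomes one ProducerRecord and one set of parsed Rows).
case class Message(timeStamp: Timestamp, data: Seq[Seq[String]])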

This turned out to be caused by the acks setting in Kafka.

With acks set to 1, sends complete much faster.
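
For reference, acks controls how many acknowledgements the producer waits for before a send is considered complete (0 = none, 1 = the partition leader only, all = the full in-sync replica set). A minimal sketch of the relevant producer settings; the max.block.ms line is an optional addition of mine, not from the original post:

import org.apache.kafka.clients.producer.ProducerConfig

// acks=1: the leader acknowledges the write without waiting for followers.
kafkaProps.put(ProducerConfig.ACKS_CONFIG, "1")
// Optional (assumption): bound how long send() may block on metadata or buffer space,
// so misconfiguration surfaces as a TimeoutException rather than an apparent hang.
kafkaProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "30000")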
