Kafka producer stops sending messages after a certain number of messages



My producer stops sending after 14,116 messages. I have already raised the `nofile` limit from the default to 1048576.

After about four or five minutes the producer starts sending again, but then it stops once more, this time after 21,880 messages...

I'm stuck here and have no idea where the problem might be... Any ideas, guys?

See the code below for more details.

```
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import com.alibaba.fastjson.JSON; // needed for JSON.toJSONString / JSON.parseObject

import java.sql.Timestamp;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Properties;

public class KafkaCreateData extends Thread {

    public static final String topic = "web_access";
    public static String bootstrap_servers = "xxxxxxxxxxxxx:9092";
    public static String zookeeper_connect = "xxxxxxxxxxxxx:2181";
    public static int msg_sent_count = 0;
    public static int userId = 0;

    public static void createData() {
        Entity entity = new Entity();
        Properties props = new Properties();
        // EC2 (Kafka producer IP here)
        props.put("bootstrap.servers", bootstrap_servers);
        props.put("zookeeper.connect", zookeeper_connect); // not a producer config, ignored by KafkaProducer
        props.put("group.id", "metric-group");             // consumer setting, ignored by the producer
        props.put("batch.size", 32768);
        props.put("buffer.memory", 67108864);
        props.put("send.buffer.bytes", 67108864);
        props.put("receive.buffer.bytes", -1);
//        props.put("max.block.ms", 1);
//        props.put("linger.ms", 1);
//        props.put("request.timeout.ms", 1);
//        props.put("delivery.timeout.ms", 5);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");   // key serialization
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer"); // value serialization
        props.put("request.required.acks", "0");

        // NOTE: a new producer is created on every call and never closed
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // phone brand
        String[] phoneArray = {"iPhone", "htc", "google", "xiaomi", "huawei"};
        // online status
        String[] onlineArray = {"y", "n"};
        // city
        String[] cityArray = {"Taipei", "Hong Kong", "London", "Paris", "Tokyo", "New York", "Singapore", "Rome"};

        // generate brand randomly
        int k = (int) (Math.random() * 5);
        String phoneName = phoneArray[k];
        // generate online status randomly
        int m = (int) (Math.random() * 2);
        String online = onlineArray[m];
        // generate city randomly
        int n = (int) (Math.random() * 8);
        String city = cityArray[n];

        // event timestamp ("SSS" is the milliseconds pattern; "sss" would just repeat the seconds)
        SimpleDateFormat sf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        Date date = new Date();
        String loginTime = sf.format(new Timestamp(date.getTime()));
//        String user_id = UUID.randomUUID().toString();

        // load data into the entity
        entity.setCity(city);
        entity.setLoginTime(loginTime);
        entity.setOnline(online);
        entity.setPhoneName(phoneName);
        userId = userId + 1;
        entity.setuserId(userId);

        ProducerRecord<String, String> record = new ProducerRecord<>(topic, JSON.toJSONString(entity));
        producer.send(record);
        System.out.println("sending message: " + JSON.toJSONString(entity));
        msg_sent_count = msg_sent_count + 1;
        System.out.println("msg_sent_count: " + msg_sent_count);
    }

    public static void flink_streaming_job() throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap_servers);
        props.put("zookeeper.connect", zookeeper_connect);
        props.put("group.id", "metric-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "latest");
        System.out.println("Before addSource");
        env.addSource(
                new FlinkKafkaConsumer011<>(
                        topic, new SimpleStringSchema(), props
                )
//                .setStartFromLatest()
        )
//                .setParallelism(9)
                .map(string -> JSON.parseObject(string, Entity.class))
                .addSink(new MysqlSink());
        System.out.println("before execute");
        env.execute("Flink add sink");
        System.out.println("start to execute");
    }

    @Override
    public void run() {
        try {
            createTheData();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void createTheData() throws InterruptedException {
        while (true) {
            createData();
            Thread.sleep(1); // at 1 ms the broker gets overwhelmed; 500 ms keeps it working
        }
    }

    public static void main(String[] args) throws Exception {
        KafkaCreateData producerThread = new KafkaCreateData();
        producerThread.start();
        createData();
//        Flink job on EMR
//        flink_streaming_job();
    }
}
```


Sorry, I can't comment yet, so posting this as an answer. Could you try the following:

  1. Create only one producer: right now a new producer is created every time a message is sent, which is not best practice. KafkaProducer is thread-safe, so your program needs only a single producer instance for a given key/value serializer and broker configuration; see the sketch after this list.
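A minimal sketch of that idea, reusing the topic, serializers, and `Entity` class from your question. The class name `SingleProducerSketch`, the `buildProducer` helper, and the shutdown hook are illustrative choices, not part of your original code:

```
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import com.alibaba.fastjson.JSON;
import java.util.Properties;

public class SingleProducerSketch {

    // One producer for the whole JVM: KafkaProducer is thread-safe,
    // so every sending thread can share this instance instead of
    // constructing its own on each call.
    private static final KafkaProducer<String, String> PRODUCER = buildProducer();

    private static KafkaProducer<String, String> buildProducer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "xxxxxxxxxxxxx:9092"); // same placeholder as in the question
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    // createData() now only builds the payload and reuses the shared producer.
    public static void createData(Entity entity) {
        String json = JSON.toJSONString(entity);
        PRODUCER.send(new ProducerRecord<>("web_access", json));
    }

    public static void main(String[] args) {
        // Flush and close the single instance exactly once, when the JVM exits.
        Runtime.getRuntime().addShutdownHook(new Thread(PRODUCER::close));
        // ... start the sending threads here, all calling createData(...)
    }
}
```

With one long-lived producer, batching works as intended and only one set of broker connections is ever opened, instead of a fresh set per message that is never closed.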
