Confluent HDFS connector is losing messages

Asked by w9apscun on 2021-05-29 in Hadoop

Community, can you help me understand why about 3% of my messages never end up in HDFS? I wrote a simple Java producer that sends ten million messages.

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class KafkaProducerWrapper {

    private static final Logger logger = LoggerFactory.getLogger(KafkaProducerWrapper.class);

    public static final String TEST_SCHEMA = "{"
            + "\"type\":\"record\","
            + "\"name\":\"myrecord\","
            + "\"fields\":["
            + "  { \"name\":\"str1\", \"type\":\"string\" },"
            + "  { \"name\":\"str2\", \"type\":\"string\" },"
            + "  { \"name\":\"int1\", \"type\":\"int\" }"
            + "]}";

    private final String topic;
    private final KafkaProducer<String, GenericRecord> producer;
    private final Schema schema;

    // the send() callback runs on the producer's I/O thread,
    // so the counters must be thread-safe
    private final AtomicLong messageCounter = new AtomicLong();
    private final AtomicLong messageErrorCounter = new AtomicLong();

    public KafkaProducerWrapper(String topic) throws UnknownHostException {
        // store topic name
        this.topic = topic;

        // initialize kafka producer
        Properties config = new Properties();
        config.put("client.id", InetAddress.getLocalHost().getHostName());
        config.put("bootstrap.servers", "myserver-1:9092");
        config.put("key.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        config.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        config.put("schema.registry.url", "http://myserver-1:8089");
        config.put("acks", "all");

        producer = new KafkaProducer<>(config);

        // parse schema
        Schema.Parser parser = new Schema.Parser();
        schema = parser.parse(TEST_SCHEMA);
    }

    public void send() {
        // generate key
        int key = (int) (Math.random() * 20);

        // generate record
        GenericData.Record r = new GenericData.Record(schema);
        r.put("str1", "text" + key);
        r.put("str2", "text2" + key);
        r.put("int1", key);

        final ProducerRecord<String, GenericRecord> record = new ProducerRecord<>(topic, "K" + key, r);
        producer.send(record, new Callback() {
            public void onCompletion(RecordMetadata metadata, Exception e) {
                if (e != null) {
                    logger.error("Send failed for record {}", record, e);
                    messageErrorCounter.incrementAndGet();
                    return;
                }
                logger.debug("Send succeeded for record {}", record);
                messageCounter.incrementAndGet();
            }
        });
    }

    public String getStats() {
        return "Messages sent: " + messageCounter.get() + ", errors: " + messageErrorCounter.get();
    }

    public long getMessageCounter() {
        // total number of acknowledged sends, successful or failed
        return messageCounter.get() + messageErrorCounter.get();
    }

    public void close() {
        producer.close();
    }

    public static void main(String[] args) throws InterruptedException, UnknownHostException {
        // initialize kafka producer
        KafkaProducerWrapper kafkaProducerWrapper = new KafkaProducerWrapper("my-test-topic");

        long max = 10000000L;
        for (long i = 0; i < max; i++) {
            kafkaProducerWrapper.send();
        }

        logger.info("producer-demo sent all messages");

        // wait until every send has been acknowledged (or has failed)
        while (kafkaProducerWrapper.getMessageCounter() < max) {
            logger.info(kafkaProducerWrapper.getStats());
            Thread.sleep(2000);
        }

        logger.info(kafkaProducerWrapper.getStats());
        kafkaProducerWrapper.close();
    }
}

I am using the Confluent HDFS Connector in standalone mode to write the data to HDFS. The configuration is as follows:

name=hdfs-consumer-test
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1

topics=my-test-topic

hdfs.url=hdfs://my-cluster/kafka-test
hadoop.conf.dir=/etc/hadoop/conf/
flush.size=100000
rotate.interval.ms=20000

# increase timeouts to avoid CommitFailedException
consumer.session.timeout.ms=300000
consumer.request.timeout.ms=310000

heartbeat.interval.ms=60000
session.timeout.ms=100000
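
For completeness: in standalone mode this connector file is passed to connect-standalone together with a worker properties file, which the question does not show. The following is only a sketch of what a matching worker config of that era might look like (Avro converters pointing at the same Schema Registry; the offsets file path is a placeholder):

bootstrap.servers=myserver-1:9092
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://myserver-1:8089
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://myserver-1:8089
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets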

The connector does write data to HDFS, but even after waiting longer than the 20000 ms rotate.interval.ms, not all of the messages arrive:

scala> spark.read.avro("/kafka-test/topics/my-test-topic/partition=*/my-test-topic*").count()
res0: Long = 9749015
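
To narrow down whether the records ever reached the topic at all (producer-side loss vs. connector-side loss), this count can be compared against the topic's end offsets. A minimal sketch, assuming a kafka-clients version that provides beginningOffsets/endOffsets (0.10.1+); the connection settings are reused from the question:

import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class TopicOffsetCheck {
    public static void main(String[] args) {
        Properties config = new Properties();
        config.put("bootstrap.servers", "myserver-1:9092");
        config.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        config.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(config)) {
            // list all partitions of the topic
            List<TopicPartition> partitions = consumer.partitionsFor("my-test-topic").stream()
                    .map(p -> new TopicPartition(p.topic(), p.partition()))
                    .collect(Collectors.toList());

            Map<TopicPartition, Long> begin = consumer.beginningOffsets(partitions);
            Map<TopicPartition, Long> end = consumer.endOffsets(partitions);

            // total number of records currently stored in the topic
            long total = partitions.stream()
                    .mapToLong(tp -> end.get(tp) - begin.get(tp))
                    .sum();
            System.out.println("Records in topic: " + total);
        }
    }
}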

Do you have any idea what the reason for this behavior is? Where is my mistake? I am using Confluent 3.0.1 / Kafka 0.10.0.1.

Answer 1 (t3psigkw):

Are you seeing that it is the last few messages that are not moved to HDFS? If so, you are most likely hitting the issue described in https://github.com/confluentinc/kafka-connect-hdfs/pull/100
Try sending one more message to the topic after rotate.interval.ms has expired, to verify that this is the problem you are running into. If you need to rotate based on time, upgrading to pick up the fix is probably a good idea.
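
A quick way to run that test, reusing the KafkaProducerWrapper from the question (the 30-second sleep is just an arbitrary value longer than the 20000 ms rotate.interval.ms):

public class RotateIntervalTest {
    public static void main(String[] args) throws Exception {
        KafkaProducerWrapper wrapper = new KafkaProducerWrapper("my-test-topic");

        // wait longer than rotate.interval.ms so the connector's
        // currently open file is overdue for time-based rotation
        Thread.sleep(30000);

        // a single extra record gives the connector the chance to notice
        // the expired interval, rotate the file, and commit the stuck records
        wrapper.send();
        wrapper.close();
    }
}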
