How do I connect Apache Kafka to Amazon S3?

gk7wooem · posted 2021-06-06 in Kafka

I want to use Kafka Connect to store data from Kafka in an S3 bucket. I already have a Kafka topic running and an S3 bucket created. My topic contains protobuf data. I tried https://github.com/qubole/streamx and got the following error:

[2018-10-04 13:35:46,512] INFO Revoking previously assigned partitions [] for group connect-s3-sink (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:280)
 [2018-10-04 13:35:46,512] INFO (Re-)joining group connect-s3-sink (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:326)
 [2018-10-04 13:35:46,645] INFO Successfully joined group connect-s3-sink with generation 1 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:434)
 [2018-10-04 13:35:46,692] INFO Setting newly assigned partitions [ssp.impressions-11, ssp.impressions-10, ssp.impressions-7, ssp.impressions-6, ssp.impressions-9, ssp.impressions-8, ssp.impressions-3, ssp.impressions-2, ssp.impressions-5, ssp.impressions-4, ssp.impressions-1, ssp.impressions-0] for group connect-s3-sink (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:219)
 [2018-10-04 13:35:47,193] ERROR Task s3-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:142)
 java.lang.NullPointerException
    at io.confluent.connect.hdfs.HdfsSinkTask.close(HdfsSinkTask.java:122)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:290)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:421)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:146)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2018-10-04 13:35:47,194] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:143)
[2018-10-04 13:35:51,235] INFO Reflections took 6844 ms to scan 259 urls, producing 13517 keys and 95788 values (org.reflections.Reflections:229)

Here are the steps I took:

1. Cloned the repository.
2. Built it:

       mvn -DskipTests package

3. Edited config/connect-standalone.properties:

       bootstrap.servers=ip-myip.ec2.internal:9092
       key.converter=com.qubole.streamx.ByteArrayConverter
       value.converter=com.qubole.streamx.ByteArrayConverter

4. Edited config/quickstart-s3.properties:

       name=s3-sink
       connector.class=com.qubole.streamx.s3.S3SinkConnector
       format.class=com.qubole.streamx.SourceFormat
       tasks.max=1
       topics=ssp.impressions
       flush.size=3
       s3.url=s3://myaccess_key:mysecret_key@mybucket/demo

5. Started the connector:

       connect-standalone /etc/kafka/connect-standalone.properties quickstart-s3.properties

I would like to know whether what I am doing is right, or whether there is another way to get data from Kafka into S3.

w8biq8rn1#

An alternative is to write a consumer that rotates its output files (log rotation) and then use a cron job to upload the rotated files to S3.
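
A minimal sketch of that approach, assuming the kafka-python and boto3 libraries; the topic and broker address are taken from the question, the bucket name and rotation threshold are hypothetical, and the cron upload step is folded into the consumer loop for brevity:

    import time
    import boto3
    from kafka import KafkaConsumer

    TOPIC = "ssp.impressions"    # topic from the question
    BUCKET = "mybucket"          # hypothetical bucket name
    ROTATE_EVERY = 1000          # messages per rotated file (hypothetical)

    s3 = boto3.client("s3")
    consumer = KafkaConsumer(TOPIC, bootstrap_servers="ip-myip.ec2.internal:9092")

    buffer, part = [], 0
    for msg in consumer:
        buffer.append(msg.value)           # raw protobuf bytes, stored as-is
        if len(buffer) >= ROTATE_EVERY:    # "rotate" the current chunk
            key = "demo/%s-%d-%05d.bin" % (TOPIC, int(time.time()), part)
            s3.put_object(Bucket=BUCKET, Key=key, Body=b"".join(buffer))
            buffer, part = [], part + 1

Note that unlike Kafka Connect, a hand-rolled consumer like this gives you no offset management or delivery guarantees beyond what you build yourself.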

yqlxgs2m2#

You can use Kafka Connect with the Kafka Connect S3 connector for this integration.
Kafka Connect is part of Apache Kafka, and the S3 connector is an open-source connector that can be used either standalone or as part of Confluent Platform.
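As a concrete illustration, a minimal properties file for the Confluent S3 sink connector could look like the following; the bucket name, region, and flush.size are placeholders, and AWS credentials are picked up from the standard AWS credential chain rather than being embedded in a URL:

    name=s3-sink
    connector.class=io.confluent.connect.s3.S3SinkConnector
    storage.class=io.confluent.connect.s3.storage.S3Storage
    # ByteArrayFormat writes the raw message bytes, which suits protobuf payloads
    format.class=io.confluent.connect.s3.format.bytearray.ByteArrayFormat
    topics=ssp.impressions
    s3.bucket.name=mybucket
    s3.region=us-east-1
    flush.size=1000
    tasks.max=1
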
For general background and examples of Kafka Connect, this series of posts may help:
https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-1/
https://www.confluent.io/blog/the-simplest-useful-kafka-connect-data-pipeline-in-the-world-or-thereabouts-part-2/
https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-3/
Disclaimer: I work for Confluent and wrote the blog posts above.
April 2020: I recorded a video showing how to use the S3 sink: https://rmoff.dev/kafka-s3-video
