I am generating classes with Avro. This is what my producer code looks like:
TweetInfo tweetInfo = TweetInfo.newBuilder()
.setTweetId(status.getId())
.setTweetCreatedAt(status.getCreatedAt().toString())
.setTweetMessage(status.getText())
.setUserId(user.getId())
.setUserCreatedAt(user.getCreatedAt().toString())
.setUserName(user.getName())
.setUserScreenName(user.getScreenName())
.build();
ProducerRecord<String, TweetInfo> data = new ProducerRecord<>(KafkaConstants.TOPIC, tweetInfo);
producer.send(data);
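For context, the serializer setup that triggers schema registration is usually configured like the sketch below. The property names are the standard Kafka/Confluent ones; the broker address and registry URL are assumptions inferred from the stack trace:

```properties
bootstrap.servers=localhost:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
# The Avro serializer contacts this endpoint to register the schema
# on first send; "Connection refused" here produces the trace below.
schema.registry.url=http://localhost:8081
```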
TweetInfo is the class generated from the Avro schema. When I run this program, I see a stack trace like the following:
2018-12-11 01:51:58.138 WARN 16244 --- [c Dispatcher[0]] o.i.service.kafka.TweetKafkaProducer : exception Error serializing Avro message
2018-12-11 01:51:59.162 ERROR 16244 --- [c Dispatcher[0]] i.c.k.s.client.rest.RestService : Failed to send HTTP request to endpoint: http://localhost:8081/subjects/twitterData-value/versions
java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method) ~[na:1.8.0_152]
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85) ~[na:1.8.0_152]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_152]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[na:1.8.0_152]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_152]
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172) ~[na:1.8.0_152]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_152]
at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_152]
at sun.net.NetworkClient.doConnect(NetworkClient.java:175) ~[na:1.8.0_152]
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463) ~[na:1.8.0_152]
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558) ~[na:1.8.0_152]
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242) ~[na:1.8.0_152]
at sun.net.www.http.HttpClient.New(HttpClient.java:339) ~[na:1.8.0_152]
at sun.net.www.http.HttpClient.New(HttpClient.java:357) ~[na:1.8.0_152]
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220) ~[na:1.8.0_152]
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156) ~[na:1.8.0_152]
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050) ~[na:1.8.0_152]
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984) ~[na:1.8.0_152]
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1334) ~[na:1.8.0_152]
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1309) ~[na:1.8.0_152]
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:178) [kafka-schema-registry-client-5.0.1.jar:na]
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:235) [kafka-schema-registry-client-5.0.1.jar:na]
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:326) [kafka-schema-registry-client-5.0.1.jar:na]
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:318) [kafka-schema-registry-client-5.0.1.jar:na]
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:313) [kafka-schema-registry-client-5.0.1.jar:na]
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:114) [kafka-schema-registry-client-5.0.1.jar:na]
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:153) [kafka-schema-registry-client-5.0.1.jar:na]
at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:79) [kafka-avro-serializer-5.0.1.jar:na]
at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:53) [kafka-avro-serializer-5.0.1.jar:na]
at org.apache.kafka.common.serialization.Serializer.serialize(Serializer.java:60) [kafka-clients-2.1.0.jar:na]
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:879) [kafka-clients-2.1.0.jar:na]
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:841) [kafka-clients-2.1.0.jar:na]
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:728) [kafka-clients-2.1.0.jar:na]
at org.interview.service.kafka.TweetKafkaProducer$1.onStatus(TweetKafkaProducer.java:95) [classes/:na]
at twitter4j.StatusStreamImpl.onStatus(StatusStreamImpl.java:75) [twitter4j-stream-4.0.6.jar:4.0.6]
at twitter4j.StatusStreamBase$1.run(StatusStreamBase.java:105) [twitter4j-stream-4.0.6.jar:4.0.6]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_152]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_152]
I have Zookeeper and Kafka running. Do I also need to run the Schema Registry? If so, is there a guide for doing that? I couldn't find one.
2 Answers
czfnxgou 1#
Failed to send HTTP request to endpoint

The Confluent Schema Registry server needs to be running. You may want to try hitting that HTTP endpoint yourself (see the docs below).

Not sure how you installed things, but you can download Confluent OSS, extract it somewhere, then in a terminal navigate to the bin directory of the extracted folder and run confluent start schema-registry. Note: this only works on Linux. Alternatively, if you want a "production deployment" configuration, you first need to edit the files in the etc folder, then run Zookeeper, Kafka, and the Registry with their respective scripts. Docs: Running Schema Registry
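Once the registry is up, a quick sanity check from the shell looks like the sketch below. The URL is the default one from the stack trace, and the subject name follows the serializer's default "<topic>-value" naming, which is why the error mentions "twitterData-value":

```shell
# Default Schema Registry endpoint, as seen in the stack trace.
REGISTRY_URL="http://localhost:8081"

# The Avro serializer registers value schemas under "<topic>-value",
# matching the "twitterData-value" subject in the error above.
TOPIC="twitterData"
SUBJECT="${TOPIC}-value"

# List the registered subjects; "Connection refused" here means the
# registry process is simply not running yet.
curl -s "${REGISTRY_URL}/subjects" || echo "registry not reachable"
```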
Regarding the comment:

When I try to run the commands in the post, it gives an error saying that bin is not a valid command

$ bin/... assumes that you have first cd'd into the extracted confluent-x.x.x folder.

By the way, there is an existing Kafka Connect project that interacts with the Twitter API.
vsikbqxv 2#
As @cricket_007 said, if you are on Windows, try using Docker.
The link below is a docker-compose file that runs Kafka, Zookeeper, Schema Registry, and Kafka REST, so you can easily test your producer: https://github.com/confluentinc/docker-images/blob/master/examples/fullstack/docker-compose.yml
Edit: sorry, my bad, that was a link to the old repo. Check the one below instead, which has the whole Confluent Platform (you can remove the services you don't need)!
https://github.com/confluentinc/cp-docker-images/blob/5.0.1-post/examples/cp-all-in-one/docker-compose.yml
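If the full compose file is more than you need, a minimal sketch with just the three services relevant here might look like this. The image tags and environment variable names follow the cp-docker-images 5.0.1 conventions; treat the exact values as assumptions to verify against that repo:

```yaml
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.0.1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  kafka:
    image: confluentinc/cp-kafka:5.0.1
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  schema-registry:
    image: confluentinc/cp-schema-registry:5.0.1
    depends_on:
      - kafka
    ports:
      - "8081:8081"   # matches the URL in the stack trace
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
```

With this running, the producer's schema.registry.url of http://localhost:8081 should resolve and the registration call in the stack trace should succeed.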