Example of joining streams in Java with Apache Kafka?

ogsagwnx  posted 2021-06-06 in Kafka

I'm looking for an example of how to do this kind of thing with Kafka Streams, i.e. join the customers table with the addresses table and sink the data into Elasticsearch:

Customers

+------+------------+----------------+-----------------------+
| id   | first_name | last_name      | email                 |
+------+------------+----------------+-----------------------+
| 1001 | Sally      | Thomas         | sally.thomas@acme.com |
| 1002 | George     | Bailey         | gbailey@foobar.com    |
| 1003 | Edward     | Davidson       | ed@walker.com         |
| 1004 | Anne       | Kim            | annek@noanswer.org    |
+------+------------+----------------+-----------------------+

Addresses

+----+-------------+---------------------------+------------+--------------+-------+----------+
| id | customer_id | street                    | city       | state        | zip   | type     |
+----+-------------+---------------------------+------------+--------------+-------+----------+
| 10 |        1001 | 3183 Moore Avenue         | Euless     | Texas        | 76036 | SHIPPING |
| 11 |        1001 | 2389 Hidden Valley Road   | Harrisburg | Pennsylvania | 17116 | BILLING  |
| 12 |        1002 | 281 Riverside Drive       | Augusta    | Georgia      | 30901 | BILLING  |
| 13 |        1003 | 3787 Brownton Road        | Columbus   | Mississippi  | 39701 | SHIPPING |
| 14 |        1003 | 2458 Lost Creek Road      | Bethlehem  | Pennsylvania | 18018 | SHIPPING |
| 15 |        1003 | 4800 Simpson Square       | Hillsdale  | Oklahoma     | 73743 | BILLING  |
| 16 |        1004 | 1289 University Hill Road | Canehill   | Arkansas     | 72717 | LIVING   |
+----+-------------+---------------------------+------------+--------------+-------+----------+

Output Elasticsearch index

"hits": [
  {
    "_index": "customers_with_addresses",
    "_type": "_doc",
    "_id": "1",
    "_score": 1.3278645,
    "_source": {
      "first_name": "Sally",
      "last_name": "Thomas",
      "email": "sally.thomas@acme.com",
      "addresses": [{
        "street": "3183 Moore Avenue",
        "city": "Euless",
        "state": "Texas",
        "zip": "76036",
        "type": "SHIPPING"
      }, {
        "street": "2389 Hidden Valley Road",
        "city": "Harrisburg",
        "state": "Pennsylvania",
        "zip": "17116",
        "type": "BILLING"
      }]
    }
  }, ….

The table data comes from Debezium topics. Am I right in thinking that I need some Java in the middle to join the streams, output the result to a new topic, and then sink that into Elasticsearch?
Does anyone have example code for this?
Thanks.

ktecyv1j #1

A while ago we built a demo and blog post about this very use case (streaming aggregates to Elasticsearch) on the Debezium blog.
One issue to keep in mind is that this solution (based on Kafka Streams, but I reckon the same applies to KSQL) is prone to exposing intermediate join results. E.g. say you insert a customer and 10 addresses in a single transaction. The stream join approach might first produce an aggregate of the customer and the first five addresses, and shortly afterwards the complete aggregate with all 10 addresses. For your specific use case this may or may not be desirable. I also remember that handling deletions is not trivial (e.g. if you delete one of the 10 addresses, you have to produce the aggregate again with the remaining nine, which otherwise may not have been touched at all).
Another pattern to consider is the outbox pattern, where you essentially produce an explicit event with the pre-computed aggregate from within the application itself. I.e. it requires a bit of help from the application, but it avoids the subtleties of producing the join result after the fact.
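
As a rough illustration only (this is not from the blog post; the Customer and Address types and their getters are hypothetical), the application-side part of the outbox pattern might look something like this, building the complete aggregate payload up front, which the application would then write to an outbox table in the same transaction and Debezium would stream on as-is:

import java.util.List;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class CustomerAggregateEventBuilder {

    private final ObjectMapper mapper = new ObjectMapper();

    // Build the complete "customer with addresses" document up front, so that
    // downstream consumers (e.g. the Elasticsearch sink) never see a partial join.
    public String buildPayload(Customer customer, List<Address> addresses) throws Exception {
        ObjectNode event = mapper.createObjectNode();
        event.put("first_name", customer.getFirstName());
        event.put("last_name", customer.getLastName());
        event.put("email", customer.getEmail());
        ArrayNode addressArray = event.putArray("addresses");
        for (Address a : addresses) {
            addressArray.add(mapper.createObjectNode()
                    .put("street", a.getStreet())
                    .put("city", a.getCity())
                    .put("state", a.getState())
                    .put("zip", a.getZip())
                    .put("type", a.getType()));
        }
        return mapper.writeValueAsString(event);
    }
}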

x0fgdtte #2

Depending on how strict your requirement is for nesting multiple addresses within a single customer node, you can do this in KSQL (which is built on top of Kafka Streams).
Populate some test data into Kafka (in your case this is already done via Debezium):

$ curl -s "https://api.mockaroo.com/api/ffa9ff20?count=10&key=ff7856d0" | kafkacat -b localhost:9092 -t addresses -P

$ curl -s "https://api.mockaroo.com/api/9b868890?count=4&key=ff7856d0" | kafkacat -b localhost:9092 -t customers -P

Fire up KSQL and start by inspecting the data:

ksql> PRINT 'addresses' FROM BEGINNING ;
Format:JSON
{"ROWTIME":1558519823351,"ROWKEY":"null","id":1,"customer_id":1004,"street":"8 Moulton Center","city":"Bronx","state":"New York","zip":"10474","type":"BILLING"}
{"ROWTIME":1558519823351,"ROWKEY":"null","id":2,"customer_id":1001,"street":"5 Hollow Ridge Alley","city":"Washington","state":"District of Columbia","zip":"20016","type":"LIVING"}
{"ROWTIME":1558519823351,"ROWKEY":"null","id":3,"customer_id":1000,"street":"58 Maryland Point","city":"Greensboro","state":"North Carolina","zip":"27404","type":"LIVING"}
{"ROWTIME":1558519823351,"ROWKEY":"null","id":4,"customer_id":1002,"street":"55795 Derek Avenue","city":"Temple","state":"Texas","zip":"76505","type":"LIVING"}
{"ROWTIME":1558519823351,"ROWKEY":"null","id":5,"customer_id":1002,"street":"164 Continental Plaza","city":"Modesto","state":"California","zip":"95354","type":"SHIPPING"}
{"ROWTIME":1558519823351,"ROWKEY":"null","id":6,"customer_id":1004,"street":"6 Miller Road","city":"Louisville","state":"Kentucky","zip":"40205","type":"BILLING"}
{"ROWTIME":1558519823351,"ROWKEY":"null","id":7,"customer_id":1003,"street":"97 Shasta Place","city":"Pittsburgh","state":"Pennsylvania","zip":"15286","type":"BILLING"}
{"ROWTIME":1558519823351,"ROWKEY":"null","id":8,"customer_id":1000,"street":"36 Warbler Circle","city":"Memphis","state":"Tennessee","zip":"38109","type":"SHIPPING"}
{"ROWTIME":1558519823351,"ROWKEY":"null","id":9,"customer_id":1001,"street":"890 Eagan Circle","city":"Saint Paul","state":"Minnesota","zip":"55103","type":"SHIPPING"}
{"ROWTIME":1558519823354,"ROWKEY":"null","id":10,"customer_id":1000,"street":"8 Judy Terrace","city":"Washington","state":"District of Columbia","zip":"20456","type":"SHIPPING"}
^C
Topic printing ceased

ksql>
ksql> PRINT 'customers' FROM BEGINNING;
Format:JSON
{"ROWTIME":1558519852363,"ROWKEY":"null","id":1001,"first_name":"Jolee","last_name":"Handasyde","email":"jhandasyde0@nhs.uk"}
{"ROWTIME":1558519852363,"ROWKEY":"null","id":1002,"first_name":"Rebeca","last_name":"Kerrod","email":"rkerrod1@sourceforge.net"}
{"ROWTIME":1558519852363,"ROWKEY":"null","id":1003,"first_name":"Bobette","last_name":"Brumble","email":"bbrumble2@cdc.gov"}
{"ROWTIME":1558519852368,"ROWKEY":"null","id":1004,"first_name":"Royal","last_name":"De Biaggi","email":"rdebiaggi3@opera.com"}

Now we declare a STREAM (Kafka topic + schema) over the data so that we can manipulate it further:

ksql> CREATE STREAM addresses_RAW (ID INT, CUSTOMER_ID INT, STREET VARCHAR, CITY VARCHAR, STATE VARCHAR, ZIP VARCHAR, TYPE VARCHAR) WITH (KAFKA_TOPIC='addresses', VALUE_FORMAT='JSON');

 Message
----------------
 Stream created
----------------

ksql> CREATE STREAM customers_RAW (ID INT, FIRST_NAME VARCHAR, LAST_NAME VARCHAR, EMAIL VARCHAR) WITH (KAFKA_TOPIC='customers', VALUE_FORMAT='JSON');

 Message
----------------
 Stream created
----------------

We're going to model customers as a TABLE, and for that the Kafka messages need to be keyed correctly (at the moment they have null keys, as you can see from "ROWKEY":"null" in the PRINT output above). You can configure Debezium to set the message key, so this step may not be necessary for you in KSQL:

ksql> CREATE STREAM CUSTOMERS_KEYED WITH (PARTITIONS=1) AS SELECT * FROM CUSTOMERS_RAW PARTITION BY ID;

 Message
----------------------------
 Stream created and running
----------------------------

Now we declare a TABLE (state for a given key, instantiated from a Kafka topic + schema):

ksql> CREATE TABLE CUSTOMER (ID INT, FIRST_NAME VARCHAR, LAST_NAME VARCHAR, EMAIL VARCHAR) WITH (KAFKA_TOPIC='CUSTOMERS_KEYED', VALUE_FORMAT='JSON', KEY='ID');

 Message
---------------
 Table created
---------------

Now we can join the data:

ksql> CREATE STREAM customers_with_addresses AS 
      SELECT CUSTOMER_ID, 
             FIRST_NAME + ' ' + LAST_NAME AS FULL_NAME, 
             FIRST_NAME, 
             LAST_NAME, 
             TYPE AS ADDRESS_TYPE, 
             STREET, 
             CITY, 
             STATE, 
             ZIP 
        FROM ADDRESSES_RAW A 
             INNER JOIN CUSTOMER C 
             ON A.CUSTOMER_ID = C.ID;

 Message
----------------------------
 Stream created and running
----------------------------

This creates a new KSQL stream, which in turn populates a new Kafka topic.

ksql> SHOW STREAMS;

 Stream Name                              | Kafka Topic                          | Format
------------------------------------------------------------------------------------------
 CUSTOMERS_KEYED                          | CUSTOMERS_KEYED                      | JSON
 ADDRESSES_RAW                            | addresses                            | JSON
 CUSTOMERS_RAW                            | customers                            | JSON
 CUSTOMERS_WITH_ADDRESSES                 | CUSTOMERS_WITH_ADDRESSES             | JSON

The stream has a schema:

ksql> DESCRIBE CUSTOMERS_WITH_ADDRESSES;

Name                 : CUSTOMERS_WITH_ADDRESSES
 Field        | Type
------------------------------------------
 ROWTIME      | BIGINT           (system)
 ROWKEY       | VARCHAR(STRING)  (system)
 CUSTOMER_ID  | INTEGER          (key)
 FULL_NAME    | VARCHAR(STRING)
 FIRST_NAME   | VARCHAR(STRING)
 ADDRESS_TYPE | VARCHAR(STRING)
 LAST_NAME    | VARCHAR(STRING)
 STREET       | VARCHAR(STRING)
 CITY         | VARCHAR(STRING)
 STATE        | VARCHAR(STRING)
 ZIP          | VARCHAR(STRING)
------------------------------------------
For runtime statistics and query details run: DESCRIBE EXTENDED <Stream,Table>;

We can query the stream:

ksql> SELECT * FROM CUSTOMERS_WITH_ADDRESSES WHERE CUSTOMER_ID=1002;
1558519823351 | 1002 | 1002 | Rebeca Kerrod | Rebeca | LIVING | Kerrod | 55795 Derek Avenue | Temple | Texas | 76505
1558519823351 | 1002 | 1002 | Rebeca Kerrod | Rebeca | SHIPPING | Kerrod | 164 Continental Plaza | Modesto | California | 95354

We can also stream it to Elasticsearch using Kafka Connect:

curl -i -X POST -H "Accept:application/json" \
    -H  "Content-Type:application/json" http://localhost:8083/connectors/ \
    -d '{
      "name": "sink-elastic-customers_with_addresses-00",
      "config": {
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "topics": "CUSTOMERS_WITH_ADDRESSES",
        "connection.url": "http://elasticsearch:9200",
        "type.name": "type.name=kafkaconnect",
        "key.ignore": "true",
        "schema.ignore": "true",
        "key.converter": "org.apache.kafka.connect.storage.StringConverter",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "value.converter.schemas.enable": "false"
      }
    }'

Result:

$ curl -s http://localhost:9200/customers_with_addresses/_search | jq '.hits.hits[0]'
{
  "_index": "customers_with_addresses",
  "_type": "type.name=kafkaconnect",
  "_id": "CUSTOMERS_WITH_ADDRESSES+0+2",
  "_score": 1,
  "_source": {
    "ZIP": "76505",
    "CITY": "Temple",
    "ADDRESS_TYPE": "LIVING",
    "CUSTOMER_ID": 1002,
    "FULL_NAME": "Rebeca Kerrod",
    "STATE": "Texas",
    "STREET": "55795 Derek Avenue",
    "LAST_NAME": "Kerrod",
    "FIRST_NAME": "Rebeca"
  }
}
lokaqttq #3

Yes, you can implement the solution in Java using the Kafka Streams API in the following way:
1. Consume the topics as streams.
2. Aggregate the address stream into a list keyed by customer id and convert the stream into a table.
3. Join the customer stream with the address table.
Below is a sample (assuming the data is consumed in JSON format):

import java.util.ArrayList;
import java.util.List;

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

// stringSerde and jsonNodeSerde are assumed to be configured elsewhere
// (e.g. Serdes.String() and a Jackson-based JsonNode serde).
StreamsBuilder builder = new StreamsBuilder();
ObjectMapper mapper = new ObjectMapper();

KStream<String, JsonNode> customers = builder.stream("customer", Consumed.with(stringSerde, jsonNodeSerde));
KStream<String, JsonNode> addresses = builder.stream("address", Consumed.with(stringSerde, jsonNodeSerde));

// Select the customer id as the key in order to join with the addresses.
KStream<String, JsonNode> customerRekeyed = customers.selectKey((key, value) -> value.get("id").asText());

// Select customer_id as the key and aggregate each customer's addresses into a JSON array.
KTable<String, JsonNode> addressTable = addresses
        .selectKey((key, value) -> value.get("customer_id").asText())
        .groupByKey()
        .aggregate(mapper::createArrayNode,                                       // initializer: empty array
                   (key, value, aggregate) -> ((ArrayNode) aggregate).add(value), // adder: append the address
                   Materialized.with(stringSerde, jsonNodeSerde));

// Join the customer stream with the aggregated address table.
KStream<String, JsonNode> customerAddressStream = customerRekeyed.leftJoin(addressTable,
        (customer, addressArray) -> {
            ObjectNode result = ((ObjectNode) customer).deepCopy();
            List<JsonNode> addressList = new ArrayList<>();
            if (addressArray != null) {                 // left join: a customer may have no addresses yet
                ((ArrayNode) addressArray).elements().forEachRemaining(addressList::add);
            }
            result.putArray("addresses").addAll(addressList);
            return result;
        },
        Joined.with(stringSerde, jsonNodeSerde, jsonNodeSerde));
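
To complete the picture (this part is not in the sample above, and the output topic name, application id and bootstrap servers are assumptions), you could then write the joined stream to a new topic and start the topology; the Elasticsearch sink connector shown in the other answer can consume that topic:

// Additional imports: java.util.Properties, org.apache.kafka.streams.KafkaStreams,
// org.apache.kafka.streams.StreamsConfig, org.apache.kafka.streams.kstream.Produced.

// Write the joined records to a new topic for the Elasticsearch sink connector to pick up.
customerAddressStream.to("customers_with_addresses", Produced.with(stringSerde, jsonNodeSerde));

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "customer-address-join");   // assumed application id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");       // assumed broker address

KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();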

You can check the details of all the join types here:
https://docs.confluent.io/current/streams/developer-guide/dsl-api.html#joining
