Kafka Connect and HDFS in Docker

ttcibm8c · posted 2021-06-01 in Hadoop

I am using the Kafka Connect HDFS sink connector together with Hadoop (for HDFS) in docker-compose.
Hadoop (namenode and datanode) seems to be working correctly.
But the Kafka Connect sink fails with this error:

ERROR Recovery failed at state RECOVERY_PARTITION_PAUSED 
(io.confluent.connect.hdfs.TopicPartitionWriter:277) 
org.apache.kafka.connect.errors.DataException: 
Error creating writer for log file hdfs://namenode:8020/logs/MyTopic/0/log

For reference, here are the Hadoop services in my docker-compose.yml:

namenode:
  image: uhopper/hadoop-namenode:2.8.1
  hostname: namenode
  container_name: namenode
  ports:
    - "50070:50070"
  networks:
    default:
    fides-webapp:
      aliases:
        - "hadoop"
  volumes:
    - namenode:/hadoop/dfs/name
  env_file:
    - ./hadoop.env
  environment:
    - CLUSTER_NAME=hadoop-cluster

datanode1:
  image: uhopper/hadoop-datanode:2.8.1
  hostname: datanode1
  container_name: datanode1
  networks:
    default:
    fides-webapp:
      aliases:
        - "hadoop"
  volumes:
    - datanode1:/hadoop/dfs/data
  env_file:
    - ./hadoop.env

And my Kafka Connect connector properties file:

name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=MyTopic
hdfs.url=hdfs://namenode:8020
flush.size=3

Edit:
I added an environment variable to the kafka-connect service in the docker-compose file so that it knows the cluster name (a CLUSTER_NAME variable), roughly as sketched below.
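A minimal sketch of what that addition might look like; the kafka-connect service name, image and env_file are assumptions (the question only shows the Hadoop services), only CLUSTER_NAME and the shared network come from the post:

kafka-connect:
  image: confluentinc/cp-kafka-connect:5.5.1  # assumed image and version
  hostname: kafka-connect
  container_name: kafka-connect
  networks:
    default:
    fides-webapp:
  environment:
    - CLUSTER_NAME=hadoop-cluster  # cluster name expected by the Hadoop containers
  env_file:
    - ./hadoop.env  # assumed: reuse the same Hadoop settings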
The error is now different (so one issue seems to be fixed):

INFO Starting commit and rotation for topic partition scoring-topic-0 with start offsets {partition=0=0} and end offsets {partition=0=2} 
 (io.confluent.connect.hdfs.TopicPartitionWriter:368)
ERROR Exception on topic partition MyTopic-0: (io.confluent.connect.hdfs.TopicPartitionWriter:403)
org.apache.kafka.connect.errors.DataException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
File /topics/+tmp/MyTopic/partition=0/bc4cf075-ccfa-4338-9672-5462cc6c3404_tmp.avro 
could only be replicated to 0 nodes instead of minReplication (=1).  
There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

Edit 2:
The hadoop.env file is:

CORE_CONF_fs_defaultFS=hdfs://namenode:8020

# Configure default BlockSize and Replication for local
# data. Keep it small for experimentation.
HDFS_CONF_dfs_blocksize=1m

YARN_CONF_yarn_log___aggregation___enable=true
YARN_CONF_yarn_resourcemanager_recovery_enabled=true
YARN_CONF_yarn_resourcemanager_store_class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
YARN_CONF_yarn_resourcemanager_fs_state___store_uri=/rmstate
YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs

YARN_CONF_yarn_log_server_url=http://historyserver:8188/applicationhistory/logs/
YARN_CONF_yarn_timeline___service_enabled=true
YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
YARN_CONF_yarn_resourcemanager_system___metrics___publisher_enabled=true

YARN_CONF_yarn_resourcemanager_hostname=resourcemanager
YARN_CONF_yarn_timeline___service_hostname=historyserver
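As a side note, the uhopper Hadoop images are expected to render these variables into the corresponding *-site.xml files at container startup (CORE_CONF_* into core-site.xml, HDFS_CONF_* into hdfs-site.xml, triple underscores becoming dashes). Under that assumption, the generated files would look roughly like this (a sketch, not copied from the containers):

<!-- core-site.xml generated from CORE_CONF_* (sketch) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode:8020</value>
  </property>
</configuration>

<!-- hdfs-site.xml generated from HDFS_CONF_* (sketch) -->
<configuration>
  <property>
    <name>dfs.blocksize</name>
    <value>1m</value>
  </property>
</configuration>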
Answer:

Finally, as @cricket_007 noted, I needed to configure hadoop.conf.dir.
That directory has to contain hdfs-site.xml.
Since every service runs in its own container, I needed to create a named volume to share the configuration files between the kafka-connect service and the namenode service.
To do this, I add a named volume to my docker-compose.yml:

volumes:
  hadoopconf:

Then in the namenode service I add:

volumes:
  - hadoopconf:/etc/hadoop

And in the kafka-connect service:

volumes:
  - hadoopconf:/usr/local/hadoop-conf

Finally, I set hadoop.conf.dir in my HDFS sink properties file to /usr/local/hadoop-conf.
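Putting it together, the sink connector properties file then looks roughly like this (only hadoop.conf.dir is new compared to the configuration quoted in the question):

name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=MyTopic
hdfs.url=hdfs://namenode:8020
hadoop.conf.dir=/usr/local/hadoop-conf
flush.size=3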
