Unable to start the HDFS connector

Asked by iqxoj9l9 on 2021-06-07 in Kafka

I downloaded Kafka Connect from http://docs.confluent.io/2.0.0/quickstart.html#quickstart and I am trying to run the HDFS connector. Here is my setup:

connect-standalone.properties:

bootstrap.servers=lvpi00658.s:9092,lvpi00659.s:9092,lvpi00660.s:9092

key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter

internal.key.converter=org.apache.kafka.connect.storage.StringConverter
internal.value.converter=org.apache.kafka.connect.storage.StringConverter

offset.storage.file.filename=/tmp/connect.offsets

# Flush much faster than normal, which is useful for testing/debugging

offset.flush.interval.ms=10000

key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer

and quickstart-hdfs.properties:

name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=eightball-stuff11
hdfs.url=hdfs://localhost:9000
flush.size=3

I run the HDFS connector like this:

cd /home/fclvappi005561/confluent-3.0.0/bin
./connect-standalone ../etc/kafka-connect-hdfs/connect-standalone.properties ../etc/kafka-connect-hdfs/quickstart-hdfs.properties

but I get an error:
[2016-09-12 17:19:28,039] INFO Couldn't start HdfsSinkConnector: (io.confluent.connect.hdfs.HdfsSinkTask:72)
org.apache.kafka.connect.errors.ConnectException: org.apache.hadoop.security.AccessControlException: Permission denied: user=fclvappi005561, access=WRITE, inode="/topics":root:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1682)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1665)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3900)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:978)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
    at io.confluent.connect.hdfs.DataWriter.<init>(DataWriter.java:202)
    at io.confluent.connect.hdfs.HdfsSinkTask.start(HdfsSinkTask.java:64)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:207)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:139)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=fclvappi005561, access=WRITE, inode="/topics":root:supergroup:drwxr-xr-x
    [... same NameNode permission-check frames as above ...]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2755)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1817)
    at io.confluent.connect.hdfs.storage.HdfsStorage.mkdirs(HdfsStorage.java:61)
    at io.confluent.connect.hdfs.DataWriter.createDir(DataWriter.java:369)
    at io.confluent.connect.hdfs.DataWriter.<init>(DataWriter.java:170)
    ... 10 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=fclvappi005561, access=WRITE, inode="/topics":root:supergroup:drwxr-xr-x
    [... same NameNode permission-check frames as above ...]
    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy47.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy48.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
    ... 20 more
I should mention that I am running a Hadoop Docker image locally on 127.0.0.1:

docker run -d -p 9000:9000 sequenceiq/hadoop-docker:2.7.1

What is this permission denied error I am seeing? The bootstrap.servers mentioned above are on a different host.
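A quick way to see what the NameNode is complaining about is to list the target directory's ownership from inside the Hadoop container; the container ID and the /usr/local/hadoop path below are assumptions based on the sequenceiq image, so adjust them to your setup:

docker ps                                    # assumption: the sequenceiq/hadoop-docker container is still running; note its ID
docker exec -it <container-id> bash          # open a shell inside the container (<container-id> is a placeholder)
/usr/local/hadoop/bin/hdfs dfs -ls /         # assumed install path; should show /topics owned by root:supergroup with drwxr-xr-x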


fcwjkofz #1

The permission denied error comes from the HDFS side: the user the Connect worker runs as (fclvappi005561) does not have write access to the HDFS directory "/topics", which is owned by root:supergroup with mode drwxr-xr-x, so only root can write to it.
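A minimal way to get past this for a local test, sketched under the assumption that you are using the sequenceiq container from the question and that HDFS is running with simple (non-Kerberos) authentication, is to either open up the directory or run the worker under the directory owner's identity:

# Option 1: grant write access to /topics inside the Hadoop container.
# <container-id> and the /usr/local/hadoop path are assumptions for the sequenceiq image.
docker exec <container-id> /usr/local/hadoop/bin/hdfs dfs -mkdir -p /topics
docker exec <container-id> /usr/local/hadoop/bin/hdfs dfs -chmod 777 /topics   # or: -chown fclvappi005561 /topics

# Option 2: have the HDFS client identify itself as the directory owner (simple auth only).
export HADOOP_USER_NAME=root
./connect-standalone ../etc/kafka-connect-hdfs/connect-standalone.properties ../etc/kafka-connect-hdfs/quickstart-hdfs.properties

Option 2 works because, without Kerberos, the Hadoop client simply reports the HADOOP_USER_NAME value (or the local OS user) as its identity to the NameNode.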
