I have successfully started the flume-agent, but I cannot see any log files in HDFS. The path I used in twitter.conf is:
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://localhost:9000/user/flume/tweets/
Please help me get rid of this error so that I can see the data in HDFS.
o3imoua41#
If you have set your Hadoop home in .bashrc as
export HADOOP_HOME=<Path to your hadoop home>
then you do not need localhost:9000 in the path.
So the correct line should be
TwitterAgent.sinks.HDFS.hdfs.path = hdfs:///user/flume/tweets/
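The authority-less hdfs:/// URI works because the HDFS sink falls back to the default filesystem configured in Hadoop's core-site.xml. As a sketch, assuming a typical pseudo-distributed setup (your host and port may differ), that file would contain:

```xml
<!-- $HADOOP_HOME/etc/hadoop/core-site.xml (typical pseudo-distributed setup; -->
<!-- hdfs://localhost:9000 is an assumption, use whatever your cluster defines) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

With fs.defaultFS set this way, hdfs:///user/flume/tweets/ and hdfs://localhost:9000/user/flume/tweets/ resolve to the same location, so mismatched host/port values in the Flume config become the usual source of this error.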
Assuming your twitter.conf looks like the following, it should work:
# Naming the components on the current agent.
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS

# Describing/Configuring the source
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.consumerKey = Your OAuth consumer key
TwitterAgent.sources.Twitter.consumerSecret = Your OAuth consumer secret
TwitterAgent.sources.Twitter.accessToken = Your OAuth consumer key access token
TwitterAgent.sources.Twitter.accessTokenSecret = Your OAuth consumer key access token secret
TwitterAgent.sources.Twitter.keywords = tutorials point, java, bigdata, mapreduce, mahout, hbase, nosql

# Describing/Configuring the sink
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs:///user/flume/tweets/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000

# Describing/Configuring the channel
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 100

# Binding the source and sink to the channel
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sinks.HDFS.channel = MemChannel
Your command, run from the Flume home directory, should look like:
bin/flume-ng agent --conf ./conf/ -f conf/twitter.conf -Dflume.root.logger=DEBUG,console -n TwitterAgent
You can check Tutorials Point for a better understanding. Note: you can debug by looking up the exact error in flume.log in the Flume log directory.
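As a quick sanity check (hypothetical paths: flume.log commonly sits under the Flume home's logs/ directory, and the HDFS path is the one from twitter.conf above), you can inspect the agent log and the sink output like this:

```
# Look for the root cause in the agent log (log location is an assumption)
grep -iE "error|exception" $FLUME_HOME/logs/flume.log | tail -n 20

# Verify that events are actually landing in HDFS
hdfs dfs -ls /user/flume/tweets/
```

If the directory listing is empty while the agent is running, the error in flume.log (often an authentication failure from the Twitter source or a connection refused from the namenode) will point at which half of the pipeline is failing.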