flume: org.apache.avro.ipc.NettyServer: Unexpected exception from downstream: java.nio.channels.ClosedChannelException

iyfamqjs posted on 2021-06-02 in Hadoop

How can I fix this problem? When I configure the Flume server, it reports the following errors.

2014-10-20 22:24:01,480 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /ip:57063 => /ip:34001] OPEN
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /ip:57063 => /ip:34001] BOUND: /ip:34001
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /10.182.4.70:57063 => /ip:34001] CONNECTED: /ip:57063
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /ip:57063 :> /ip:34001] DISCONNECTED
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /ip:57063 :> /10.182.4.79:34001] UNBOUND
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /10.182.4.70:57063 :> /10.182.4.79:34001] CLOSED
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: Connection to /10.182.4.70:57063 disconnected.
2014-10-20 22:24:01,481 WARN org.apache.avro.ipc.NettyServer: Unexpected exception from downstream.
java.nio.channels.ClosedChannelException
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:673)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:400)
        at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:120)
        at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:59)
        at org.jboss.netty.channel.Channels.write(Channels.java:733)
        at org.jboss.netty.channel.Channels.write(Channels.java:694)
        at org.jboss.netty.handler.codec.compression.ZlibEncoder.finishEncode(ZlibEncoder.java:380)
        at org.jboss.netty.handler.codec.compression.ZlibEncoder.handleDownstream(ZlibEncoder.java:316)
        at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:55)
        at org.jboss.netty.channel.Channels.close(Channels.java:821)

The flume.conf is as follows.

instance_35001.channels.channel1.checkpointDir=editlog/checkpoint
instance_35001.channels.channel1.dataDirs=editlog/data
instance_35001.channels.channel1.capacity=200000000
instance_35001.channels.channel1.transactionCapacity=1000000
instance_35001.channels.channel1.checkpointInterval=10000

instance_35001.sources=source1
instance_35001.sources.source1.type=avro
instance_35001.sources.source1.bind=0.0.0.0
instance_35001.sources.source1.port=34001
instance_35001.sources.source1.compression-type=deflate
instance_35001.sources.source1.channels=channel1

instance_35001.sources.source1.interceptors = inter1
instance_35001.sources.source1.interceptors.inter1.type = host
instance_35001.sources.source1.interceptors.inter1.hostHeader = servername

instance_35001.sinks=sink1

instance_35001.sinks.sink1.type=hdfs
instance_35001.sinks.sink1.hdfs.path=hdfs://address:5000/user/admin/%{appname}/%Y/%m/%d/
instance_35001.sinks.sink1.hdfs.filePrefix=%{appname}-%{hostname}-%{servername}.34001
instance_35001.sinks.sink1.hdfs.rollInterval=0
instance_35001.sinks.sink1.hdfs.rollCount=0
instance_35001.sinks.sink1.hdfs.rollSize=21521880492

The environment is CDH5 and the sink is HDFS. The logs look normal most of the time, but Flume is very slow. Please help. Thanks.


gpnt7bae1#

One thing I see here is that your roll size is significantly larger than your channel capacity. So until a file is rolled, everything is held in the channel, which fills up after a point and starts throwing errors.

instance_35001.channels.channel1.capacity=200000000

instance_35001.sinks.sink1.hdfs.rollSize=21521880492

Keep the roll size around the block size configured for HDFS. The default batch size of the HDFS sink is also only 100; change it to a larger value and see how it behaves. For example, see the sketch below.
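A minimal sketch of what that could look like in the posted config. The values are illustrative assumptions, not from the original post: 134217728 bytes corresponds to the default 128 MB HDFS block size, and 1000 is just an example of a batch size larger than the default 100.

# illustrative values only, adjust to your block size and event rate
instance_35001.sinks.sink1.hdfs.rollSize=134217728
instance_35001.sinks.sink1.hdfs.batchSize=1000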


jmp7cifd2#

capacity is measured in number of events, while rollSize is measured in actual bytes, so the two are hard to relate directly.
However, you want the roll size to be close to the HDFS block size (128 MB by default).

rollSize = 21521880492 bytes -> ~21 GB
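As a rough worked example (assuming the default 128 MB block size mentioned above, which is an assumption rather than something stated in the post), the target value is 128 * 1024 * 1024 = 134217728 bytes, roughly 160 times smaller than the configured 21521880492:

# illustrative value: assumes the default 128 MB HDFS block size
instance_35001.sinks.sink1.hdfs.rollSize=134217728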
