Hadoop 2.2.0 job stuck in ACCEPTED state in the job tracker

6za6bjd0  posted on 2021-06-04  in  Hadoop

I am trying to run a simple Hadoop setup with two machines (1 manager and 1 worker). A pseudo-distributed cluster runs fine on each machine individually, but when I try to turn this into a real cluster, the job starts and then gets stuck in the ACCEPTED state in the job tracker. It never starts map/reduce (it does not even print "map 0% reduce 0%"; it only prints the application id and then nothing).
I have tried modifying the configuration files to use appropriate amounts of memory, but the result is always the same. Below are my configuration files, along with the debug log from an attempt to run wordcount.
I removed some of the debug lines because they repeat "cloud1 sending x" / "cloud1 receiving x" many times; otherwise the log would be too long to post.
The specs of the two machines are:
comp1 (manager): 8-core Xeon, 16 GB RAM, 2 TB HDD
comp2 (worker): 6-core Xeon, 8 GB RAM, 2 TB HDD
core-site.xml

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://comp1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/tmp</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>7168</value>
    </property>
    <property>
        <name>map.reduce.memory.mb</name>
        <value>14336</value>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>5734</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>11468</value>
    </property>
</configuration>
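A quick mechanical check of the property names in a `*-site.xml` fragment can catch typos before they silently become no-op settings (Hadoop ignores property names it does not recognize). This is a minimal sketch, not a Hadoop API: the XML string below inlines the mapred-site.xml content from above, and the prefix check is only a heuristic for spotting misspelled names, not an official validation rule.

```python
# Minimal sketch: parse a Hadoop *-site.xml fragment and list its properties.
# The XML mirrors the mapred-site.xml above; the "mapreduce." prefix check is
# just a heuristic for spotting typos, not something Hadoop itself enforces.
import xml.etree.ElementTree as ET

MAPRED_SITE = """
<configuration>
    <property><name>mapreduce.framework.name</name><value>yarn</value></property>
    <property><name>mapreduce.map.memory.mb</name><value>7168</value></property>
    <property><name>map.reduce.memory.mb</name><value>14336</value></property>
    <property><name>mapreduce.map.java.opts</name><value>5734</value></property>
    <property><name>mapreduce.reduce.java.opts</name><value>11468</value></property>
</configuration>
"""

def load_properties(xml_text):
    """Return {name: value} for every <property> element in the fragment."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

props = load_properties(MAPRED_SITE)
for name, value in props.items():
    flag = "" if name.startswith("mapreduce.") else "  <-- unexpected prefix?"
    print(f"{name} = {value}{flag}")
```

Run against the file above, the heuristic flags `map.reduce.memory.mb`, whose name does not match the `mapreduce.*` pattern of its siblings.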

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>biocloud1:8025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>biocloud1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>biocloud1:8050</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>14336</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>7168</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>14336</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.resource.mb</name>
        <value>9557</value>
    </property>
    <property>
        <name>yarn.app.mappreduce.am.command-opts</name>
        <value>11468</value>
    </property>
</configuration>
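One way to reason about the memory values in the yarn-site.xml above: the default schedulers normalize every container request by rounding it up to a multiple of `yarn.scheduler.minimum-allocation-mb` (capped at the maximum). The sketch below works through that arithmetic with the values from the file; the rounding rule is an assumption about the scheduler's normalization behavior, not output from a real cluster.

```python
# Sketch of YARN container-request normalization (assumed behavior of the
# default schedulers: round each request up to a multiple of the minimum
# allocation, capped at the maximum). Values taken from yarn-site.xml above.
import math

MIN_ALLOC_MB = 7168    # yarn.scheduler.minimum-allocation-mb
MAX_ALLOC_MB = 14336   # yarn.scheduler.maximum-allocation-mb
NODE_MB = 14336        # yarn.nodemanager.resource.memory-mb

def normalize(request_mb):
    """Round a memory request up to the allocation granularity."""
    rounded = math.ceil(request_mb / MIN_ALLOC_MB) * MIN_ALLOC_MB
    return min(rounded, MAX_ALLOC_MB)

am_request = 9557                      # yarn.app.mapreduce.am.resource.mb
am_container = normalize(am_request)   # 9557 rounds up to 2 * 7168 = 14336

print(f"AM request of {am_request} MB is granted a {am_container} MB container")
print(f"Containers of that size that fit on one node: {NODE_MB // am_container}")
```

Under this rounding rule, the 9557 MB ApplicationMaster request becomes a 14336 MB container, which by itself fills a node's entire 14336 MB capacity.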

And the debug log:

14/07/16 14:52:11 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
14/07/16 14:52:11 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
14/07/16 14:52:11 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
14/07/16 14:52:11 DEBUG security.Groups:  Creating new Groups object
14/07/16 14:52:11 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
14/07/16 14:52:11 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
14/07/16 14:52:11 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
14/07/16 14:52:11 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
14/07/16 14:52:11 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000
14/07/16 14:52:11 DEBUG security.UserGroupInformation: hadoop login
14/07/16 14:52:11 DEBUG security.UserGroupInformation: hadoop login commit
14/07/16 14:52:11 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: cloud1
14/07/16 14:52:11 DEBUG security.UserGroupInformation: UGI loginUser:cloud1 (auth:SIMPLE)
14/07/16 14:52:12 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
14/07/16 14:52:12 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
14/07/16 14:52:12 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
14/07/16 14:52:12 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path = 
14/07/16 14:52:12 DEBUG impl.MetricsSystemImpl: StartupProgress, NameNode startup progress
14/07/16 14:52:12 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
14/07/16 14:52:12 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@7abc0bd
14/07/16 14:52:12 DEBUG hdfs.BlockReaderLocal: Both short-circuit local reads and UNIX domain socket are disabled.
14/07/16 14:52:12 DEBUG util.Shell: setsid exited with exit code 0
14/07/16 14:52:12 DEBUG security.UserGroupInformation: PrivilegedAction as:cloud1 (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.connect(Job.java:1233)
14/07/16 14:52:12 DEBUG mapreduce.Cluster: Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
14/07/16 14:52:12 DEBUG service.AbstractService: Service: org.apache.hadoop.mapred.ResourceMgrDelegate entered state INITED
14/07/16 14:52:12 DEBUG service.AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state INITED
14/07/16 14:52:12 DEBUG security.UserGroupInformation: PrivilegedAction as:cloud1 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.RMProxy.getProxy(RMProxy.java:63)
14/07/16 14:52:12 DEBUG ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
14/07/16 14:52:12 DEBUG ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
14/07/16 14:52:12 INFO client.RMProxy: Connecting to ResourceManager at cloud1/192.168.1.1:8050
14/07/16 14:52:12 DEBUG service.AbstractService: Service org.apache.hadoop.yarn.client.api.impl.YarnClientImpl is started
14/07/16 14:52:12 DEBUG service.AbstractService: Service org.apache.hadoop.mapred.ResourceMgrDelegate is started
14/07/16 14:52:12 DEBUG security.UserGroupInformation: PrivilegedAction as:cloud1 (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:329)
14/07/16 14:52:12 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
14/07/16 14:52:12 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
14/07/16 14:52:12 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
14/07/16 14:52:12 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path = 
14/07/16 14:52:12 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
14/07/16 14:52:12 DEBUG hdfs.BlockReaderLocal: Both short-circuit local reads and UNIX domain socket are disabled.
14/07/16 14:52:12 DEBUG mapreduce.Cluster: Picked org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider
14/07/16 14:52:12 DEBUG security.UserGroupInformation: PrivilegedAction as:cloud1 (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Cluster.getFileSystem(Cluster.java:161)
14/07/16 14:52:12 DEBUG security.UserGroupInformation: PrivilegedAction as:cloud1 (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
14/07/16 14:52:12 DEBUG ipc.Client: The ping interval is 60000 ms.
14/07/16 14:52:12 DEBUG ipc.Client: Connecting to cloud1/192.168.1.1:9000
14/07/16 14:52:12 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1: starting, having connections 1
14/07/16 14:52:12 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #0
14/07/16 14:52:12 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #0
14/07/16 14:52:12 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 50ms
CUT OUT TO SAVE CHARACTERS
14/07/16 14:52:12 DEBUG ipc.Client: The ping interval is 60000 ms.
14/07/16 14:52:12 DEBUG ipc.Client: Connecting to cloud1/192.168.1.1:8050
14/07/16 14:52:12 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:8050 from cloud1: starting, having connections 2
14/07/16 14:52:12 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:8050 from cloud1 sending #3
14/07/16 14:52:12 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:8050 from cloud1 got value #3
14/07/16 14:52:12 DEBUG ipc.ProtobufRpcEngine: Call: getNewApplication took 5ms
14/07/16 14:52:12 DEBUG mapreduce.JobSubmitter: Configuring job job_1405537305326_0003 with /tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003 as the submit dir
14/07/16 14:52:12 DEBUG mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:[hdfs://cloud1:9000]
14/07/16 14:52:12 DEBUG mapreduce.JobSubmitter: default FileSystem: hdfs://cloud1:9000
CUT OUT TO SAVE CHARACTERS
14/07/16 14:52:12 DEBUG ipc.ProtobufRpcEngine: Call: create took 8ms
14/07/16 14:52:12 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
14/07/16 14:52:12 DEBUG hdfs.LeaseRenewer: Lease renewer daemon for [DFSClient_NONMAPREDUCE_-110099006_1] with renew id 1 started
14/07/16 14:52:12 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
14/07/16 14:52:12 DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=0, src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.jar, bytesCurBlock=65024, blockSize=134217728, appendChunk=false
14/07/16 14:52:12 DEBUG hdfs.DFSClient: Queued packet 0
14/07/16 14:52:12 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
14/07/16 14:52:12 DEBUG hdfs.DFSClient: Allocating new block
14/07/16 14:52:12 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=1, src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=65024
14/07/16 14:52:12 DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=1, src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.jar, bytesCurBlock=130048, blockSize=134217728, appendChunk=false
14/07/16 14:52:12 DEBUG hdfs.DFSClient: Queued packet 1
14/07/16 14:52:12 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
14/07/16 14:52:12 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=2, src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=130048
14/07/16 14:52:12 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #9
14/07/16 14:52:12 DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=2, src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.jar, bytesCurBlock=195072, blockSize=134217728, appendChunk=false
14/07/16 14:52:12 DEBUG hdfs.DFSClient: Queued packet 2
14/07/16 14:52:12 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
14/07/16 14:52:12 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=3, src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=195072
14/07/16 14:52:12 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #9
14/07/16 14:52:12 DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 2ms
14/07/16 14:52:12 DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=3, src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.jar, bytesCurBlock=260096, blockSize=134217728, appendChunk=false
14/07/16 14:52:12 DEBUG hdfs.DFSClient: Queued packet 3
14/07/16 14:52:12 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
14/07/16 14:52:12 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=4, src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=260096
14/07/16 14:52:12 DEBUG hdfs.DFSClient: Queued packet 4
14/07/16 14:52:12 DEBUG hdfs.DFSClient: Queued packet 5
14/07/16 14:52:12 DEBUG hdfs.DFSClient: Waiting for ack for: 5
14/07/16 14:52:12 DEBUG hdfs.DFSClient: pipeline = 192.168.1.2:50010
14/07/16 14:52:12 DEBUG hdfs.DFSClient: Connecting to datanode 192.168.1.2:50010
14/07/16 14:52:12 DEBUG hdfs.DFSClient: Send buf size 131072
14/07/16 14:52:12 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #10
14/07/16 14:52:12 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #10
14/07/16 14:52:12 DEBUG ipc.ProtobufRpcEngine: Call: getServerDefaults took 2ms
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DataStreamer block BP-200129072-127.0.0.1-1404834731381:blk_1073741932_1108 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 65024
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DataStreamer block BP-200129072-127.0.0.1-1404834731381:blk_1073741932_1108 sending packet packet seqno:1 offsetInBlock:65024 lastPacketInBlock:false lastByteOffsetInBlock: 130048
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DataStreamer block BP-200129072-127.0.0.1-1404834731381:blk_1073741932_1108 sending packet packet seqno:2 offsetInBlock:130048 lastPacketInBlock:false lastByteOffsetInBlock: 195072
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DataStreamer block BP-200129072-127.0.0.1-1404834731381:blk_1073741932_1108 sending packet packet seqno:3 offsetInBlock:195072 lastPacketInBlock:false lastByteOffsetInBlock: 260096
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DataStreamer block BP-200129072-127.0.0.1-1404834731381:blk_1073741932_1108 sending packet packet seqno:4 offsetInBlock:260096 lastPacketInBlock:false lastByteOffsetInBlock: 270274
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient seqno: 2 status: SUCCESS downstreamAckTimeNanos: 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient seqno: 3 status: SUCCESS downstreamAckTimeNanos: 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient seqno: 4 status: SUCCESS downstreamAckTimeNanos: 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DataStreamer block BP-200129072-127.0.0.1-1404834731381:blk_1073741932_1108 sending packet packet seqno:5 offsetInBlock:270274 lastPacketInBlock:true lastByteOffsetInBlock: 270274
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient seqno: 5 status: SUCCESS downstreamAckTimeNanos: 0
CUT OUT TO SAVE CHARACTERS
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: getListing took 2ms
14/07/16 14:52:13 INFO input.FileInputFormat: Total input paths to process : 1
14/07/16 14:52:13 DEBUG input.FileInputFormat: Total # of splits: 1
14/07/16 14:52:13 DEBUG hdfs.DFSClient: /tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.split: masked=rw-r--r--
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #17
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #17
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: create took 6ms
14/07/16 14:52:13 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.split, chunkSize=516, chunksPerPacket=127, packetSize=65532
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #18
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #18
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: setPermission took 7ms
CUT OUT TO SAVE CHARACTERS
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.split, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Queued packet 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Queued packet 1
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Waiting for ack for: 1
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Allocating new block
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #20
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #20
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 3ms
14/07/16 14:52:13 DEBUG hdfs.DFSClient: pipeline = 192.168.1.2:50010
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Connecting to datanode 192.168.1.2:50010
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Send buf size 131072
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DataStreamer block BP-200129072-127.0.0.1-1404834731381:blk_1073741933_1109 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 101
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DataStreamer block BP-200129072-127.0.0.1-1404834731381:blk_1073741933_1109 sending packet packet seqno:1 offsetInBlock:101 lastPacketInBlock:true lastByteOffsetInBlock: 101
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Closing old block BP-200129072-127.0.0.1-1404834731381:blk_1073741933_1109
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #21
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #21
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: complete took 12ms
14/07/16 14:52:13 DEBUG hdfs.DFSClient: /tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.splitmetainfo: masked=rw-r--r--
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #22
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #22
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: create took 8ms
14/07/16 14:52:13 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.splitmetainfo, chunkSize=516, chunksPerPacket=127, packetSize=65532
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #23
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #23
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: setPermission took 7ms
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.splitmetainfo, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Queued packet 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Queued packet 1
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Waiting for ack for: 1
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Allocating new block
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #24
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #24
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 4ms
14/07/16 14:52:13 DEBUG hdfs.DFSClient: pipeline = 192.168.1.2:50010
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Connecting to datanode 192.168.1.2:50010
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Send buf size 131072
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DataStreamer block BP-200129072-127.0.0.1-1404834731381:blk_1073741934_1110 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 35
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DataStreamer block BP-200129072-127.0.0.1-1404834731381:blk_1073741934_1110 sending packet packet seqno:1 offsetInBlock:35 lastPacketInBlock:true lastByteOffsetInBlock: 35
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Closing old block BP-200129072-127.0.0.1-1404834731381:blk_1073741934_1110
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #25
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #25
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: complete took 6ms
14/07/16 14:52:13 INFO mapreduce.JobSubmitter: number of splits:1
14/07/16 14:52:13 DEBUG hdfs.DFSClient: /tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.xml: masked=rw-r--r--
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #26
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #26
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: create took 8ms
14/07/16 14:52:13 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.xml, chunkSize=516, chunksPerPacket=127, packetSize=65532
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #27
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #27
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: setPermission took 7ms

14/07/16 14:52:13 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.xml, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=0, src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.xml, bytesCurBlock=65024, blockSize=134217728, appendChunk=false
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Queued packet 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.xml, chunkSize=516, chunksPerPacket=127, packetSize=65532
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Allocating new block
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=1, src=/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003/job.xml, packetSize=65532, chunksPerPacket=127, bytesCurBlock=65024
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #28
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Queued packet 1
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Queued packet 2
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Waiting for ack for: 2
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #28
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 2ms
14/07/16 14:52:13 DEBUG hdfs.DFSClient: pipeline = 192.168.1.2:50010
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Connecting to datanode 192.168.1.2:50010
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Send buf size 131072
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DataStreamer block BP-200129072-127.0.0.1-1404834731381:blk_1073741935_1111 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 65024
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DataStreamer block BP-200129072-127.0.0.1-1404834731381:blk_1073741935_1111 sending packet packet seqno:1 offsetInBlock:65024 lastPacketInBlock:false lastByteOffsetInBlock: 66592
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DataStreamer block BP-200129072-127.0.0.1-1404834731381:blk_1073741935_1111 sending packet packet seqno:2 offsetInBlock:66592 lastPacketInBlock:true lastByteOffsetInBlock: 66592
14/07/16 14:52:13 DEBUG hdfs.DFSClient: DFSClient seqno: 2 status: SUCCESS downstreamAckTimeNanos: 0
14/07/16 14:52:13 DEBUG hdfs.DFSClient: Closing old block BP-200129072-127.0.0.1-1404834731381:blk_1073741935_1111
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #29
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #29
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: complete took 7ms
14/07/16 14:52:13 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1405537305326_0003
14/07/16 14:52:13 DEBUG mapred.ClientCache: Connecting to HistoryServer at: 0.0.0.0:10020
14/07/16 14:52:13 DEBUG ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
14/07/16 14:52:13 DEBUG mapred.ClientCache: Connected to HistoryServer at: 0.0.0.0:10020
14/07/16 14:52:13 DEBUG security.UserGroupInformation: PrivilegedAction as:cloud1 (auth:SIMPLE) from:org.apache.hadoop.mapred.ClientCache.instantiateHistoryProxy(ClientCache.java:92)
14/07/16 14:52:13 DEBUG ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.mapreduce.v2.api.HSClientProtocol
14/07/16 14:52:13 DEBUG mapred.YARNRunner: AppMaster capability = <memory:9557, vCores:1>
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #30
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #30
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 3ms
14/07/16 14:52:13 DEBUG mapred.YARNRunner: Creating setup context, jobSubmitDir url is scheme: "hdfs" host: "cloud1" port: 9000 file: "/tmp/hadoop-yarn/staging/cloud1/.staging/job_1405537305326_0003"
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 sending #31
14/07/16 14:52:13 DEBUG ipc.Client: IPC Client (1398865038) connection to cloud1/192.168.1.1:9000 from cloud1 got value #31
14/07/16 14:52:13 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 2ms
