namenode and datanode both fail to start on the master node of a multi-node cluster

ua4mk5z4 · posted 2021-06-02 in Hadoop

I am able to start a single-node cluster successfully on each of two computers on my home network, but I cannot start them as a multi-node cluster. When I run the command start-dfs.sh I get this output:

hduser@eric-T5082:/usr/local/hadoop/sbin$ start-dfs.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-eric-T5082.out
slave: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-Study-Linux.out
master: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-eric-T5082.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-eric-T5082.out

When I run jps on the master, I get the following output:

hduser@eric-T5082:/usr/local/hadoop/sbin$ jps
The program 'jps' can be found in the following packages:
 * openjdk-7-jdk
 * openjdk-6-jdk
Try: sudo apt-get install <selected package>
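As far as I know, jps ships with the JDK rather than with Hadoop, so I assume this message only means $JAVA_HOME/bin is not on the PATH in this shell, not that the daemons failed. A quick check (the JDK path shown is an assumption from my setup and may differ):

echo $JAVA_HOME                    # should point at the JDK, e.g. /usr/lib/jvm/java-7-openjdk-amd64
ls $JAVA_HOME/bin/jps              # jps lives in the JDK's bin directory
export PATH=$PATH:$JAVA_HOME/bin   # make jps visible in this shell
ps aux | grep -i '[n]amenode\|[d]atanode'   # fallback: list the daemons without jps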

However, jps returns the correct result on the slave node:

hduser@Study-Linux:/usr/local/hadoop/etc/hadoop$ jps
6401 Jps
6300 DataNode

I suspect this may be due to (a) a port problem, i.e. the port already being in use, or (b) leftover temporary files interfering with the hdfs namenode -format command. But I have already tried to address (a) by trying different ports for the namenode, and (b) by deleting the temporary files before running hdfs, as sketched below.
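For (b), the cleanup before reformatting looked roughly like this (a sketch, not a verbatim transcript; the directory paths match my hdfs-site.xml below, and the datanode directory has to be cleared on every node, not just the master):

stop-dfs.sh                                  # stop all HDFS daemons first
rm -rf /home/hduser/mydata/hdfs/namenode/*   # clear namenode metadata
rm -rf /home/hduser/mydata/hdfs/datanode/*   # clear datanode storage (repeat on the slave)
hdfs namenode -format                        # reformat, which assigns a fresh clusterID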
Regarding (a), here is the output of netstat -l:

hduser@eric-T5082:/usr/local/hadoop/sbin$ netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 eric-T5082:domain       *:*                     LISTEN     
tcp        0      0 *:50070                 *:*                     LISTEN     
tcp        0      0 *:ssh                   *:*                     LISTEN     
tcp        0      0 localhost:ipp           *:*                     LISTEN     
tcp        0      0 *:50010                 *:*                     LISTEN     
tcp        0      0 *:50075                 *:*                     LISTEN     
tcp        0      0 *:50020                 *:*                     LISTEN     
tcp        0      0 localhost:52999         *:*                     LISTEN     
tcp        0      0 master:9000             *:*                     LISTEN     
tcp        0      0 *:50090                 *:*                     LISTEN     
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN     
udp        0      0 *:36200                 *:*                                
udp        0      0 *:19057                 *:*                                
udp        0      0 *:ipp                   *:*                                
udp        0      0 eric-T5082:domain       *:*                                
udp        0      0 *:bootpc                *:*                                
udp        0      0 *:mdns                  *:*                                
udp6       0      0 [::]:mdns               [::]:*                             
udp6       0      0 [::]:46391              [::]:*                             
udp6       0      0 [::]:51513              [::]:*
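To see which process actually owns each of those listening ports (rather than just that something is listening), the -p flag or lsof should work (a sketch):

sudo netstat -tlnp | grep -E ':9000|:50070'   # -p adds the owning PID/program name
sudo lsof -i :9000                            # alternative: the process bound to port 9000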

Here is core-site.xml:

<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
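As I understand it, fs.default.name is deprecated in Hadoop 2.x in favor of fs.defaultFS (and the namenode log below indeed reports fs.defaultFS), so the current equivalent of the property above would presumably be:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>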

Here is mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
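Since mapreduce.framework.name is yarn, my understanding is that the YARN daemons are started separately with start-yarn.sh and that yarn-site.xml needs at least the shuffle service declared; a typical minimal entry (a sketch, not my exact file) is:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
</property>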

Finally, hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hduser/mydata/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hduser/mydata/hdfs/datanode</value>
  </property>
</configuration>
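Incidentally, the "Starting secondary namenodes [0.0.0.0]" line in the start-dfs.sh output comes from the default dfs.namenode.secondary.http-address of 0.0.0.0:50090 (port 50090 is visible in the netstat output above); pinning it to the master would presumably look like:

<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>master:50090</value>
</property>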

hdfs namenode -format seems to work fine:

hduser@eric-T5082:/usr/local/hadoop/bin$ hdfs namenode -format
15/12/21 17:09:04 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = eric-T5082/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   classpath = [jar files omitted]
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG:   java = 1.7.0_91
************************************************************/

15/12/21 17:09:04 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/12/21 17:09:04 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-a8ee5a69-5938-434f-86de-57198465fb70
15/12/21 17:09:08 INFO namenode.FSNamesystem: No KeyProvider found.
15/12/21 17:09:08 INFO namenode.FSNamesystem: fsLock is fair:true
15/12/21 17:09:08 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/12/21 17:09:08 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/12/21 17:09:08 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/12/21 17:09:08 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Dec 21 17:09:08
15/12/21 17:09:08 INFO util.GSet: Computing capacity for map BlocksMap
15/12/21 17:09:08 INFO util.GSet: VM type       = 64-bit
15/12/21 17:09:08 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
15/12/21 17:09:08 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/12/21 17:09:08 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/12/21 17:09:08 INFO blockmanagement.BlockManager: defaultReplication         = 2
15/12/21 17:09:08 INFO blockmanagement.BlockManager: maxReplication             = 512
15/12/21 17:09:08 INFO blockmanagement.BlockManager: minReplication             = 1
15/12/21 17:09:08 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
15/12/21 17:09:08 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
15/12/21 17:09:08 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/12/21 17:09:08 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
15/12/21 17:09:08 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
15/12/21 17:09:08 INFO namenode.FSNamesystem: fsOwner             = hduser (auth:SIMPLE)
15/12/21 17:09:08 INFO namenode.FSNamesystem: supergroup          = supergroup
15/12/21 17:09:08 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/12/21 17:09:08 INFO namenode.FSNamesystem: HA Enabled: false
15/12/21 17:09:08 INFO namenode.FSNamesystem: Append Enabled: true
15/12/21 17:09:08 INFO util.GSet: Computing capacity for map INodeMap
15/12/21 17:09:08 INFO util.GSet: VM type       = 64-bit
15/12/21 17:09:08 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
15/12/21 17:09:08 INFO util.GSet: capacity      = 2^20 = 1048576 entries
15/12/21 17:09:08 INFO namenode.FSDirectory: ACLs enabled? false
15/12/21 17:09:08 INFO namenode.FSDirectory: XAttrs enabled? true
15/12/21 17:09:08 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
15/12/21 17:09:08 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/12/21 17:09:08 INFO util.GSet: Computing capacity for map cachedBlocks
15/12/21 17:09:08 INFO util.GSet: VM type       = 64-bit
15/12/21 17:09:08 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
15/12/21 17:09:08 INFO util.GSet: capacity      = 2^18 = 262144 entries
15/12/21 17:09:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/12/21 17:09:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/12/21 17:09:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
15/12/21 17:09:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
15/12/21 17:09:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
15/12/21 17:09:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
15/12/21 17:09:08 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/12/21 17:09:08 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/12/21 17:09:08 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/12/21 17:09:08 INFO util.GSet: VM type       = 64-bit
15/12/21 17:09:08 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
15/12/21 17:09:08 INFO util.GSet: capacity      = 2^15 = 32768 entries
15/12/21 17:09:09 INFO namenode.FSImage: Allocated new BlockPoolId: BP-923014467-127.0.1.1-1450746548917
15/12/21 17:09:09 INFO common.Storage: Storage directory /home/hduser/mydata/hdfs/namenode has been successfully formatted.
15/12/21 17:09:09 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/12/21 17:09:09 INFO util.ExitUtil: Exiting with status 0
15/12/21 17:09:09 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at eric-T5082/127.0.1.1
************************************************************/
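One thing I notice in this output (and in the log below) is host = eric-T5082/127.0.1.1. On Ubuntu the installer typically adds a 127.0.1.1 entry for the hostname to /etc/hosts, which is often reported to confuse Hadoop name resolution on multi-node clusters. A layout that avoids it (a sketch; the IPs are taken from the log below, and my actual file may differ) would be:

127.0.0.1      localhost
# 127.0.1.1    eric-T5082     <- commented out; Hadoop guides usually recommend removing this line
192.168.1.120  master  eric-T5082
192.168.1.109  slave   Study-Linux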

Finally, here is the namenode log file:

2015-12-21 17:50:09,702 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = eric-T5082/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   classpath = [jar files omitted]
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG:   java = 1.7.0_91
************************************************************/

2015-12-21 17:50:09,722 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-12-21 17:50:09,752 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2015-12-21 17:50:10,933 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-12-21 17:50:11,338 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-12-21 17:50:11,338 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-12-21 17:50:11,352 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://master:9000
2015-12-21 17:50:11,353 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use master:9000 to access this namenode/service.
2015-12-21 17:50:18,046 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2015-12-21 17:50:18,595 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-12-21 17:50:18,685 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2015-12-21 17:50:18,739 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-12-21 17:50:18,795 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-12-21 17:50:18,837 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-12-21 17:50:18,838 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-12-21 17:50:18,838 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-12-21 17:50:19,192 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-12-21 17:50:19,216 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-12-21 17:50:19,698 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-12-21 17:50:19,699 INFO org.mortbay.log: jetty-6.1.26
2015-12-21 17:50:21,961 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2015-12-21 17:50:27,119 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-12-21 17:50:27,119 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-12-21 17:50:27,277 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2015-12-21 17:50:27,277 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-12-21 17:50:27,385 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-12-21 17:50:27,385 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2015-12-21 17:50:27,388 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-12-21 17:50:27,391 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Dec 21 17:50:27
2015-12-21 17:50:27,395 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-12-21 17:50:27,396 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-21 17:50:27,399 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
2015-12-21 17:50:27,399 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2015-12-21 17:50:27,425 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-12-21 17:50:27,425 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 2
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2015-12-21 17:50:27,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = hduser (auth:SIMPLE)
2015-12-21 17:50:27,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2015-12-21 17:50:27,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2015-12-21 17:50:27,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-12-21 17:50:27,446 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-12-21 17:50:27,585 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2015-12-21 17:50:27,585 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-21 17:50:27,586 INFO org.apache.hadoop.util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
2015-12-21 17:50:27,586 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2015-12-21 17:50:27,596 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2015-12-21 17:50:27,596 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2015-12-21 17:50:27,596 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2015-12-21 17:50:27,597 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-12-21 17:50:27,624 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2015-12-21 17:50:27,624 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-21 17:50:27,625 INFO org.apache.hadoop.util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
2015-12-21 17:50:27,625 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
2015-12-21 17:50:27,630 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-12-21 17:50:27,630 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-12-21 17:50:27,630 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2015-12-21 17:50:27,663 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2015-12-21 17:50:27,663 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2015-12-21 17:50:27,663 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2015-12-21 17:50:27,860 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2015-12-21 17:50:27,860 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-12-21 17:50:27,890 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2015-12-21 17:50:27,890 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-21 17:50:27,891 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
2015-12-21 17:50:27,891 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2015-12-21 17:50:27,992 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hduser/mydata/hdfs/namenode/in_use.lock acquired by nodename 20222@eric-T5082
2015-12-21 17:50:28,411 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /home/hduser/mydata/hdfs/namenode/current
2015-12-21 17:50:28,891 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/hduser/mydata/hdfs/namenode/current/edits_inprogress_0000000000000000003 -> /home/hduser/mydata/hdfs/namenode/current/edits_0000000000000000003-0000000000000000003
2015-12-21 17:50:29,189 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2015-12-21 17:50:29,311 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2015-12-21 17:50:29,311 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 2 from /home/hduser/mydata/hdfs/namenode/current/fsimage_0000000000000000002
2015-12-21 17:50:29,312 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@1610d6ac expecting start txid #3
2015-12-21 17:50:29,312 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file /home/hduser/mydata/hdfs/namenode/current/edits_0000000000000000003-0000000000000000003
2015-12-21 17:50:29,319 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream '/home/hduser/mydata/hdfs/namenode/current/edits_0000000000000000003-0000000000000000003' to transaction ID 3
2015-12-21 17:50:29,333 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file /home/hduser/mydata/hdfs/namenode/current/edits_0000000000000000003-0000000000000000003 of size 1048576 edits # 1 loaded in 0 seconds
2015-12-21 17:50:29,360 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2015-12-21 17:50:29,362 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 4
2015-12-21 17:50:29,714 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2015-12-21 17:50:29,714 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 1808 msecs
2015-12-21 17:50:32,500 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to master:9000
2015-12-21 17:50:32,561 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-12-21 17:50:32,632 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9000
2015-12-21 17:50:32,867 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2015-12-21 17:50:32,940 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2015-12-21 17:50:32,941 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2015-12-21 17:50:32,941 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2015-12-21 17:50:32,948 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 5 secs
2015-12-21 17:50:32,949 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2015-12-21 17:50:32,949 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2015-12-21 17:50:32,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:33,020 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks            = 0
2015-12-21 17:50:33,020 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks          = 0
2015-12-21 17:50:33,020 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2015-12-21 17:50:33,020 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of  over-replicated blocks = 0
2015-12-21 17:50:33,021 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written    = 0
2015-12-21 17:50:33,021 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 61 msec
2015-12-21 17:50:33,239 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: master/192.168.1.120:9000
2015-12-21 17:50:33,239 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2015-12-21 17:50:33,230 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-12-21 17:50:33,234 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2015-12-21 17:50:33,281 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2015-12-21 17:50:35,393 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(192.168.1.109:50010, datanodeUuid=e33c9d91-19c9-4e7f-85a3-e6fe5105b2d3, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-a8ee5a69-5938-434f-86de-57198465fb70;nsid=2101567829;c=0) storage e33c9d91-19c9-4e7f-85a3-e6fe5105b2d3
2015-12-21 17:50:35,394 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:35,401 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.1.109:50010
2015-12-21 17:50:35,818 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:35,818 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-b4ddb959-74db-409c-b65f-b940d01b5ec3 for DN 192.168.1.109:50010
2015-12-21 17:50:36,101 INFO BlockStateChange: BLOCK* processReport: from storage DS-b4ddb959-74db-409c-b65f-b940d01b5ec3 node DatanodeRegistration(192.168.1.109:50010, datanodeUuid=e33c9d91-19c9-4e7f-85a3-e6fe5105b2d3, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-a8ee5a69-5938-434f-86de-57198465fb70;nsid=2101567829;c=0), blocks: 0, hasStaleStorage: false, processing time: 9 msecs
2015-12-21 17:50:38,406 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(192.168.1.120:50010, datanodeUuid=ab241604-21db-4c11-91c7-5271d42f9ffa, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-a8ee5a69-5938-434f-86de-57198465fb70;nsid=2101567829;c=0) storage ab241604-21db-4c11-91c7-5271d42f9ffa
2015-12-21 17:50:38,406 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:38,407 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.1.120:50010
2015-12-21 17:50:38,560 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:38,560 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-cd7c7489-dcac-4028-ac7a-a883ad1319da for DN 192.168.1.120:50010
2015-12-21 17:50:38,666 INFO BlockStateChange: BLOCK* processReport: from storage DS-cd7c7489-dcac-4028-ac7a-a883ad1319da node DatanodeRegistration(192.168.1.120:50010, datanodeUuid=ab241604-21db-4c11-91c7-5271d42f9ffa, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-a8ee5a69-5938-434f-86de-57198465fb70;nsid=2101567829;c=0), blocks: 0, hasStaleStorage: false, processing time: 1 msecs
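For what it's worth, the tail of this log says the NameNode RPC is up at master/192.168.1.120:9000 and that two datanodes (192.168.1.109 and 192.168.1.120) registered, so the HDFS daemons may in fact be running and only jps may be broken on the master. A way to cross-check the cluster state from the command line (standard commands, sketched):

hdfs dfsadmin -report        # should list two live datanodes if the cluster is healthy
curl http://master:50070/    # namenode web UI; the Datanodes page shows live nodes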
