Cannot start the HBase master in pseudo-distributed mode on a single node

wlzqhblo · posted 2021-06-03 in Hadoop

I have just started learning Hadoop.
I am running Hadoop 2.0.0 from Cloudera CDH4, with HBase 0.94.12, on CentOS.
When I run the command service hbase master start, the server stays up for only a few seconds.
jps shows this:

29300 SecondaryNameNode
3354 NodeManager
3032 RunJar
2957 RunJar
2552 HRegionServer
4016 Jps
3432 ResourceManager
2312 HQuorumPeer
30345 QuorumPeerMain
29228 NameNode
2671 JobHistoryServer
29157 DataNode

Also, here is the exact error log:

2013-10-15 08:30:35,635 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/127.0.0.1,60000,1381818635067 from backup master directory
2013-10-15 08:30:35,643 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/backup-masters/127.0.0.1,60000,1381818635067 already deleted, and this is not a retry
2013-10-15 08:30:35,643 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=127.0.0.1,60000,1381818635067
2013-10-15 08:30:35,768 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.getTrimmed(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/String;
        at org.apache.hadoop.hdfs.DFSClient$Conf.<init>(DFSClient.java:306)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:421)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:410)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:128)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2142)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:80)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2176)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2158)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:302)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
        at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:667)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:98)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:536)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:396)
        at java.lang.Thread.run(Thread.java:724)
2013-10-15 08:30:35,790 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
2013-10-15 08:30:35,790 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
2013-10-15 08:30:35,791 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60000
2013-10-15 08:30:35,791 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: exiting
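The NoSuchMethodError in the log is worth a closer look: Configuration.getTrimmed(String, String) exists in Hadoop 2.x but not in the Hadoop 1.x line, so an error like this usually means HBase is loading a stale Hadoop 1.x-era jar instead of the CDH4 Hadoop 2 client jars. A quick way to see which Hadoop jars ship with the HBase install (the /usr/lib/hbase/lib path is an assumption for a CDH4 package install; adjust it to your layout):

```shell
# List the Hadoop jars in HBase's lib directory (path assumed for a
# CDH4 package install; override HBASE_LIB if HBase lives elsewhere).
HBASE_LIB=${HBASE_LIB:-/usr/lib/hbase/lib}
ls "$HBASE_LIB" 2>/dev/null | grep -i '^hadoop' \
  || echo "no hadoop jars found in $HBASE_LIB"
# A hadoop-core-1.x.x.jar showing up here would shadow the Hadoop 2
# client jars and produce exactly this kind of NoSuchMethodError.
```

If a Hadoop 1.x jar does show up, the usual fix is to replace it with (or symlink it to) the matching CDH4 Hadoop 2 jars so that HBase and the Hadoop runtime agree on one version.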

Also, my hbase-site.xml is configured as follows:

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:8020/hbase</value>
    </property>
    <property>
        <name>hbase.master</name>
        <value>master_hostname:60000</value>
        <description>The host and port that the HBase master runs at.</description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2182</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.maxClientCnxns</name>
        <value>300</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/var/lib/zookeeper</value>
        <description>Property from ZooKeeper's config zoo.cfg.
            The directory where the snapshot is stored.
        </description>
    </property>
</configuration>
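Two things in this file look suspect for a single-node setup: the master_hostname placeholder was never replaced (though 0.94-era HBase generally ignores hbase.master and locates the master through ZooKeeper), and clientPort 2182 differs from the ZooKeeper default of 2181 while the jps output shows two ZooKeeper processes (HQuorumPeer and QuorumPeerMain) running at once. A minimal pseudo-distributed hbase-site.xml could look like the sketch below; the hostname and ports are assumptions and must match whichever ZooKeeper instance actually runs:

```xml
<!-- Sketch of a minimal single-node pseudo-distributed hbase-site.xml.
     Assumes the HBase-managed ZooKeeper (HBASE_MANAGES_ZK=true) on the
     default port; stop any external QuorumPeerMain to avoid conflicts. -->
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:8020/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
</configuration>
```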

And this is my hbase-env.sh file:


# Set environment variables here.

# This script sets variables multiple times over the course of starting an hbase process,
# so try to keep things idempotent unless you want to take an even deeper look
# into the startup scripts (bin/hbase, etc.)

# The java implementation to use.  Java 1.6 required.
export JAVA_HOME=/usr/java/jdk1.7.0_40

# Extra Java CLASSPATH elements.  Optional.
export HBASE_CLASSPATH=/etc/hadoop/conf

# The maximum amount of heap to use, in MB. Default is 1000.
export HBASE_HEAPSIZE=4096

# Extra Java runtime options.
# Below are what we set by default.  May only work with SUN JVM.
# For more on why as well as other possible settings,
# see http://wiki.apache.org/hadoop/PerformanceTuning
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"

# Uncomment one of the below three options to enable java garbage collection logging for the server-side processes.

# This enables basic gc logging to the .out file.
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"

# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"

# Uncomment one of the below three options to enable java garbage collection logging for the client processes.

# This enables basic gc logging to the .out file.
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"

# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"

# Uncomment below if you intend to use the EXPERIMENTAL off heap cache.
# export HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize="
# Set hbase.offheapcache.percentage in hbase-site.xml to a nonzero value.

# Uncomment and adjust to enable JMX exporting
# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
#
# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"

# File naming hosts on which HRegionServers will run.  $HBASE_HOME/conf/regionservers by default.
# export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers

# File naming hosts on which backup HMaster will run.  $HBASE_HOME/conf/backup-masters by default.
# export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters

# Extra ssh options.  Empty by default.
# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"

# Where log files are stored.  $HBASE_HOME/logs by default.
# export HBASE_LOG_DIR=${HBASE_HOME}/logs

# Enable remote JDWP debugging of major HBase processes. Meant for Core Developers
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8073"

# A string representing this instance of hbase. $USER by default.
# export HBASE_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HBASE_NICENESS=10

# The directory where pid files are stored. /tmp by default.
# export HBASE_PID_DIR=/var/hadoop/pids

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1

# Tell HBase whether it should manage its own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=true

Any help is welcome :)
Thanks
