I want to set up a Hadoop cluster in pseudo-distributed mode. I managed to perform all the setup steps, including starting a Namenode, Datanode, Jobtracker and a Tasktracker on my machine.
Then I tried to run some exemplary programs and faced the java.net.ConnectException: Connection refused error. I stepped back to the very first steps of running some operations in standalone mode and faced the same problem.
I even triple-checked all the installation steps and have no idea how to fix it (I am new to Hadoop and a beginner Ubuntu user, so I kindly ask you to take that into account when providing any guide or tip).
This is the error output I keep receiving:
hduser@marta-komputer:/usr/local/hadoop$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar grep input output 'dfs[a-z.]+'
15/02/22 18:23:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/02/22 18:23:04 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
java.net.ConnectException: Call From marta-komputer/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy9.delete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:521)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.delete(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1929)
at org.apache.hadoop.hdfs.DistributedFileSystem$12.doCall(DistributedFileSystem.java:638)
at org.apache.hadoop.hdfs.DistributedFileSystem$12.doCall(DistributedFileSystem.java:634)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:634)
at org.apache.hadoop.examples.Grep.run(Grep.java:95)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.Grep.main(Grep.java:101)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
... 32 more
The etc/hadoop/hadoop-env.sh file:
# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
# The jsvc implementation to use. Jsvc is required to run secure datanodes
# that bind to privileged ports to provide authentication of data transfer
# protocol. Jsvc is not required if SASL is configured for authentication of
# data transfer protocol using non-privileged ports.
# export JSVC_HOME=${JSVC_HOME}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
# Extra Java CLASSPATH elements. Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
if [ "$HADOOP_CLASSPATH" ]; then
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
else
export HADOOP_CLASSPATH=$f
fi
done
# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=
# export HADOOP_NAMENODE_INIT_HEAPSIZE=""
# Extra Java runtime options. Empty by default.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
# HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"
# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol. This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}
# Where log files are stored. $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
# HDFS Mover specific parameters
###
# Specify the JVM options to be used when starting the HDFS Mover.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_MOVER_OPTS=""
###
# Advanced Users Only!
###
# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
# the user that will run the hadoop daemons. Otherwise there is the
# potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER
The Hadoop-related fragment of the .bashrc file:
# -- HADOOP ENVIRONMENT VARIABLES START -- #
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
# -- HADOOP ENVIRONMENT VARIABLES END -- #
The /usr/local/hadoop/etc/hadoop/core-site.xml file:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop_tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
The /usr/local/hadoop/etc/hadoop/hdfs-site.xml file:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_tmp/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_tmp/hdfs/datanode</value>
</property>
</configuration>
The /usr/local/hadoop/etc/hadoop/yarn-site.xml file:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
The /usr/local/hadoop/etc/hadoop/mapred-site.xml file:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Running hduser@marta-komputer:/usr/local/hadoop$ bin/hdfs namenode -format results in the following output (I shortened some parts of it, marking them with (...)):
hduser@marta-komputer:/usr/local/hadoop$ bin/hdfs namenode -format
15/02/22 18:50:47 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = marta-komputer/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.6.0
STARTUP_MSG: classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/htrace-core-3.0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli (...)2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.0.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG: java = 1.8.0_31
************************************************************/
15/02/22 18:50:47 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/02/22 18:50:47 INFO namenode.NameNode: createNameNode [-format]
15/02/22 18:50:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-0b65621a-eab3-47a4-bfd0-62b5596a940c
15/02/22 18:50:48 INFO namenode.FSNamesystem: No KeyProvider found.
15/02/22 18:50:48 INFO namenode.FSNamesystem: fsLock is fair:true
15/02/22 18:50:48 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/02/22 18:50:48 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/02/22 18:50:48 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/02/22 18:50:48 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Feb 22 18:50:48
15/02/22 18:50:48 INFO util.GSet: Computing capacity for map BlocksMap
15/02/22 18:50:48 INFO util.GSet: VM type = 64-bit
15/02/22 18:50:48 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
15/02/22 18:50:48 INFO util.GSet: capacity = 2^21 = 2097152 entries
15/02/22 18:50:48 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/02/22 18:50:48 INFO blockmanagement.BlockManager: defaultReplication = 1
15/02/22 18:50:48 INFO blockmanagement.BlockManager: maxReplication = 512
15/02/22 18:50:48 INFO blockmanagement.BlockManager: minReplication = 1
15/02/22 18:50:48 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
15/02/22 18:50:48 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
15/02/22 18:50:48 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/02/22 18:50:48 INFO blockmanagement.BlockManager: encryptDataTransfer = false
15/02/22 18:50:48 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
15/02/22 18:50:48 INFO namenode.FSNamesystem: fsOwner = hduser (auth:SIMPLE)
15/02/22 18:50:48 INFO namenode.FSNamesystem: supergroup = supergroup
15/02/22 18:50:48 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/02/22 18:50:48 INFO namenode.FSNamesystem: HA Enabled: false
15/02/22 18:50:48 INFO namenode.FSNamesystem: Append Enabled: true
15/02/22 18:50:48 INFO util.GSet: Computing capacity for map INodeMap
15/02/22 18:50:48 INFO util.GSet: VM type = 64-bit
15/02/22 18:50:48 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/02/22 18:50:48 INFO util.GSet: capacity = 2^20 = 1048576 entries
15/02/22 18:50:48 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/02/22 18:50:48 INFO util.GSet: Computing capacity for map cachedBlocks
15/02/22 18:50:48 INFO util.GSet: VM type = 64-bit
15/02/22 18:50:48 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/02/22 18:50:48 INFO util.GSet: capacity = 2^18 = 262144 entries
15/02/22 18:50:48 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/02/22 18:50:48 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/02/22 18:50:48 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/02/22 18:50:48 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/02/22 18:50:48 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/02/22 18:50:48 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/02/22 18:50:48 INFO util.GSet: VM type = 64-bit
15/02/22 18:50:48 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/02/22 18:50:48 INFO util.GSet: capacity = 2^15 = 32768 entries
15/02/22 18:50:48 INFO namenode.NNConf: ACLs enabled? false
15/02/22 18:50:48 INFO namenode.NNConf: XAttrs enabled? true
15/02/22 18:50:48 INFO namenode.NNConf: Maximum size of an xattr: 16384
Re-format filesystem in Storage Directory /usr/local/hadoop_tmp/hdfs/namenode ? (Y or N) Y
15/02/22 18:50:50 INFO namenode.FSImage: Allocated new BlockPoolId: BP-948369552-127.0.1.1-1424627450316
15/02/22 18:50:50 INFO common.Storage: Storage directory /usr/local/hadoop_tmp/hdfs/namenode has been successfully formatted.
15/02/22 18:50:50 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/02/22 18:50:50 INFO util.ExitUtil: Exiting with status 0
15/02/22 18:50:50 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at marta-komputer/127.0.1.1
************************************************************/
Starting dfs and yarn results in the following output:
hduser@marta-komputer:/usr/local/hadoop$ start-dfs.sh
15/02/22 18:53:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-marta-komputer.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-marta-komputer.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-marta-komputer.out
15/02/22 18:53:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hduser@marta-komputer:/usr/local/hadoop$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-marta-komputer.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-marta-komputer.out
Calling jps shortly afterwards:
hduser@marta-komputer:/usr/local/hadoop$ jps
11696 ResourceManager
11842 NodeManager
11171 NameNode
11523 SecondaryNameNode
12167 Jps
The netstat output:
hduser@marta-komputer:/usr/local/hadoop$ sudo netstat -lpten | grep java
tcp 0 0 0.0.0.0:8088 0.0.0.0:* LISTEN 1001 690283 11696/java
tcp 0 0 0.0.0.0:42745 0.0.0.0:* LISTEN 1001 684574 11842/java
tcp 0 0 0.0.0.0:13562 0.0.0.0:* LISTEN 1001 680955 11842/java
tcp 0 0 0.0.0.0:8030 0.0.0.0:* LISTEN 1001 684531 11696/java
tcp 0 0 0.0.0.0:8031 0.0.0.0:* LISTEN 1001 684524 11696/java
tcp 0 0 0.0.0.0:8032 0.0.0.0:* LISTEN 1001 680879 11696/java
tcp 0 0 0.0.0.0:8033 0.0.0.0:* LISTEN 1001 687392 11696/java
tcp 0 0 0.0.0.0:8040 0.0.0.0:* LISTEN 1001 680951 11842/java
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 1001 687242 11171/java
tcp 0 0 0.0.0.0:8042 0.0.0.0:* LISTEN 1001 680956 11842/java
tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN 1001 690252 11523/java
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 1001 687239 11171/java
The /etc/hosts file:
127.0.0.1 localhost
127.0.1.1 marta-komputer
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
====================================================
UPDATE 1.
I updated core-site.xml and now I have:
<property>
<name>fs.default.name</name>
<value>hdfs://marta-komputer:9000</value>
</property>
but I keep receiving the error; it now starts as:
15/03/01 00:59:34 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
java.net.ConnectException: Call From marta-komputer.home/192.168.1.8 to marta-komputer:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I also noticed that telnet localhost 9000 is not working:
hduser@marta-komputer:~$ telnet localhost 9000
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
16 Answers

qqrboqgw1#
In /etc/hosts:
add this line:
your-ip-address    your-host-name
example: 192.168.1.8    master
In /etc/hosts:
remove the line with 127.0.1.1 (it causes loopback)
In core-site, change localhost to your IP or your hostname.
Now, restart the cluster.
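As a sketch of the result (reusing the LAN IP 192.168.1.8 and hostname marta-komputer from the question; substitute your own values), /etc/hosts would end up like this:

```
127.0.0.1    localhost
192.168.1.8  marta-komputer   # real LAN IP; the 127.0.1.1 line is removed
```

core-site.xml would then point at hdfs://marta-komputer:9000, which is the same edit the question's UPDATE 1 attempts.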
a2mppw5e2#
For me these steps worked:

stop-all.sh
hadoop namenode -format
start-all.sh
iezvtpos3#
In my experience:

You may have a 64-bit version of the OS and a 32-bit Hadoop installation. Refer to this.
This issue is related to SSH public-key authorization. Please provide details about your SSH setup.
Please refer to this link to check the complete steps.
Also, if needed, provide information on whether it returns any results.
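For reference, the standard way to check and set up passwordless SSH for Hadoop looks like this (a sketch of common OpenSSH steps; the exact commands this answer pointed to were lost in formatting):

```
# SSH to localhost must work without a password prompt
ssh localhost

# if it prompts, create a key and authorize it
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```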
kognpnkq4#
Go to $SPARK_HOME/conf, then open the spark-env.sh file and add:

SPARK_MASTER_HOST=your-ip
SPARK_LOCAL_IP=127.0.0.1

Finally: Ctrl+S to save.
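Since spark-env.sh is sourced as a shell script, in practice the lines would look like this (a sketch; 192.168.1.8 is a placeholder address, and SPARK_MASTER_HOST assumes Spark 2.x naming):

```
# $SPARK_HOME/conf/spark-env.sh
export SPARK_MASTER_HOST=192.168.1.8   # the machine's reachable address
export SPARK_LOCAL_IP=127.0.0.1
```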
z9ju0rcb5#
I also faced the same problem on Hortonworks.
The problem was resolved when I restarted the Ambari agents and the Ambari server.
Source: the full article
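The restart itself is done with the standard Ambari command-line tools (a sketch; the answer did not include the commands):

```
# on each cluster node
sudo ambari-agent restart

# on the Ambari server host
sudo ambari-server restart
```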
41zrol4v6#
Check your firewall settings, and
replace localhost with your computer name in the configuration.
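On Ubuntu, the firewall check would typically be done with ufw (a sketch; the answer's own snippet was lost):

```
sudo ufw status verbose   # is a firewall active, and does it allow port 9000?
```

The localhost-to-hostname change is the same core-site.xml edit shown in the question's UPDATE 1.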
gijlo24d7#
For me, it was that I could not cluster my ZooKeeper.
My hadoop-hdfs-zkfc-[hostname].log showed:

2017-04-14 11:46:55,351 WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to monitor health of NameNode at host/192.168.1.55:9000: java.net.ConnectException: Call From host/192.168.1.55 to host:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Solution:
Before:
After:
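The before/after snippets did not survive the page formatting. In ConnectException cases like this one, the change is usually to the NameNode address in core-site.xml, for example (purely hypothetical, not the author's actual diff):

```
<!-- before (hypothetical): a hostname that resolved to a loopback address -->
<value>hdfs://host:9000</value>

<!-- after (hypothetical): the address the other daemons actually reach -->
<value>hdfs://192.168.1.55:9000</value>
```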
v2g6jxz68#
Hi, edit your conf/core-site.xml and change localhost to 0.0.0.0. Use the configuration below. That should work.
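The referenced configuration was not preserved; going by the instruction, it would presumably be (a sketch):

```
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- 0.0.0.0 makes the NameNode RPC port listen on all interfaces -->
    <value>hdfs://0.0.0.0:9000</value>
  </property>
</configuration>
```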
2w2cym1i9#
I got the same problem and found that the OpenSSH service was not running, which was causing the issue. After starting the SSH service, it worked.
To check whether the SSH service is running:
To start the service, if OpenSSH is installed:
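The commands themselves were lost; on Ubuntu they would typically be (a sketch):

```
# check whether the SSH service is running
sudo service ssh status

# start it if it is not
sudo service ssh start
```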
sbtkgmzw10#
Make sure HDFS is online. Start it with

$HADOOP_HOME/sbin/start-dfs.sh

Once you do that, your test with telnet localhost 9001 should work.

yi0zb3m411#
Stop it: stop-all.sh
Format the namenode: hadoop namenode -format
Start again: start-all.sh
xiozqbni12#
I had a similar problem to the OP's. As the terminal output suggested, I went to http://wiki.apache.org/hadoop/ConnectionRefused
I tried to change my /etc/hosts file as suggested there, i.e. remove 127.0.1.1, but as the OP pointed out, doing so would create another error.
So in the end, I kept it as it is. The following is my /etc/hosts:
In the end, I found that my namenode was not started correctly, i.e. when you type
sudo netstat -lpten | grep java
in the terminal, there will not be any JVM process running (listening) on port 9000. So I made two separate directories for the namenode and the datanode (if you have not done so already). You do not have to put them where I put mine; please adapt the paths to your own Hadoop directory, i.e.:
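The directory commands were lost in formatting; given the /home/hadoopuser/hadoop-2.6.2/ path mentioned below, they would have been something along these lines (hypothetical paths):

```
mkdir -p /home/hadoopuser/hadoop-2.6.2/hdfs/namenode
mkdir -p /home/hadoopuser/hadoop-2.6.2/hdfs/datanode
```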
I reconfigured my hdfs-site.xml accordingly.
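Again a sketch with the hypothetical directories above, since the answer's actual snippet was not preserved (the question's own hdfs-site.xml shows the same two properties):

```
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hadoopuser/hadoop-2.6.2/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hadoopuser/hadoop-2.6.2/hdfs/datanode</value>
</property>
```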
In the terminal, stop HDFS and YARN with the scripts
stop-dfs.sh
and stop-yarn.sh.
They are located in the sbin folder of your Hadoop directory; in my case that is /home/hadoopuser/hadoop-2.6.2/sbin/. Then start HDFS and YARN again with
start-dfs.sh
and start-yarn.sh.
After starting, type jps
to see whether the JVM processes are running correctly. It should show the following. Then try netstat again to see whether your namenode is listening on port 9000.
If you set up the namenode successfully, you should see the following in the terminal output:
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 1001 175157 14982/java
Then try the command hdfs dfs -mkdir /user/hadoopuser
If this command executes successfully, you can now list the directories in your HDFS user directory with hdfs dfs -ls /user
mdfafbf113#
Your problem is a very interesting one. Hadoop setup can be frustrating at times due to the complexity of the system and the many moving parts involved. I think the problem you are facing is definitely a firewall issue. My Hadoop cluster had a similar setup, with a firewall rule added using this command:
I could see the exact problem:
You can verify your firewall settings with this command:
Once you spot the suspect rule, you can delete it with a command like:
Now the connection should go through.
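The concrete commands were lost; with iptables, the inspect-and-delete cycle would look roughly like this (a sketch, not the author's exact rules):

```
# list INPUT rules with line numbers, looking for anything blocking port 9000
sudo iptables -L INPUT -n --line-numbers

# delete a suspect rule by its line number (e.g. rule 3)
sudo iptables -D INPUT 3
```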
bxjv4tth14#
hduser@marta-komputer:/usr/local/hadoop$ jps
11696 ResourceManager
11842 NodeManager
11171 NameNode
11523 SecondaryNameNode
12167 Jps
Where is your DataNode?
The Connection refused problem can also be caused by there being no active DataNode. Check the datanode logs for problems.
UPDATE:
For this error:

15/03/01 00:59:34 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
java.net.ConnectException: Call From marta-komputer.home/192.168.1.8 to marta-komputer:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
add these lines in yarn-site.xml:
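The suggested lines were not preserved. Given that the client is trying to reach the ResourceManager at the default 0.0.0.0:8032, they presumably pinned the ResourceManager address, e.g. (hypothetical value):

```
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>127.0.0.1</value>
</property>
```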
Restart the Hadoop processes.
lb3vh1jj15#
From the netstat output you can see the process is listening on address 127.0.0.1:

```
tcp        0      0 127.0.0.1:9000          0.0.0.0:*   ...
```

while the exception shows the call being made from a different address, 127.0.1.1 ("Call From marta-komputer/127.0.1.1 to localhost:9000 failed").
Further on that wiki page you can find:

Check that there isn't an entry for your hostname mapped to 127.0.0.1 or 127.0.1.1 in /etc/hosts (Ubuntu is notorious for this).
So the conclusion is to remove this line from your /etc/hosts:

```
127.0.1.1       marta-komputer
```