hadoop copyFromLocal DataStreamer exception

2wnc66cl · published 2021-06-03 in Hadoop

I am using Hadoop 0.20.203 on a cluster of nodes numbered 0~24. cluster0 serves as the NameNode, and all the other nodes currently serve as DataNodes.
I am trying to run the WordCount example, but when I try to -copyFromLocal a file into DFS, the following messages appear:

aqjune@cluster0:~>> $HADOOP_HOME/bin/hadoop dfs -copyFromLocal pg132.txt /user/aqjune/input/pg132.txt
14/06/17 19:54:01 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection refused
14/06/17 19:54:01 INFO hdfs.DFSClient: Abandoning block blk_-7530678618792869516_1003
14/06/17 19:54:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection refused
14/06/17 19:54:07 INFO hdfs.DFSClient: Abandoning block blk_-7462751912508683911_1003
14/06/17 19:54:13 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection refused
14/06/17 19:54:13 INFO hdfs.DFSClient: Abandoning block blk_252255837066920011_1003
14/06/17 19:54:19 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection refused
14/06/17 19:54:19 INFO hdfs.DFSClient: Abandoning block blk_4030900909035905642_1003
14/06/17 19:54:25 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3002)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2255)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2446)

14/06/17 19:54:25 WARN hdfs.DFSClient: Error Recovery for block blk_4030900909035905642_1003 bad datanode[0] nodes == null
14/06/17 19:54:25 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/aqjune/input/pg132.txt" - Aborting...
copyFromLocal: Connection refused
14/06/17 19:54:25 ERROR hdfs.DFSClient: Exception closing file /user/aqjune/input/pg132.txt : java.net.ConnectException: Connection refused
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:406)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:3028)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2983)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2255)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2446)

After that, only an empty file is created:

aqjune@grey0:~/hadoop>> bin/hadoop dfs -lsr /
drwxr-xr-x   - aqjune supergroup          0 2014-06-17 19:45 /user
drwxr-xr-x   - aqjune supergroup          0 2014-06-17 19:45 /user/aqjune
drwxr-xr-x   - aqjune supergroup          0 2014-06-17 19:54 /user/aqjune/input
-rw-r--r--   1 aqjune supergroup          0 2014-06-17 19:54 /user/aqjune/input/pg132.txt

I can't figure out the cause of this problem. Could I get some hints?
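For what it's worth, the repeated `java.net.ConnectException: Connection refused` from `createBlockOutputStream` means the client's TCP connection to a datanode's data-transfer port (50010 by default in Hadoop 0.20.x) is being rejected, i.e. nothing is listening there. A quick probe like the following sketch (the `port_open` helper and the `cluster1` host name are illustrative, not part of Hadoop) can show whether a given datanode port is reachable from the client machine:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP listener accepts a connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError, timeouts, DNS failures
        return False

# 50010 is the default DataNode data-transfer port (dfs.datanode.address)
# in Hadoop 0.20.x; 'cluster1' stands in for any datanode host here.
print(port_open("cluster1", 50010))
```

If this prints `False` for every datanode, the datanode processes are either not running or are listening on a different address than the one the namenode is handing out.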

No answers yet.
