Getting java.net.SocketTimeoutException when trying to run Hadoop MapReduce on a freshly installed Hortonworks sandbox

yr9zkbsy posted 2021-06-03 in Hadoop

I installed Hortonworks version 2.3_1 for Oracle VirtualBox, and I get a java.net.SocketTimeoutException whenever I try to run a MapReduce job. The only things I changed were the amount of memory and the number of cores available to the virtual machine.
Full output of the run:

WARNING: Use "yarn jar" to launch YARN applications.  
15/09/01 01:15:17 INFO impl.TimelineClientImpl: Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
15/09/01 01:15:20 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
15/09/01 01:16:19 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/09/01 01:18:09 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-601678901-10.0.2.15-1439987491556:blk_1073742292_1499
java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.0.2.15:52924 remote=/10.0.2.15:50010]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:749)
15/09/01 01:18:11 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/root/.staging/job_1441069639378_0001
Exception in thread "main" java.io.IOException: All datanodes DatanodeInfoWithStorage[10.0.2.15:50010,DS-56099a5f-3cb3-426e-8e1a-ff3b53df9bf2,DISK] are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1117)  
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:909)  
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:412)
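
(Side note on the output above: the JobResourceUploader WARN is unrelated to the timeout; it only means the driver class does not parse Hadoop's generic command-line options. A minimal sketch of what that warning asks for, with MyJob as a hypothetical driver class name:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical driver; implementing Tool lets ToolRunner parse generic
// options (-D, -conf, -fs, ...) into the Configuration before run() is called.
public class MyJob extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf(); // generic options already applied here
        // ... build and submit the MapReduce job using this conf ...
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyJob(), args));
    }
}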

The full name of the OVA file I am using: sandbox_hdp_2.3_1_virtualbox.ova
My host is a Windows 7 Home Premium machine with eight execution threads (four hyperthreaded cores, I believe).


a11xaf1n 1#

The problem was exactly what it said: a timeout error. The fix was to go into the Hadoop config folder and raise all of the timeouts as well as the number of retries (although, judging from the logs, the retries never came into play), and to stop unnecessary services on both the host and guest operating systems.
Thanks, one of the related questions pointed me to my config folder.
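
For reference, the knobs that answer refers to live in hdfs-site.xml in the Hadoop conf directory. A hypothetical excerpt, using the standard Hadoop 2.x property names with illustrative (not prescriptive) values; the 65000 ms in the log is consistent with the dfs.client.socket-timeout default of 60000 ms plus the client's 5-second per-datanode read extension for a one-node pipeline:

<!-- excerpt from hdfs-site.xml; values are illustrative only -->
<property>
  <name>dfs.client.socket-timeout</name>            <!-- default 60000 ms -->
  <value>300000</value>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>    <!-- default 480000 ms -->
  <value>600000</value>
</property>
<property>
  <name>dfs.client.block.write.retries</name>       <!-- default 3 -->
  <value>10</value>
</property>

The HDFS daemons only pick up these changes after a restart, so restart the sandbox's HDFS services after editing.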
