I'm running an s3distcp job on AWS EMR with Hadoop 2.2.0. The job keeps failing after 3 attempts because the reducer tasks fail. I have also tried setting both of these properties:
mapred.max.reduce.failures.percent
mapreduce.reduce.failures.maxpercent
to 50, both in the Hadoop job action configuration and in mapred-site.xml, but the job still fails.
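For reference, a sketch of what that setting would look like in mapred-site.xml. The value of 50 is taken from the post; `mapreduce.reduce.failures.maxpercent` is the Hadoop 2.x property name and `mapred.max.reduce.failures.percent` is its deprecated Hadoop 1.x alias, so setting both covers older tooling:

```xml
<!-- mapred-site.xml: tolerate up to 50% failed reduce tasks
     before failing the whole job (sketch, values from the post) -->
<property>
  <name>mapreduce.reduce.failures.maxpercent</name>
  <value>50</value>
</property>
<!-- deprecated Hadoop 1.x alias of the same setting -->
<property>
  <name>mapred.max.reduce.failures.percent</name>
  <value>50</value>
</property>
```

Note that this only makes the job tolerate reducer failures; it does not address whatever is making the reducers fail in the first place.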
Here is the log:
```
2015-10-02 14:42:16,001 INFO [main] org.apache.hadoop.mapreduce.Job: Task Id : attempt_1443541526464_0115_r_000010_2, Status : FAILED
2015-10-02 14:42:17,005 INFO [main] org.apache.hadoop.mapreduce.Job: map 100% reduce 93%
2015-10-02 14:42:29,048 INFO [main] org.apache.hadoop.mapreduce.Job: map 100% reduce 98%
2015-10-02 15:04:20,369 INFO [main] org.apache.hadoop.mapreduce.Job: map 100% reduce 100%
2015-10-02 15:04:21,378 INFO [main] org.apache.hadoop.mapreduce.Job: Job job_1443541526464_0115 failed with state FAILED due to: Task failed task_1443541526464_0115_r_000010
Job failed as tasks failed. failedMaps:0 failedReduces:1
2015-10-02 15:04:21,451 INFO [main] org.apache.hadoop.mapreduce.Job: Counters: 45
	File System Counters
		FILE: Number of bytes read=280
		FILE: Number of bytes written=10512783
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=32185011
		HDFS: Number of bytes written=0
		HDFS: Number of read operations=170
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=28
	Job Counters
		Failed reduce tasks=4
		Launched map tasks=32
		Launched reduce tasks=18
		Data-local map tasks=15
		Rack-local map tasks=17
		Total time spent by all maps in occupied slots (ms)=2652786
		Total time spent by all reduces in occupied slots (ms)=65506584
	Map-Reduce Framework
		Map input records=156810
		Map output records=156810
		Map output bytes=30892192
		Map output materialized bytes=6583455
		Input split bytes=3904
		Combine input records=0
		Combine output records=0
		Reduce input groups=0
		Reduce shuffle bytes=7168
		Reduce input records=0
		Reduce output records=0
		Spilled Records=156810
		Failed Shuffles=0
		Merged Map outputs=448
		GC time elapsed (ms)=2524
		CPU time spent (ms)=108250
		Physical memory (bytes) snapshot=14838984704
		Virtual memory (bytes) snapshot=106769969152
		Total committed heap usage (bytes)=18048614400
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=32181107
	File Output Format Counters
		Bytes Written=0
2015-10-02 15:04:21,451 INFO [main] com.amazon.external.elasticmapreduce.s3distcp.S3DistCp: Try to recursively delete hdfs:/tmp/218ad028-8035-4f97-b113-3cfea04502fc/tempspace
2015-10-02 15:04:21,515 INFO [main] org.apache.hadoop.io.compress.zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
2015-10-02 15:04:21,516 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor [.deflate]
2015-10-02 15:04:21,554 INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1443541526464_0114_m_000000_0 is done. And is in the process of committing
2015-10-02 15:04:21,570 INFO [main] org.apache.hadoop.mapred.Task: Task attempt_1443541526464_0114_m_000000_0 is allowed to commit now
2015-10-02 15:04:21,584 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of task 'attempt_1443541526464_0114_m_000000_0' to hdfs://rnd2-emr-head.ec2.int$
2015-10-02 15:04:21,598 INFO [main] org.apache.hadoop.mapred.Task: Task 'attempt_1443541526464_0114_m_000000_0' done.
2015-10-02 15:04:21,616 INFO [Thread-6] amazon.emr.metrics.MetricsSaver: Inside MetricsSaver Shutdown Hook
```
Any suggestions would be appreciated.
1 Answer
Could you try cleaning up the hdfs://tmp directory? Just back up that directory first, since some other applications also use the tmp directory, so you can restore it if you run into any problems.
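A minimal sketch of the backup-then-clean approach described above, using the standard `hdfs dfs` shell. The backup path `/tmp_backup` is an arbitrary choice, and the HDFS temp directory is assumed to be the usual `/tmp`:

```shell
# Back up the HDFS /tmp directory before touching it
# (/tmp_backup is an illustrative destination)
hdfs dfs -cp /tmp /tmp_backup

# Remove the contents of /tmp (leftover s3distcp tempspace, etc.)
hdfs dfs -rm -r -skipTrash '/tmp/*'

# If another application later misses its tmp data,
# restore it from the backup:
# hdfs dfs -cp '/tmp_backup/*' /tmp/
```

These commands must be run on the cluster (e.g. the EMR master node) as a user with permissions on /tmp.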