The output folder does not contain any output

jm81lzqq  posted on 2021-06-01 in Hadoop
17/11/29 19:32:31 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/11/29 19:32:31 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17/11/29 19:32:31 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
17/11/29 19:32:31 INFO mapred.LocalJobRunner: Waiting for map tasks
17/11/29 19:32:31 INFO mapred.LocalJobRunner: Starting task: attempt_local2072208822_0001_m_000000_0
17/11/29 19:32:31 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/11/29 19:32:31 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17/11/29 19:32:31 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
17/11/29 19:32:32 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@1106b4d9
17/11/29 19:32:32 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/input/file02.txt:0+27
17/11/29 19:32:32 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
17/11/29 19:32:32 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
17/11/29 19:32:32 INFO mapred.MapTask: soft limit at 83886080
17/11/29 19:32:32 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
17/11/29 19:32:32 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
17/11/29 19:32:32 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
17/11/29 19:32:32 INFO mapred.LocalJobRunner:
17/11/29 19:32:32 INFO mapred.MapTask: Starting flush of map output
17/11/29 19:32:32 INFO mapred.MapTask: Spilling map output
17/11/29 19:32:32 INFO mapred.MapTask: bufstart = 0; bufend = 44; bufvoid = 104857600
17/11/29 19:32:32 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214384(104857536); length = 13/6553600
17/11/29 19:32:32 INFO mapred.MapTask: Finished spill 0
17/11/29 19:32:32 INFO mapred.Task: Task:attempt_local2072208822_0001_m_000000_0 is done. And is in the process of committing
17/11/29 19:32:32 INFO mapred.LocalJobRunner: map
17/11/29 19:32:32 INFO mapred.Task: Task 'attempt_local2072208822_0001_m_000000_0' done.
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Finishing task: attempt_local2072208822_0001_m_000000_0
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Starting task: attempt_local2072208822_0001_m_000001_0
17/11/29 19:32:32 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/11/29 19:32:32 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17/11/29 19:32:32 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
17/11/29 19:32:32 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@16def9af
17/11/29 19:32:32 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/input/file01.txt:0+21
17/11/29 19:32:32 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
17/11/29 19:32:32 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
17/11/29 19:32:32 INFO mapred.MapTask: soft limit at 83886080
17/11/29 19:32:32 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
17/11/29 19:32:32 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
17/11/29 19:32:32 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
17/11/29 19:32:32 INFO mapred.LocalJobRunner:
17/11/29 19:32:32 INFO mapred.MapTask: Starting flush of map output
17/11/29 19:32:32 INFO mapred.MapTask: Spilling map output
17/11/29 19:32:32 INFO mapred.MapTask: bufstart = 0; bufend = 38; bufvoid = 104857600
17/11/29 19:32:32 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214384(104857536); length = 13/6553600
17/11/29 19:32:32 INFO mapred.MapTask: Finished spill 0
17/11/29 19:32:32 INFO mapred.Task: Task:attempt_local2072208822_0001_m_000001_0 is done. And is in the process of committing
17/11/29 19:32:32 INFO mapred.LocalJobRunner: map
17/11/29 19:32:32 INFO mapred.Task: Task 'attempt_local2072208822_0001_m_000001_0' done.
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Finishing task: attempt_local2072208822_0001_m_000001_0
17/11/29 19:32:32 INFO mapred.LocalJobRunner: map task executor complete.
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Waiting for reduce tasks
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Starting task: attempt_local2072208822_0001_r_000000_0
17/11/29 19:32:32 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/11/29 19:32:32 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17/11/29 19:32:32 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
17/11/29 19:32:32 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@42a3c42
17/11/29 19:32:32 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@afbba44
17/11/29 19:32:32 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=334338464, maxSingleShuffleLimit=83584616, mergeThreshold=220663392, ioSortFactor=10, memToMemMergeOutputsThreshold=10
17/11/29 19:32:32 INFO reduce.EventFetcher: attempt_local2072208822_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
17/11/29 19:32:32 INFO mapred.LocalJobRunner: reduce task executor complete.
17/11/29 19:32:32 INFO mapreduce.Job: Job job_local2072208822_0001 running in uber mode : false
17/11/29 19:32:32 INFO mapreduce.Job:  map 100% reduce 0%
17/11/29 19:32:32 WARN mapred.LocalJobRunner: job_local2072208822_0001
java.lang.Exception: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
        at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:489)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:556)
Caused by: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
        at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:346)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: D:/tmp/hadoop-Semab%20Ali/mapred/local/localRunner/Semab%20Ali/jobcache/job_local2072208822_0001/attempt_local2072208822_0001_m_000001_0/output/file.out.index
        at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:212)
        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:786)
        at org.apache.hadoop.io.SecureIOUtils.openFSDataInputStream(SecureIOUtils.java:155)
        at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:70)
        at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:62)
        at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:57)
        at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.copyMapOutput(LocalFetcher.java:125)
        at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.doCopy(LocalFetcher.java:103)
        at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.run(LocalFetcher.java:86)
17/11/29 19:32:33 INFO mapreduce.Job: Job job_local2072208822_0001 failed with state FAILED due to: NA
17/11/29 19:32:33 INFO mapreduce.Job: Counters: 23
        File System Counters
                FILE: Number of bytes read=6947
                FILE: Number of bytes written=658098
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=75
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=12
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Map-Reduce Framework
                Map input records=2
                Map output records=8
                Map output bytes=82
                Map output materialized bytes=85
                Input split bytes=216
                Combine input records=8
                Combine output records=6
                Spilled Records=6
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=599261184
        File Input Format Counters
                Bytes Read=48

I am a Windows user. This is my yarn-site.xml configuration. One more thing: before running this project I only start the DataNode and NameNode manually, rather than using the start-all.cmd command. Is there anything else I have to start, for example the ResourceManager or something similar? (See the startup sketch after the configuration below.)
The yarn-site.xml file:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
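
For reference, a minimal sketch of how the remaining daemons are typically started on Windows, assuming the standard scripts under %HADOOP_HOME%\sbin (start-dfs.cmd covers the NameNode and DataNode, start-yarn.cmd the ResourceManager and NodeManager):

cd %HADOOP_HOME%\sbin
:: HDFS daemons (NameNode + DataNode) - already started manually in this case
start-dfs.cmd
:: YARN daemons (ResourceManager + NodeManager)
start-yarn.cmd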
tpgth1q7  1#

This error occurs because your username contains a space, which is why %20 appears in the directory path.
To fix it, follow these steps:
Open a command prompt (Win key + R -> type "cmd" -> click "Run")
Type netplwiz
Select the account and click the "Properties" button
Enter a new name for the account (without any spaces)
Alongside this step, the following problem may appear: There are 0 datanode(s) running and no node(s) are excluded in this operation.
On Windows you can solve that as follows:
Delete the VERSION file in c:\hadoop-2.8.0\data\datanode\current
Restart Hadoop (a restart sketch follows the command below).
Delete the output directory:

hdfs dfs -rm -r /path/to/directory
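
A minimal restart sketch, assuming the standard scripts under %HADOOP_HOME%\sbin and that the hdfs command is on the PATH:

:: stop whatever is currently running
stop-all.cmd
:: bring HDFS and YARN back up
start-dfs.cmd
start-yarn.cmd
:: confirm the DataNode has registered with the NameNode
hdfs dfsadmin -report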

Enjoy.
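
Once the old output directory is gone, re-running the job and listing the new output is a quick way to confirm the fix; the jar name, driver class, and /user/output path below are placeholders, not taken from the original post:

hadoop jar my-job.jar MyDriver /user/input /user/output
:: the part-r-* files should now appear under the output directory
hdfs dfs -ls /user/output
hdfs dfs -cat /user/output/part-r-00000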
