NoClassDefFoundError: org/apache/hadoop/yarn/util/Clock

7gcisfzg · posted 2023-11-16 in Hadoop

I get the following errors when running the WordCount command:

    2023-10-06 15:55:35,005 INFO mapreduce.Job: Job job_1696606856991_0001 running in uber mode: false
    2023-10-06 15:55:35,006 INFO mapreduce.Job:  map 0% reduce 0%
    2023-10-06 15:55:35,027 INFO mapreduce.Job: Job job_1696606856991_0001 failed with state FAILED due to: Application application_1696606856991_0001 failed 2 times due to AM Container for appattempt_1696606856991_0001_000002 exited with exitCode: 1
    Failing this attempt. Diagnostics: [2023-10-06 15:55:34.304] Exception from container-launch.
    Container id: container_1696606856991_0001_02_000001
    Exit code: 1
    [2023-10-06 15:55:34.311] Container exited with a non-zero exit code 1. Error file: prelaunch.err.
    Last 4096 bytes of prelaunch.err :
    Last 4096 bytes of stderr :
    Error: Unable to initialize main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
    Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/util/Clock
    [2023-10-06 15:55:34.311] Container exited with a non-zero exit code 1. Error file: prelaunch.err.
    Last 4096 bytes of prelaunch.err :
    Last 4096 bytes of stderr :
    Error: Unable to initialize main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
    Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/util/Clock
    For more detailed output, check the application tracking page: http://baoanh-master:9004/cluster/app/application_1696606856991_0001 Then click on links to logs of each attempt.
    . Failing the application.
    2023-10-06 15:55:35,052 INFO mapreduce.Job: Counters: 0

    input file1.txt:
    Hello World

    input file2.txt:
    Hello Hadoop

    wordcount output:
    cat: `output1/part-r-00000': No such file or directory

I have configured the mapred and yarn files as follows:

yarn-site.xml

    <configuration>
      <!-- Site specific YARN configuration properties -->
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARNHOME,HADOOP_MAPRED_HOME</value>
      </property>
      <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>baoanh-master:9002</value>
      </property>
      <property>
        <name>yarn.resourcemanager.address</name>
        <value>baoanh-master:9003</value>
      </property>
      <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>baoanh-master:9004</value>
      </property>
      <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>baoanh-master:9005</value>
      </property>
      <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>baoanh-master:9006</value>
      </property>
    </configuration>
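One detail worth double-checking (an editorial observation, not part of the original post): the env-whitelist value above spells one variable HADOOP_YARNHOME, while the Hadoop single-node setup documentation spells it HADOOP_YARN_HOME. A fragment matching the documented spelling would look like this (whether this is the root cause here is an assumption):

```xml
<!-- Sketch: env-whitelist with the documented spelling HADOOP_YARN_HOME. -->
<property>
  <name>yarn.nodemanager.env-whitelist</name>
  <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
```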


mapred-site.xml

    <configuration>
      <property>
        <name>mapreduce.application.classpath</name>
        <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
      </property>
      <property>
        <name>mapreduce.jobtracker.address</name>
        <value>baoanh-master:9001</value>
      </property>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
      <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=/home/hadoopbaoanh/hadoop</value>
      </property>
      <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=/home/hadoopbaoanh/hadoop</value>
      </property>
      <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=/home/hadoopbaoanh/hadoop</value>
      </property>
    </configuration>
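Since MRAppMaster fails on a YARN class (org.apache.hadoop.yarn.util.Clock ships in the hadoop-yarn-* jars under share/hadoop/yarn), a commonly suggested workaround is to make the YARN jars explicitly visible on the MapReduce application classpath. A sketch, assuming the standard binary-distribution layout under the same install directory the post uses; whether this resolves this particular cluster is an assumption:

```xml
<!-- Sketch: extend mapreduce.application.classpath with the YARN jars. -->
<property>
  <name>mapreduce.application.classpath</name>
  <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*:$HADOOP_MAPRED_HOME/share/hadoop/yarn/*:$HADOOP_MAPRED_HOME/share/hadoop/yarn/lib/*</value>
</property>
```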


test.sh

    #!/bin/bash
    # test the hadoop cluster by running wordcount
    # create input files
    mkdir input
    echo "Hello World" > input/file1.txt
    echo "Hello Hadoop" > input/file2.txt
    # create input directory on HDFS
    hadoop fs -mkdir -p input1
    # put input files to HDFS
    hdfs dfs -put ./input/* input1
    # run wordcount
    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-3.3.6-sources.jar org.apache.hadoop.examples.WordCount input1 output1
    # print the input files
    echo -e "\ninput file1.txt:"
    hdfs dfs -cat input1/file1.txt
    echo -e "\ninput file2.txt:"
    hdfs dfs -cat input1/file2.txt
    # print the output of wordcount
    echo -e "\nwordcount output:"
    hdfs dfs -cat output1/part-r-00000
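As an aside, the script invokes the -sources jar; in a binary distribution the runnable examples jar is typically $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar. For reference, the output the job should produce on these two input files can be checked locally with plain shell tools (a sketch that mimics WordCount, no cluster needed):

```shell
#!/bin/sh
# Mimic WordCount locally on the two input lines from test.sh:
# split into words, count occurrences, print "word<TAB>count"
# in the same sorted form a part-r-00000 file would contain.
printf 'Hello World\nHello Hadoop\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c \
  | awk '{print $2 "\t" $1}'
# prints:
# Hadoop  1
# Hello   2
# World   1
```

Comparing this against hdfs dfs -cat output1/part-r-00000 confirms whether the job ran correctly.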


Question: error when running WordCount
I hope someone can help me! My English is not good, so I'm very sorry! Thank you very much!!

qnzebej0#

Maybe you can go into the ${HADOOP_HOME}/bin directory and run $ hdfs dfs -ls /input/ to see whether there is an output1 directory?
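Beyond listing directories, the full ApplicationMaster logs usually show the complete stack trace behind the truncated diagnostics. A sketch that derives the application id from a job id seen in the console output and then fetches the logs (the yarn logs line must be run on the cluster itself):

```shell
#!/bin/sh
# Sketch: turn the job id from the console output into an application id,
# then fetch the full AM logs with `yarn logs`.
log_line='Job job_1696606856991_0001 failed with state FAILED'
app_id=$(echo "$log_line" | grep -o 'job_[0-9_]*' | head -n1 | sed 's/^job_/application_/')
echo "$app_id"    # application_1696606856991_0001
# Run on the cluster (cannot run locally):
# yarn logs -applicationId "$app_id"
```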
