Error when running a Hadoop 2 MapReduce job in standalone mode from Eclipse?

gxwragnw, posted 2021-06-03 in Hadoop

I get the following error when running an MR job from Eclipse:

```
2014-07-10 14:07:30 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
job done
2014-07-10 14:07:30 INFO  JvmMetrics:76 - Initializing JVM Metrics with processName=JobTracker, sessionId=
2014-07-10 14:07:30 WARN  JobSubmitter:149 - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2014-07-10 14:07:30 INFO  JobSubmitter:439 - Cleaning up the staging area file:/Users/name/tmp/mapred/staging/sridhar519992773/.staging/job_local519992773_0001
2014-07-10 14:07:30 ERROR UserGroupInformation:1494 - PriviledgedActionException as:sridhar (auth:SIMPLE) cause:org.apache.hadoop.util.Shell$ExitCodeException: chmod: /Users/name/tmp/mapred/staging/sridhar519992773/.staging/job_local519992773_0001: No such file or directory

2014-07-10 14:07:30 ERROR App:43 - Error running MapReduce Job
org.apache.hadoop.util.Shell$ExitCodeException: chmod: /Users/name/tmp/mapred/staging/sridhar519992773/.staging/job_local519992773_0001: No such file or directory

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
    at org.apache.hadoop.util.Shell.run(Shell.java:379)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:678)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:639)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:468)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:596)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:178)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:394)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
    at hadoopstandalone.standalone.App.doMapReduce(App.java:40)
    at hadoopstandalone.standalone.App.main(App.java:27)
```
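The `JobSubmitter` warning in the log suggests implementing the `Tool` interface and launching the job through `ToolRunner`. Below is a minimal sketch of such a driver; the class name `MrDriver` and the identity `Mapper`/`Reducer` are placeholders, not the code from the original `App`.

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MrDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() carries any -D / -conf generic options parsed by ToolRunner.
        Job job = Job.getInstance(getConf(), "standalone mr job");
        job.setJarByClass(MrDriver.class);
        job.setMapperClass(Mapper.class);    // identity mapper; replace with your own
        job.setReducerClass(Reducer.class);  // identity reducer; replace with your own
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner parses generic options before handing control to run().
        System.exit(ToolRunner.run(new Configuration(), new MrDriver(), args));
    }
}
```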

Here is my `core-site.xml`:

```
<configuration>
<property>
  <name>fs.default.name</name>
  <value>file:///</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/Users/sridhar/Desktop/tmp</value>
</property>
</configuration>
```
Here is my `mapred-site.xml`:

```
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>local</value>
</property>
</configuration>
```
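For reference, when the job is launched directly from Eclipse these same properties can also be set programmatically on the driver's `Configuration`, in case the XML files are not on the run classpath. This is only a sketch; the tmp path is copied from the `core-site.xml` above and is assumed to exist and be writable by the current user.

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class LocalJobConfig {
    // Builds a Job configured for local (standalone) execution, mirroring the
    // core-site.xml and mapred-site.xml shown above.
    public static Job newLocalJob() throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "file:///");                  // local filesystem
        conf.set("mapred.job.tracker", "local");                  // local job runner
        conf.set("hadoop.tmp.dir", "/Users/sridhar/Desktop/tmp"); // assumed to exist and be writable
        return Job.getInstance(conf, "standalone job");
    }
}
```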

Answer 1 (wz1wpwve):

Your `core-site.xml` should look something like this:

```
<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/myname/hdfs/temp</value>
</property>
</configuration>
```

Your `hdfs-site.xml` should look like this:

```
<configuration>
<property>
  <name>dfs.name.dir</name>
  <value>/home/myname/hdfs/name</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/home/myname/hdfs/data</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
</configuration>
```
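As a quick sanity check (a sketch, assuming the NameNode configured above has been formatted and is running on localhost:9000), the following client code should print the HDFS root listing if the configuration files are being picked up from the classpath:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSanityCheck {
    public static void main(String[] args) throws Exception {
        // Loads core-site.xml / hdfs-site.xml from the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Connected to: " + fs.getUri());
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}
```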

Let me know if it works.
