Oozie Pig workflow on Hadoop 2.2, Pig 0.12.1, Oozie 4.1.0

ruyhziif, posted 2021-06-03 in Hadoop

I am trying to run a Pig Oozie workflow, but the workflow hangs in the RUNNING state. I checked the log file and found this.
Log from the NodeManager:

2015-02-25 17:50:06,322 [JobControl] INFO       org.apache.hadoop.yarn.client.api.impl.YarnClientImpl  - Submitted application application_1424690952568_0091 to ResourceManager at localhost/127.0.0.1:9003
2015-02-25 17:50:06,395 [JobControl] INFO    org.apache.hadoop.mapreduce.Job  - The url to track the job: http://localhost:8088/proxy/application_1424690952568_0091/
2015-02-25 17:50:06,396 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - HadoopJobId: job_1424690952568_0091
2015-02-25 17:50:06,396 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - Processing aliases a
2015-02-25 17:50:06,396 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - detailed locations: M: a[1,4] C:  R: 
2015-02-25 17:50:06,396 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - More information at: http://localhost:50030/jobdetails.jsp?jobid=job_1424690952568_0091
2015-02-25 17:50:06,456 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - 0% complete
Heart beat
Heart beat
Heart beat
Heart beat
Heart beat

And it just keeps going like this.
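For reference, a stuck launcher like this can be inspected from the command line while it is hanging; a minimal sketch, assuming the default Oozie server URL http://localhost:11000/oozie and using <wf-id> as a placeholder for the workflow ID printed at submission time:

    oozie job -oozie http://localhost:11000/oozie -info <wf-id>    # shows the pig-node action stuck in RUNNING
    oozie job -oozie http://localhost:11000/oozie -log <wf-id>     # same "Heart beat" output as above
    yarn application -list                                         # lists the Oozie launcher and the Pig MR job it submitted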

This is my workflow.xml
<workflow-app xmlns="uri:oozie:workflow:0.2" name="pig-example">
    <start to="pig-node"/>
    <action name="pig-node">
        <pig>
            <job-tracker>localhost:9003</job-tracker>
            <name-node>hdfs://localhost:9000</name-node>
            <prepare>
                <delete path="hdfs://localhost:9000/pigout"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.compress.map.output</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <script>script.pig</script>
            <param>input=${INPUT}</param>
            <param>output=${OUTPUT}</param>
        </pig>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Pig failed</message>
    </kill>
    <end name="end"/>
</workflow-app>
This is my job.properties:

nameNode=hdfs://localhost:9000
jobTracker=localhost:9003
queueName=default
oozie.libpath=/usr/lib/oozie-4.1.0/share/lib
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/pigoozie
INPUT=${nameNode}/a1
OUTPUT=${nameNode}/pigout
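For completeness, a typical submission command for this configuration would look like the following; a sketch, assuming the Oozie server runs on the default URL http://localhost:11000/oozie:

    oozie job -oozie http://localhost:11000/oozie -config job.properties -run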

I don't know what the problem is. I ran the same workflow on Hadoop 1 and it worked fine. Are there additional steps I need to follow to run Oozie on Hadoop 2? If so, please list them.


zbdgwd5y 1#

Set the following properties in your workflow.xml or job.properties. It looks like you are running the action with only a single map slot. Oozie needs at least 2 mappers: one for the M/R launcher and one for the actual action.
mapred.tasktracker.map.tasks.maximum=4 and mapred.tasktracker.reduce.tasks.maximum=4
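A minimal sketch of how the suggested properties could be placed in the pig action's <configuration> block in workflow.xml (they can equally be set as key=value lines in job.properties, as the answer says); the values are the ones suggested above:

                <property>
                    <name>mapred.tasktracker.map.tasks.maximum</name>
                    <value>4</value>
                </property>
                <property>
                    <name>mapred.tasktracker.reduce.tasks.maximum</name>
                    <value>4</value>
                </property>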
