Pig script launched from shell hangs forever with "Heart beat"

xe55xuns  posted 2021-05-29 in Hadoop
Follow (0) | Answers (1) | Views (414)

I have installed all the Cloudera 5 components on a single machine: NameNode, DataNode, Hue, Pig, Oozie, YARN, HBase...
I start Pig from the shell with:

sudo -u hdfs pig

and then run the following in the Pig shell:

data = LOAD '/user/test/text.txt' as (text:CHARARRAY) ;

DUMP data;

The script runs fine.
But when I run the same script from the Query Editor / Pig Editor, it gets stuck. Here is the log:

2015-09-14 14:07:06,847 [uber-SubtaskRunner] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - More information at: http://HadoopTestEnv:50030/jobdetails.jsp?jobid=job_1442214247855_0002
2015-09-14 14:07:06,884 [uber-SubtaskRunner] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - 0% complete
2015-09-14 14:07:07,512 [communication thread] INFO  org.apache.hadoop.mapred.TaskAttemptListenerImpl  - Progress of TaskAttempt attempt_1442214247855_0001_m_000000_0 is : 1.0
Heart beat
2015-09-14 14:07:37,545 [communication thread] INFO  org.apache.hadoop.mapred.TaskAttemptListenerImpl  - Progress of TaskAttempt attempt_1442214247855_0001_m_000000_0 is : 1.0
Heart beat
2015-09-14 14:08:07,571 [communication thread] INFO  org.apache.hadoop.mapred.TaskAttemptListenerImpl  - Progress of TaskAttempt attempt_1442214247855_0001_m_000000_0 is : 1.0
Heart beat

I have used the yarn-utils script to help me configure yarn-site.xml and mapred-site.xml:

python yarn-utils.py -c 6 -m 16 -d 1 -k True
 Using cores=4 memory=16GB disks=1 hbase=True
 Profile: cores=6 memory=12288MB reserved=4GB usableMem=12GB disks=1
 Num Container=3
 Container Ram=4096MB
 Used Ram=12GB
 Unused Ram=4GB
 yarn.scheduler.minimum-allocation-mb=4096
 yarn.scheduler.maximum-allocation-mb=12288
 yarn.nodemanager.resource.memory-mb=12288
 mapreduce.map.memory.mb=2048
 mapreduce.map.java.opts=-Xmx1638m
 mapreduce.reduce.memory.mb=4096
 mapreduce.reduce.java.opts=-Xmx3276m
 yarn.app.mapreduce.am.resource.mb=2048
 yarn.app.mapreduce.am.command-opts=-Xmx1638m
 mapreduce.task.io.sort.mb=819
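As a sanity check on these numbers (my own arithmetic, not part of the yarn-utils output): YARN rounds every container request up to a multiple of yarn.scheduler.minimum-allocation-mb, so with a 4096 MB minimum each 2048 MB AM or map request still costs a full 4096 MB container, and the 12288 MB node can hold at most three containers:

```python
import math

# Values taken from the yarn-utils output above.
node_memory_mb = 12288    # yarn.nodemanager.resource.memory-mb
min_allocation_mb = 4096  # yarn.scheduler.minimum-allocation-mb

def rounded_request(requested_mb):
    """YARN rounds each container request up to a multiple of the minimum allocation."""
    return math.ceil(requested_mb / min_allocation_mb) * min_allocation_mb

# A 2048 MB AM request (yarn.app.mapreduce.am.resource.mb) and a
# 2048 MB map request (mapreduce.map.memory.mb) each consume 4096 MB.
am_container = rounded_request(2048)
map_container = rounded_request(2048)

max_containers = node_memory_mb // min_allocation_mb
print(max_containers)  # 3 containers fit on the node
print(am_container)    # 4096
```

So despite requesting 2048 MB tasks, every container on this node is effectively 4096 MB, which matters for the hang described below.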

The script is still hanging, with "Heart beat" repeating forever. Can anyone help me?
Here is my configuration: yarn-site.xml

<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>12288</value>
</property>

<property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>6</value>
</property>

<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>4096</value>
</property>

<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>12288</value>
</property>

<property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>2048</value>
</property>

<property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx1638m</value>
</property>

mapred-site.xml:

<property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1024</value>
</property>

<property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx768m</value>
</property>

<property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user</value>
</property>

<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1638m</value>
</property>

<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx3276m</value>
</property>

<property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
</property>

<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
</property>

<property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>819</value>
</property>

<property>
    <name>mapreduce.map.cpu.vcores</name>
    <value>2</value>
</property>

<property>
    <name>mapreduce.reduce.cpu.vcores</name>
    <value>2</value>
</property>
ehxuflar1#

A Pig application submitted through Oozie runs as an Oozie launcher job, which occupies one MR slot in addition to whatever the script itself does.
A hang like this is usually caused by a submission deadlock (e.g. "gotcha #5"), or by the cluster having only one available task slot.
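To make that deadlock concrete, here is a back-of-the-envelope sketch based on the settings posted in the question (my own reading of the log, not a definitive diagnosis): the Oozie launcher's AM (attempt ..._0001 in the log), the child Pig MR job's AM (job ..._0002), and one map task would each be rounded up to a full 4096 MB container, which exactly exhausts the 12288 MB node, so nothing further can ever be scheduled:

```python
# Back-of-the-envelope deadlock check using the question's YARN settings.
node_memory_mb = 12288   # yarn.nodemanager.resource.memory-mb
container_mb = 4096      # every request rounds up to the 4096 MB minimum allocation

# Containers plausibly held at once while the Pig-on-Oozie job runs:
oozie_launcher_am = container_mb  # the launcher job (attempt ..._0001 in the log)
pig_job_am = container_mb         # the child MR job (job ..._0002)
pig_map_task = container_mb       # the single map task reading text.txt

used = oozie_launcher_am + pig_job_am + pig_map_task
free = node_memory_mb - used
print(used)  # 12288: the whole node
print(free)  # 0 MB of headroom left
```

Lowering yarn.scheduler.minimum-allocation-mb (e.g. to 1024) would let the same 2048 MB requests coexist with room to spare, which is one common way out of this kind of single-node deadlock.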
