MapReduce queries are not executing on Hive?

exdqitrt · posted 2021-06-08 in Hbase

I'm new to Hadoop and Hive. Whenever I run a Hive query that has to launch a MapReduce job, such as select count(*) or avg(), or when I load data into HBase, it fails with the error below. I've searched Google but have not found a solution yet. Other simple statements such as SELECT, CREATE, and USE work fine.

hive> select count(*) from test_table;
Query ID = dev4_20171016095209_43c4e980-efbd-42d3-94d4-1a4b8de3d956
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1508127394848_0001, Tracking URL = http://dev4:8088/proxy/application_1508127394848_0001/
Kill Command = /usr/local/hadoop-2.8.1//bin/hadoop job  -kill job_1508127394848_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2017-10-16 09:52:38,820 Stage-1 map = 0%,  reduce = 0%
Ended Job = job_1508127394848_0001 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
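
The console output above only says that Stage-1 failed with return code 2; the real cause is usually in the YARN application logs rather than in the Hive console. As a minimal sketch (assuming the yarn CLI is on the PATH), the failed job can be looked up by its application ID; note that yarn logs only returns anything when log aggregation is enabled, which the application overview further down shows as DISABLED:

# Show the final status and diagnostics YARN recorded for this run
yarn application -status application_1508127394848_0001

# Fetch aggregated container logs (needs yarn.log-aggregation-enable=true;
# with aggregation disabled, check the NodeManager's local log directory instead)
yarn logs -applicationId application_1508127394848_0001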

hadoop-mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
            <name>mapred.job.map.memory.mb</name>
            <value>8192</value>
    </property>
    <property>
            <name>mapred.job.reduce.memory.mb</name>
            <value>4096</value>
    </property>
    <property>
            <name>mapreduce.map.memory.mb</name>
            <value>4096</value>
    </property>
    <property>
            <name>mapreduce.reduce.memory.mb</name>
            <value>4096</value>
    </property>
    <property>
            <name>mapreduce.map.java.opts</name>
            <value>-Xmx2048M</value>
    </property>
</configuration>
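
A side note on the file above: mapred.job.map.memory.mb and mapred.job.reduce.memory.mb are the old (deprecated) names of the same settings as mapreduce.map.memory.mb and mapreduce.reduce.memory.mb, so the map container size is effectively defined twice here (8192 and 4096). A quick, hedged way to see which values a Hive session actually resolves, using the fact that set <key>; with no value prints the effective setting:

# Print the memory settings as resolved inside a Hive session
hive -e "set mapreduce.map.memory.mb; set mapreduce.reduce.memory.mb; set mapreduce.map.java.opts;"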

hadoop-core-site.xml

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/dev4/local/hadoop-2.8.1/tmp/hadoop-${dev4}</value>
    </property>
</configuration>
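
One thing worth checking in core-site.xml: hadoop.tmp.dir is conventionally parameterised with ${user.name} (the stock default is /tmp/hadoop-${user.name}), whereas ${dev4} is not a variable that Hadoop or the shell defines. In the container-launch error further down, the same path shows up both literally as tmp/hadoop-${dev4}/... and, once the shell expands the undefined variable to nothing, as tmp/hadoop-/..., which is exactly the "No such file or directory" the containers die on. (As an aside, fs.default.name still works but is the deprecated name of fs.defaultFS.) A quick check on the node, using the path configured above:

# See what was actually created under the configured tmp dir; a directory literally
# named "hadoop-${dev4}" or "hadoop-" here would confirm the variable never resolved
ls -ld /home/dev4/local/hadoop-2.8.1/tmp/hadoop-*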

hadoop-mapreduce-env.sh

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-i386/

export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000

export HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA

export HIVE_HOME=/usr/local/hive

# export HADOOP_JOB_HISTORYSERVER_OPTS=

# export HADOOP_MAPRED_LOG_DIR="" # Where log files are stored.  $HADOOP_MAPRED_HOME/logs by default.

# export HADOOP_JHS_LOGGER=INFO,RFA # Hadoop JobSummary logger.

# export HADOOP_MAPRED_PID_DIR= # The pid files are stored. /tmp by default.

# export HADOOP_MAPRED_IDENT_STRING= #A string representing this instance of hadoop. $USER by default

# export HADOOP_MAPRED_NICENESS= #The scheduling priority for daemons. Defaults to 0.
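
Since the AM container exits with code 127 (the shell's "not found" code), it may also be worth confirming that the JAVA_HOME exported above actually exists on this node; a simple sanity check using the path from the env file:

# Verify the exported JVM path is valid on this machine
ls -ld /usr/lib/jvm/java-1.8.0-openjdk-i386/
/usr/lib/jvm/java-1.8.0-openjdk-i386/bin/java -version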

hadoop-yarn-site.xml

<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
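
The shuffle-handler settings above look standard. A quick way to confirm that the YARN daemons are actually running and that the NodeManager has registered with the ResourceManager (assuming the usual start scripts were used):

# Confirm the YARN daemons are up on this machine
jps | grep -E 'ResourceManager|NodeManager'

# List the NodeManagers registered with the ResourceManager
yarn node -list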

hive-hive-site.xml

<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
        <description>metadata is stored in a MySQL server</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>MySQL JDBC driver class</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hiveuser</value>
        <description>user name for connecting to mysql server</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>harileela</value>
        <description>password for connecting to mysql server</description>
    </property>
    <property>
        <name>hive.aux.jars.path</name>
        <value>file:///usr/local/hive/lib/hive-serde-1.2.2.jar</value>
        <description>The location of the plugin jars that contain implementations of user defined functions and serdes.</description>
    </property>
    <property>
        <name>hive.exec.reducers.bytes.per.reducer</name>
        <value>1000000</value>
     </property>

</configuration>
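
Unrelated to the failure itself: hive.exec.reducers.bytes.per.reducer is set to 1000000 (roughly 1 MB per reducer), far below the usual default, which would request one reducer per megabyte of input for larger queries once jobs do run. As the job output above suggests, this can also be overridden per session instead of in hive-site.xml; an illustrative example (the value is only an example):

# Override the reducer sizing for this session only
hive -e "set hive.exec.reducers.bytes.per.reducer=256000000; select count(*) from test_table;"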

Here is my application overview:

User:   dev4
Name:   select count(*) from test_table(Stage-1)
Application Type:   MAPREDUCE
Application Tags:   
Application Priority:   0 (Higher Integer value indicates higher priority)
YarnApplicationState:   FAILED
Queue:  default
FinalStatus Reported by AM:     FAILED
Started:    Mon Oct 16 13:10:37 +0530 2017
Elapsed:    8sec
Tracking URL:   History
Log Aggregation Status:     DISABLED
Diagnostics:    
Application application_1508139045948_0002 failed 2 times due to AM Container for appattempt_1508139045948_0002_000002 exited with exitCode: 127
Failing this attempt.Diagnostics: Exception from container-launch.
Container id: container_1508139045948_0002_02_000001
Exit code: 127
Exception message: /bin/bash: /home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/usercache/dev4/appcache/application_1508139045948_0002/container_1508139045948_0002_02_000001/default_container_executor_session.sh: No such file or directory
/home/dev4/local/hadoop-2.8.1/tmp/hadoop-${dev4}/nm-local-dir/usercache/dev4/appcache/application_1508139045948_0002/container_1508139045948_0002_02_000001/default_container_executor.sh: line 4: /home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/nmPrivate/application_1508139045948_0002/container_1508139045948_0002_02_000001/container_1508139045948_0002_02_000001.pid.exitcode.tmp: No such file or directory
/bin/mv: cannot stat '/home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/nmPrivate/application_1508139045948_0002/container_1508139045948_0002_02_000001/container_1508139045948_0002_02_000001.pid.exitcode.tmp': No such file or directory
Stack trace: ExitCodeException exitCode=127: /bin/bash: /home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/usercache/dev4/appcache/application_1508139045948_0002/container_1508139045948_0002_02_000001/default_container_executor_session.sh: No such file or directory
/home/dev4/local/hadoop-2.8.1/tmp/hadoop-${dev4}/nm-local-dir/usercache/dev4/appcache/application_1508139045948_0002/container_1508139045948_0002_02_000001/default_container_executor.sh: line 4: /home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/nmPrivate/application_1508139045948_0002/container_1508139045948_0002_02_000001/container_1508139045948_0002_02_000001.pid.exitcode.tmp: No such file or directory
/bin/mv: cannot stat '/home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/nmPrivate/application_1508139045948_0002/container_1508139045948_0002_02_000001/container_1508139045948_0002_02_000001.pid.exitcode.tmp': No such file or directory
at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
at org.apache.hadoop.util.Shell.run(Shell.java:869)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:236)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:305)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:84)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 127
For more detailed output, check the application tracking page: http://dev4:8088/cluster/app/application_1508139045948_0002 Then click on links to logs of each attempt.
. Failing the application.
Unmanaged Application:  false
Application Node Label expression:  <Not set>
AM container Node Label expression:     <DEFAULT_PARTITION>
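
Every missing path in the diagnostics sits under .../tmp/hadoop-/nm-local-dir/..., i.e. the hadoop- prefix with nothing after it, which again points at the undefined ${dev4} in hadoop.tmp.dir (the NodeManager's local dir defaults to ${hadoop.tmp.dir}/nm-local-dir). A sketch of how one might confirm this on the node, assuming the default $HADOOP_HOME/logs location for the NodeManager log (the file name pattern may differ):

# Which nm-local-dir actually exists on disk?
ls -ld /home/dev4/local/hadoop-2.8.1/tmp/hadoop-*/nm-local-dir

# The NodeManager log usually has more context for container-launch failures
# (default $HADOOP_HOME/logs location; adjust if HADOOP_LOG_DIR is set)
grep -A 5 container_1508139045948_0002_02_000001 /usr/local/hadoop-2.8.1/logs/yarn-*-nodemanager-*.log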

I have no idea what is going wrong here. Thank you.
