I am trying to insert data into an external Hive table (Hive 1.2) from another table using an INSERT statement -
INSERT INTO perf_tech_security_detail_extn_fltr partition
(created_date)
SELECT seq_num,
action,
sde_timestamp,
instrmnt_id,
dm_lstupddt,
grnfthr_ind,
grnfthr_tl_dt,
grnfthr_frm_dt,
ftc_chge_rsn,
Substring (sde_timestamp, 0, 10)
FROM tech_security_detail_extn_fltr
WHERE Substring (sde_timestamp, 0, 10) = '2018-05-02';
but the Hive shell just hangs -
hive> SET hive.exec.dynamic.partition=true;
hive> set hive.exec.dynamic.partition.mode=nonstrict;
hive> set hive.enforce.bucketing=true;
hive> INSERT INTO PERF_TECH_SECURITY_DETAIL_EXTN_FLTR partition (created_date) select seq_num, action, sde_timestamp, instrmnt_id, dm_lstupddt, grnfthr_ind, grnfthr_tl_dt, grnfthr_frm_dt, ftc_chge_rsn, substring (sde_timestamp,0,10) from TECH_SECURITY_DETAIL_EXTN_FLTR where substring (sde_timestamp,0,10)='2018-05-02';
Query ID = tcs_20180503215950_585152fd-ecdc-4296-85fc-d464fef44e68
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 100
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
The Hive log is as follows -
2018-05-03 21:28:01,703 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - ...
2018-05-03 21:28:01,716 ERROR [main]: mr.ExecDriver (ExecDriver.java:execute(400)) - yarn
2018-05-03 21:28:01,758 INFO [main]: client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at /0.0.0.0:8032
2018-05-03 21:28:01,903 INFO [main]: fs.FSStatsPublisher (FSStatsPublisher.java:init(49)) - created : hdfs://localhost:9000/datanode/nifi_data/perf_tech_security_detail_extn_fltr/.hive-staging_hive_2018-05-03_21-27-59_433_5606951945441160381-1/-ext-10001
2018-05-03 21:28:01,960 INFO [main]: client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at /0.0.0.0:8032
2018-05-03 21:28:01,965 INFO [main]: exec.Utilities (Utilities.java:getBaseWork(389)) - PLAN PATH = hdfs://localhost:9000/tmp/hive/tcs/576b0aa3-059d-4fb2-bed8-c975781a5fce/hive_2018-05-03_21-27-59_433_5606951945441160381-1/-mr-10003/303a392c-2383-41ed-bc9d-78d37ae49f39/map.xml
2018-05-03 21:28:01,967 INFO [main]: exec.Utilities (Utilities.java:getBaseWork(389)) - PLAN PATH = hdfs://localhost:9000/tmp/hive/tcs/576b0aa3-059d-4fb2-bed8-c975781a5fce/hive_2018-05-03_21-27-59_433_5606951945441160381-1/-mr-10003/303a392c-2383-41ed-bc9d-78d37ae49f39/reduce.xml
2018-05-03 21:28:22,009 INFO [main]: ipc.Client (Client.java:handleConnectionTimeout(832)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); maxRetries=45
2018-05-03 21:28:42,027 INFO [main]: ipc.Client (Client.java:handleConnectionTimeout(832)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); maxRetries=45
...
I have also tried a plain insert into a non-partitioned table, but even that does not work -
INSERT INTO emp values (1 ,'ROB')
3 Answers
mbjcgjjk1#
In a cluster environment, the property yarn.resourcemanager.hostname is the key to avoiding this problem. It worked well for me.
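For reference, a minimal yarn-site.xml sketch of that property (rm-host.example.com is a placeholder hostname, not taken from the answer):
<!-- yarn-site.xml: point clients at the actual ResourceManager host
     instead of the 0.0.0.0 default seen in the log above -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm-host.example.com</value> <!-- placeholder: your RM host -->
</property>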
Use these commands to monitor YARN:
yarn application -list
and yarn node -list
6mzjoqzu2#
Resolved.
MapReduce was not running because of a wrong framework name, so I edited the property mapreduce.framework.name in mapred-site.xml.
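A minimal sketch of what that mapred-site.xml entry usually looks like (assumed here; the answer does not show the file):
<!-- mapred-site.xml: run MapReduce jobs on YARN -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>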
soat7uwm3#
I am not sure why you did not write TABLE before the table name, as below:
Write the command properly to make it work.
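Presumably the intended form is the INSERT INTO TABLE syntax; a sketch applied to the two statements from the question (these exact rewrites are mine, not part of the original answer):
-- TABLE keyword added; columns otherwise unchanged from the question
INSERT INTO TABLE emp VALUES (1, 'ROB');

INSERT INTO TABLE perf_tech_security_detail_extn_fltr PARTITION (created_date)
SELECT seq_num,
       action,
       sde_timestamp,
       instrmnt_id,
       dm_lstupddt,
       grnfthr_ind,
       grnfthr_tl_dt,
       grnfthr_frm_dt,
       ftc_chge_rsn,
       substring(sde_timestamp, 0, 10)
FROM   tech_security_detail_extn_fltr
WHERE  substring(sde_timestamp, 0, 10) = '2018-05-02';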