How to achieve the expected output using Hive

zdwk9cvp · asked on 2021-06-03 · in Hadoop

1. Table 1 and Table 2 are related: the timestamped entries for each px-coo combination from Table 1 appear in Table 2. I need the last (most recent) entry for each px-coo combination. How can I achieve this with Hive? The expected output is shown below for reference.

Table 1

px1    coo1
px1    coo2
px1    coo3
px2    coo2
px2    coo4
px3    coo3
px4    coo4

Table 2

id1     2014-01-01 21:23:23,273     px1    coo1
id2     2014-01-01 22:01:22,377     px1    coo1
id3     2014-01-01 22:25:06,196     px1    coo1
id4     2014-01-01 22:51:39,487     px1    coo1
id5     2014-01-01 02:05:57,875     px1    coo2
id6     2014-01-01 02:09:42,675     px1    coo2
id7     2014-01-01 23:19:42,059     px1    coo3
id8     2014-01-01 23:34:51,782     px1    coo3
id9     2014-01-01 06:13:05,531     px2    coo2
id10    2014-01-01 06:27:36,676     px2    coo2
id11    2014-01-01 06:59:43,999     px2    coo2
id12    2014-01-01 09:21:57,325     px3    coo3
id13    2014-01-01 17:19:06,956     px4    coo4
id14    2014-01-01 17:27:05,128     px4    coo4

The expected output should be

id4     2014-01-01 22:51:39,487     px1    coo1
id6     2014-01-01 02:09:42,675     px1    coo2
id8     2014-01-01 23:34:51,782     px1    coo3
id11    2014-01-01 06:59:43,999     px2    coo2
id12    2014-01-01 09:21:57,325     px3    coo3
id14    2014-01-01 17:27:05,128     px4    coo4

wmomyfyw 1#

I am assuming that Table 2 alone is enough, i.e. that by working on Table 2 itself you can get the result, since pix_id and coo_id will match correctly within Table 2. Pardon me if my assumption is wrong.

hive (sflow)> desc table2;
OK
col_name    data_type   comment
id  string  from deserializer
time_stamp  string  from deserializer
pix_id  string  from deserializer
coo_id  string  from deserializer
Time taken: 0.277 seconds


SELECT t2.id, t2.time_stamp, t2.pix_id, t2.coo_id
FROM table2 t2
JOIN ( SELECT pix_id, coo_id, MAX(UNIX_TIMESTAMP(time_stamp)) AS max_epoch
       FROM table2
       GROUP BY pix_id, coo_id ) temp
  ON t2.pix_id = temp.pix_id AND t2.coo_id = temp.coo_id
WHERE UNIX_TIMESTAMP(t2.time_stamp) = temp.max_epoch;

PS: copying the full log (note that I am running pseudo-distributed Hadoop with Hive 0.9 and 2 GB RAM):

hive (sflow)> from table2 t2 join (select pix_id,coo_id, Max(UNIX_TIMESTAMP(time_stamp)) as max_epoch from table2 group by pix_id,coo_id) temp
            > select t2.id,t2.time_stamp,t2.pix_id,t2.coo_id where t2.pix_id=temp.pix_id and t2.coo_id=temp.coo_id and UNIX_TIMESTAMP(t2.time_stamp) = max_epoch ;

Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
Total MapReduce CPU Time Spent: 24 seconds 0 msec
OK
id  time_stamp  pix_id  coo_id
id4 2014-01-01 22:51:39,487 px1 coo1
id6 2014-01-01 02:09:42,675 px1 coo2
id8 2014-01-01 23:34:51,782 px1 coo3
id11    2014-01-01 06:59:43,999 px2 coo2
id12    2014-01-01 09:21:57,325 px3 coo3
id14    2014-01-01 17:27:05,128 px4 coo4
Time taken: 145.17 seconds

hive (sflow)> 
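For reference, on Hive 0.11 or later the same greatest-row-per-group result can be written with a window function, which avoids the self-join entirely. A sketch, assuming the same table2 schema as above:

```sql
-- Rank the rows within each (pix_id, coo_id) group by timestamp,
-- newest first, then keep only the top-ranked row per group.
SELECT id, time_stamp, pix_id, coo_id
FROM (
  SELECT id, time_stamp, pix_id, coo_id,
         row_number() OVER (PARTITION BY pix_id, coo_id
                            ORDER BY unix_timestamp(time_stamp) DESC) AS rn
  FROM table2
) ranked
WHERE rn = 1;
```

This scans table2 only once and needs no intermediate aggregate table, at the cost of requiring a Hive version with windowing support.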

ezykj2lf 2#

You can use the collect_max UDF from Brickhouse (http://github.com/klout/brickhouse) to generate this with only a single job step.

select array_index( map_keys( max_map ), 0 ) as id,
    from_unixtime( array_index( map_values( max_map ), 0 ) ) as time_stamp,
    pix_id,
    coo_id
from (
   select pix_id, coo_id,
       collect_max( id, unix_timestamp(time_stamp) ) as max_map
   from table2
   group by pix_id, coo_id ) cm ;

For small data sets this does not matter much, but for very large data sets it lets you solve the problem with a single pass over the data.
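A similar single-pass result can also be had without any external UDF by packing each row into a struct and taking max() per group, assuming a Hive version whose max() orders structs field by field (the first struct field, the epoch, drives the comparison; modern Hive supports this). A sketch:

```sql
-- Pack (epoch, id, original timestamp) into a struct; max() picks the
-- struct with the largest epoch per group, carrying the other fields along.
SELECT mx.id, mx.ts AS time_stamp, pix_id, coo_id
FROM (
  SELECT pix_id, coo_id,
         max(named_struct('epoch', unix_timestamp(time_stamp),
                          'id',    id,
                          'ts',    time_stamp)) AS mx
  FROM table2
  GROUP BY pix_id, coo_id
) m;
```

Like the collect_max approach, this runs as one aggregation job rather than an aggregate-plus-join.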
