Hadoop log analysis with Apache Pig

oug3syen · posted 2021-06-04 in Hadoop

I have a log whose lines look like this:

in24.inetnebr.com - - [01/Aug/1995:00:00:01 -0400] "GET /shuttle/missions/sts-68/news/sts-68-mcc-05.txt HTTP/1.0" 200 1839

The first column ( in24.inetnebr.com ) is the host, the second ( 01/Aug/1995:00:00:01 -0400 ) is the timestamp, and the third ( GET /shuttle/missions/sts-68/news/sts-68-mcc-05.txt HTTP/1.0 ) is the downloaded page.
How can I find, with Pig, the last two downloaded pages for each host?
Thanks a lot for your help!


56lgkhnf1#

I have solved the problem myself; for the record:

REGISTER piggybank.jar
DEFINE SUBSTRING org.apache.pig.piggybank.evaluation.string.SUBSTRING();

-- load the table registered in HCatalog
raw = LOAD 'nasa' USING org.apache.hcatalog.pig.HCatLoader();

-- cast the data so that string functions can be used on it
rawCasted = FOREACH raw GENERATE (chararray)host AS host, (chararray)xdate AS xdate, (chararray)address AS address;

-- cut the date out of the brackets, keeping only the columns that are used
rawParsed = FOREACH rawCasted GENERATE host, SUBSTRING(xdate, 1, 20) AS xdate, address;

-- make sure that incomplete rows are omitted
rawFiltered = FILTER rawParsed BY xdate IS NOT NULL;

-- cast the timestamp string to a real timestamp
analysisTable = FOREACH rawFiltered GENERATE host, ToDate(xdate, 'dd/MMM/yyyy:HH:mm:ss') AS xdate, address;

aTgrouped = GROUP analysisTable BY host;

resultsB = FOREACH aTgrouped {
    elems = ORDER analysisTable BY xdate DESC;
    two = LIMIT elems 2;   -- keep the last two pages

    fstB = ORDER two BY xdate DESC;
    fst = LIMIT fstB 1;    -- the last page

    sndB = ORDER two BY xdate ASC;
    snd = LIMIT sndB 1;    -- the page before the last one

    GENERATE FLATTEN(group), fst.address, snd.address; -- put the pages together
};

DUMP resultsB;
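
If the data is not registered in HCatalog, the same three fields can be extracted straight from the raw text file with Pig's built-in REGEX_EXTRACT_ALL. This is only a sketch: the path 'nasa' and the field names host/xdate/address are assumptions carried over from the script above.

-- Sketch: load the raw log lines directly instead of going through HCatalog
rawLines = LOAD 'nasa' USING TextLoader() AS (line:chararray);

-- host, bracketed timestamp, and quoted request, pulled out with one regex
parsed = FOREACH rawLines GENERATE FLATTEN(
    REGEX_EXTRACT_ALL(line, '^(\\S+) \\S+ \\S+ \\[([^\\]]+)\\] "([^"]+)".*')
) AS (host:chararray, xdate:chararray, address:chararray);

Note that the captured xdate still contains the timezone offset ( -0400 ), so it would still need trimming, or a 'dd/MMM/yyyy:HH:mm:ss Z' pattern, before being passed to ToDate.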
