datetime - How to merge rows based on timestamp intervals using Snowflake SQL?

eanckbw9 · posted 2021-07-24 in Java

I have a table that stores userid, sessionid, and datetime. The table records an entry whenever a user logs in on a device, storing the user, the session, and the datetime. A single userid and sessionid combination can have multiple entries. For example:

USERID | SESSIONID | DATE 
abcd   | 1234      | 2020-05-14 10:30:00 
abcd   | 1234      | 2020-05-14 10:32:00 
abcd   | 1234      | 2020-05-14 10:35:00 
abcd   | 1234      | 2020-05-14 11:32:00 
abcd   | 1234      | 2020-05-14 11:39:00

I'm trying to merge these rows into a new table based on a window from the initial datetime up to datetime + x, for the same session and user. If a date falls beyond datetime + x, the initial datetime moves forward. So if x is 30 minutes, any datetimes within 30 minutes of the start collapse into a single row. If a date is more than 30 minutes after the current start, it becomes the new start datetime, and you keep applying datetime + x until every date for that sessionid and userid combination has been processed.
The output for the example table should be:

USERID | SESSIONID | START_SESSION_DATE  | END_SESSION_DATE
abcd   | 1234      | 2020-05-14 10:30:00 | 2020-05-14 10:35:00 
abcd   | 1234      | 2020-05-14 11:32:00 | 2020-05-14 11:39:00

I'm not sure how to do this with SQL alone. I was going to write a stored procedure that does all the logic in JavaScript and then inserts into a new table in Snowflake, but that would be very slow and wouldn't scale. Thanks in advance.


mitkmikd · #1

This is a bit tricky. You can't just compare adjacent rows: you need to keep track of the start date of each run of rows so it can be compared with the following dates to decide when to split off a new group.
This calls for some kind of iterative process. In SQL, that is typically done with a recursive query, and fortunately Snowflake supports recursive queries.
Consider:

with recursive 
    data as (
        select 
            t.*, 
            row_number() over(partition by userid, sessionid order by date) rn
        from mytable t
    ),
    cte as (
        select 
            userid, 
            sessionid,
            date start_session_date,
            date end_session_date
        from data
        where rn = 1
        union all
        select
            c.userid,
            c.sessionid,
            case when d.date > dateadd(minute, 30, c.start_session_date)
                then d.date
                else c.start_session_date
            end,
            d.date
        from cte c
        inner join data d 
            on  d.userid = c.userid
            and d.sessionid = c.sessionid 
            and d.rn = c.rn + 1
    )
select 
    userid, 
    sessionid, 
    start_session_date, 
    max(end_session_date) end_session_date
from cte
group by userid, sessionid, start_session_date

The first common table expression (data) numbers the rows that share the same userid and sessionid with row_number(). The second CTE (cte) then walks those rows iteratively, starting from the first one and opening a new group whenever needed. The final step is the aggregation.
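To try this against the sample data from the question, a small test table can be set up first. This is only a sketch; the table name mytable and the columns userid, sessionid and date are taken from the query above:

create or replace table mytable (
    userid    varchar,
    sessionid varchar,
    date      timestamp
);

-- sample rows from the question; the string literals are implicitly cast to timestamp
insert into mytable (userid, sessionid, date) values
    ('abcd', '1234', '2020-05-14 10:30:00'),
    ('abcd', '1234', '2020-05-14 10:32:00'),
    ('abcd', '1234', '2020-05-14 10:35:00'),
    ('abcd', '1234', '2020-05-14 11:32:00'),
    ('abcd', '1234', '2020-05-14 11:39:00');

Running the recursive query above on this table should return the two rows shown in the question's expected output.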


ippsafx7 · #2

This approach makes use of Snowflake's WIDTH_BUCKET function and, based on some test data I generated, it appears to work correctly:

-- Get the min and max timestamps for each userid, sessionid
WITH T1 AS (    
SELECT USERID, SESSIONID,MIN(DATE_TIME) MIN_DATE, MAX(DATE_TIME) MAX_DATE
FROM TEST_DATA
GROUP BY USERID, SESSIONID
),
--Get the number of 'buckets' to divide each userid/sessionid's data into, based on the chosen time period
--(hardcoded here as MINUTE and 30)
T2 AS (
SELECT USERID, SESSIONID, MIN_DATE, MAX_DATE, CEIL(DATEDIFF(MINUTE, MIN_DATE, MAX_DATE)/30,0) NUM_BUCKETS    
FROM T1
 ),
--Assign each record to the appropriate time period bucket
--WIDTH_BUCKET takes numeric parameters hence the conversion to epoch_seconds
T3 AS (
SELECT TD.USERID, TD.SESSIONID, TD.DATE_TIME
,width_bucket(DATE_PART(EPOCH_SECONDS,TD.DATE_TIME), DATE_PART(EPOCH_SECONDS,T2.MIN_DATE), DATE_PART(EPOCH_SECONDS,T2.MAX_DATE), T2.NUM_BUCKETS) as "TIME_GROUP"
FROM TEST_DATA TD
INNER JOIN T2 ON TD.USERID = T2.USERID AND TD.SESSIONID = T2.SESSIONID
)
--Get the min and max timestamps for each userid, sessionid and bucket combination
SELECT USERID, SESSIONID, MIN(DATE_TIME), MAX(DATE_TIME)
FROM T3
GROUP BY USERID, SESSIONID, TIME_GROUP
order BY USERID, SESSIONID, TIME_GROUP
LIMIT 10
 ;
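For readers unfamiliar with WIDTH_BUCKET: it splits the range between the given min and max into the requested number of equal-width buckets and returns the 1-based bucket a value falls into. A minimal illustration, not part of the answer's query:

-- 4 equal buckets over [0, 100): 1 -> [0,25), 2 -> [25,50), 3 -> [50,75), 4 -> [75,100)
SELECT WIDTH_BUCKET(35, 0, 100, 4);   -- returns 2

In the answer's query the numeric inputs are the epoch seconds of each record and of the per-session min/max timestamps, with NUM_BUCKETS chosen so that each bucket is at most 30 minutes wide.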
