Converting HQL to Spark SQL

lmvvr0a8 asked on 2021-06-24, tagged Hive

I want to convert an HQL query to Spark.
I have the following query (it works in Hue using the Hive editor):

select reflect('java.util.UUID', 'randomUUID') as id,
    tt.employee,
    cast( from_unixtime(unix_timestamp(date_format(current_date(), 'dd/MM/yyyy HH:mm:ss'), 'dd/MM/yyyy HH:mm:ss')) as timestamp) as insert_date,
    collect_set(tt.employee_detail) as employee_details,
    collect_set(tt.emp_indication) as employees_indications,
    named_struct('employee_info', collect_set(tt.emp_info),
        'employee_mod_info', collect_set(tt.emp_mod_info),
        'employee_comments', collect_set(tt.emp_comment)) as emp_mod_details
from (
    select views_ctr.employee,
        if ( views_ctr.employee_details.so is not null, views_ctr.employee_details, null ) employee_detail,
        if ( views_ctr.employee_info.so is not null, views_ctr.employee_info, null ) emp_info,
        if ( views_ctr.employee_comments.so is not null, views_ctr.employee_comments, null ) emp_comment,
        if ( views_ctr.employee_mod_info.so is not null, views_ctr.employee_mod_info, null ) emp_mod_info,
        if ( views_ctr.emp_indications.so is not null, views_ctr.emp_indications, null ) emp_indication
    from
        ( select * from views_sta where emp_partition = 0 and employee is not null ) views_ctr
) tt
group by employee
distribute by employee

First, I tried running it through spark.sql as follows:

sparkSession.sql(
  """
    |select reflect('java.util.UUID', 'randomUUID') as id,
    |    tt.employee,
    |    cast( from_unixtime(unix_timestamp(date_format(current_date(), 'dd/MM/yyyy HH:mm:ss'), 'dd/MM/yyyy HH:mm:ss')) as timestamp) as insert_date,
    |    collect_set(tt.employee_detail) as employee_details,
    |    collect_set(tt.emp_indication) as employees_indications,
    |    named_struct('employee_info', collect_set(tt.emp_info),
    |        'employee_mod_info', collect_set(tt.emp_mod_info),
    |        'employee_comments', collect_set(tt.emp_comment)) as emp_mod_details
    |from (
    |    select views_ctr.employee,
    |        if ( views_ctr.employee_details.so is not null, views_ctr.employee_details, null ) employee_detail,
    |        if ( views_ctr.employee_info.so is not null, views_ctr.employee_info, null ) emp_info,
    |        if ( views_ctr.employee_comments.so is not null, views_ctr.employee_comments, null ) emp_comment,
    |        if ( views_ctr.employee_mod_info.so is not null, views_ctr.employee_mod_info, null ) emp_mod_info,
    |        if ( views_ctr.emp_indications.so is not null, views_ctr.emp_indications, null ) emp_indication
    |    from
    |        ( select * from views_sta where emp_partition = 0 and employee is not null ) views_ctr
    |) tt
    |group by employee
    |distribute by employee
  """.stripMargin)

But I get the following exception:

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: org.apache.spark.unsafe.types.UTF8String$IntWrapper - object not serializable (class: org.apache.spark.unsafe.types.UTF8String$IntWrapper, value: org.apache.spark.unsafe.types.UTF8String$IntWrapper@30cfd641)
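
To narrow down which part of the statement triggers this, the query can be run in two steps, registering the inner subquery as a temporary view first. A minimal sketch, assuming sparkSession is an active SparkSession and views_sta is already registered as a table or view; the outer SQL is abbreviated to a couple of the original columns:

// Step 1: materialize the partition-pruned inner subquery as a temp view,
// reusing the alias views_ctr from the original query.
sparkSession.sql(
  """
    |select * from views_sta
    |where emp_partition = 0 and employee is not null
  """.stripMargin).createOrReplaceTempView("views_ctr")

// Step 2: run the outer aggregation against the temp view. Only a couple of
// columns from the original statement are shown here.
val result = sparkSession.sql(
  """
    |select employee,
    |       collect_set(if(employee_details.so is not null, employee_details, null)) as employee_details
    |from views_ctr
    |group by employee
  """.stripMargin)
result.show(false)

Running the outer aggregation column by column this way shows which expression first raises the exception.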
If I run it without the collect_set function it works, so could it be failing because of the struct column types in my table?
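
To check that hypothesis in isolation, collect_set can be run over a small synthetic struct column without touching views_sta. A minimal sketch, assuming sparkSession is an active SparkSession; the column names and values here are invented for the experiment:

import org.apache.spark.sql.functions.{col, collect_set, struct}

// Build a tiny DataFrame with a struct column and aggregate it with
// collect_set, mirroring what the real query does with employee_details
// and the other struct columns.
val probe = sparkSession
  .createDataFrame(Seq((1, "a", 10), (1, "b", 20), (2, "c", 30)))
  .toDF("employee", "so", "value")
  .select(col("employee"), struct(col("so"), col("value")).as("employee_detail"))

probe.groupBy("employee")
  .agg(collect_set(col("employee_detail")).as("employee_details"))
  .show(false)

If this small aggregation fails with the same NotSerializableException, the problem lies with collect_set over struct columns rather than with the rest of the query.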
How can I write this HQL query in Spark / fix the exception?
