Reading a Hive table with non-string partitions in Pig

mo49yndu · asked 2021-05-29 · Hadoop

I am trying to read data from a Hive table using Pig. The details:
Hive version 1.1
Pig 0.12
Hadoop 2.6.0
Cloudera distribution 5.4.4
Hive table schema:

map <string, string>
yyyy int
mm int
dd int

Partitions are yyyy(int), mm(int), dd(int)
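
For context, the table's layout corresponds roughly to the following DDL (a sketch only; the name of the map column is not given in the question, so props is assumed, and the storage format is omitted):

CREATE TABLE dbname.tablename (
    props map<string, string>    -- column name assumed; only the type appears above
)
PARTITIONED BY (
    yyyy int,
    mm int,
    dd int
);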

Pig code:

input_data = LOAD 'dbname.tablename'
             USING org.apache.hive.hcatalog.pig.HCatLoader()
             ;

input_data_f = FILTER input_data BY yyyy == 2016 AND
                                      mm == 7 AND
                                      dd == 19
                                      ;

rmf input_data_dump;
STORE input_data_f INTO 'input_data_dump';

Command used to run it: pig -useHCatalog -f ./read_input.pig. I get the following error:

Error:
Pig Stack Trace
---------------
ERROR 2017: Internal error creating job configuration.

org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException: ERROR 2017: Internal error creating job configuration.
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:873)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:298)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
        at org.apache.pig.PigServer.launchPlan(PigServer.java:1334)
        at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1319)
        at org.apache.pig.PigServer.execute(PigServer.java:1309)
        at org.apache.pig.PigServer.executeBatch(PigServer.java:387)
        at org.apache.pig.PigServer.executeBatch(PigServer.java:365)
        at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
        at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
        at org.apache.pig.Main.run(Main.java:478)
        at org.apache.pig.Main.main(Main.java:156)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: MetaException(message:Filtering is supported only on partition keys of type string)
        at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:97)
        at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:61)
        at org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:125)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:498)
        ... 19 more
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_by_filter_result$get_partitions_by_filter_resultStandardScheme.read(ThriftHiveMetastore.java)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_by_filter_result$get_partitions_by_filter_resultStandardScheme.read(ThriftHiveMetastore.java)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_by_filter_result.read(ThriftHiveMetastore.java)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partitions_by_filter(ThriftHiveMetastore.java:2132)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partitions_by_filter(ThriftHiveMetastore.java:2116)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1047)
        at org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:113)
        at org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
        at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
        ... 22 more

Searching online led me to https://issues.apache.org/jira/browse/hive-7164
Is setting hive.metastore.integral.jdo.pushdown to true in hive-site.xml the only solution? This is a corporate setup, so I am not sure whether I can make changes to hive-site.xml, and if I ask the admins to make the change, will there be any side effects?
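
For reference, if the admins were willing to make that change, the property would go into hive-site.xml in the usual Hive configuration format, roughly like this (a sketch only; whether the setting is acceptable on a shared cluster is for the admins to judge):

<property>
  <name>hive.metastore.integral.jdo.pushdown</name>
  <value>true</value>
</property>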
I tried the following:
Attempt 1

set hive.metastore.integral.jdo.pushdown true;

input_data = LOAD 'dbname.tablename'
             USING org.apache.hive.hcatalog.pig.HCatLoader()
             ;

input_data_f = FILTER input_data BY yyyy == 2016 AND
                                      mm == 7 AND
                                      dd == 19
                                      ;

STORE input_data_f INTO 'input_data_dump';

In the logs I see:

org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, DuplicateForEachColumnRewrite, GroupByConstParallelSetter, ImplicitSplitInserter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NewPartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier, PartitionFilterOptimizer]}

Attempt 2

set hive.metastore.integral.jdo.pushdown true;
set pig.exec.useOldPartitionFilterOptimizer true;

input_data = LOAD 'dbname.tablename'
             USING org.apache.hive.hcatalog.pig.HCatLoader()
             ;

input_data_f = FILTER input_data BY yyyy == 2016;
input_data_f1 = FILTER input_data_f BY mm == 7;
input_data_f2 = FILTER input_data_f1 BY dd == 19;

STORE input_data_f2 INTO 'input_data_dump';

In the logs I see:

org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, DuplicateForEachColumnRewrite, GroupByConstParallelSetter, ImplicitSplitInserter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier, NewPartitionFilterOptimizer]}

Attempt 3

set pig.exec.useOldPartitionFilterOptimizer true;

input_data = LOAD 'dbname.tablename'
             USING org.apache.hive.hcatalog.pig.HCatLoader()
             ;

input_data_f = FILTER input_data BY yyyy == 2016;
input_data_f1 = FILTER input_data_f BY mm == 7;
input_data_f2 = FILTER input_data_f1 BY dd == 19;

STORE input_data_f2 INTO 'input_data_dump';

In the logs I see:

org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, DuplicateForEachColumnRewrite, GroupByConstParallelSetter, ImplicitSplitInserter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier, NewPartitionFilterOptimizer]}

With all of the above attempts I still get the same error.
Thanks for any help.


wn9m85ua (answer 1)

Update:
There are cases where the partition filter is not pushed into the loader:
In Pig 0.12.0, Pig pushes only the first filter to the loader. You will get the same results, but it will cost you performance. To work around this, use a single FILTER statement covering all the partition keys, or set pig.exec.useOldPartitionFilterOptimizer=true (see the known issues for the 0.12 release for details).
For properties specific to a Pig script, you can use one of the following options; a command-line sketch follows the list:
- the pig.properties file (add the directory containing the pig.properties file to the classpath)
- the -D command-line option with a Pig property (pig -Dpig.tmpfilecompression=true)
- the -P command-line option with a properties file (pig -P mypig.properties)
- the set command (set pig.exec.nocombiner true) directly in the Pig script
For more details about properties, see the Pig documentation.
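
As a sketch, the -D form applied to the script from the question would look like this (the property name pig.exec.useOldPartitionFilterOptimizer comes from the known-issues note above; -D properties must appear before the other Pig arguments):

pig -Dpig.exec.useOldPartitionFilterOptimizer=true -useHCatalog -f ./read_input.pig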
Test: casting the partition keys

$ hadoop version
Hadoop 2.6.0-cdh5.7.0

$ pig -version
Apache Pig version 0.12.0-cdh5.7.0 (rexported) 

$ cat pig_test1
-- set hive.metastore.integral.jdo.pushdown true;
input_data = LOAD 'cards.props'
             USING org.apache.hive.hcatalog.pig.HCatLoader()
             ;

input_data_f = FILTER input_data BY (chararray)yyyy == '2106' AND
                                     (chararray)mm == '8' AND
                                      (chararray)dd == '4'
                                      ;
dump input_data_f;
2016-08-04 17:15:54,541 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
([1#test1],2106,8,4)
([2#test2],2106,8,4)
([3#test3],2106,8,4)
hive> select * from props;
OK
{"1":"test1"}   2106    8   4
{"2":"test2"}   2106    8   4
{"3":"test3"}   2106    8   4
