CqlStorage() cannot handle larger record counts in Pig (CqlStorage() throws an exception in Pig)

2exbekwf · posted 2021-06-21 · in Pig

I am using DSE 3.1.2, and I am using Pig to store some preprocessed results into CQL. I created a table and ran my script. It handles a small amount of data fine, but when I add more records they are not all stored in Cassandra; only about 90% of the output ends up in Cassandra.
Here is my script:

SET default_parallel 10;
result = foreach PreprocUDF generate Primary1,Primary2,col3,col4;
finalresult = foreach result generate TOTUPLE(TOTUPLE('Primary1',PreprocUDF::Primary1),TOTUPLE('Primary2',PreprocUDF::Primary2)),TOTUPLE(PreprocUDF::col3,PreprocUDF::col4);

store finalresult into 'cql://conflux/tbl_test?output_query=update+conflux.tbl_test+set+col3+%3D+%3F+,col4+%3D+%3F' using CqlStorage();
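As a side note on the `store` line above: the `output_query` parameter is just the percent-encoded form of a CQL `UPDATE` statement (spaces become `+`, `=` becomes `%3D`, `?` becomes `%3F`). A quick sketch of producing that encoding in Python; the helper name is mine, not part of CqlStorage:

```python
from urllib.parse import quote_plus

def encode_output_query(cql: str) -> str:
    # Percent-encode a CQL statement for CqlStorage's output_query parameter:
    # spaces become '+', '=' becomes %3D, '?' becomes %3F; commas are left
    # literal, matching the URL used in the script above.
    return quote_plus(cql, safe=",")

print(encode_output_query("update conflux.tbl_test set col3 = ? ,col4 = ?"))
# -> update+conflux.tbl_test+set+col3+%3D+%3F+,col4+%3D+%3F
```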

Now I get the following error, and only 90% of the records get written to Cassandra:

ERROR - 2014-04-29 01:53:49.590; org.apache.hadoop.security.UserGroupInformation; PriviledgedActionException as:sarrajen cause:java.io.IOException: java.io.IOException: InvalidRequestException(why:Expected 8 or 0 byte long (4))
WARN  - 2014-04-29 01:53:49.590; org.apache.hadoop.mapred.Child; Error running child
java.io.IOException: java.io.IOException: InvalidRequestException(why:Expected 8 or 0 byte long (4))
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:465)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:428)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:408)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:262)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:652)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:260)
Caused by: java.io.IOException: InvalidRequestException(why:Expected 8 or 0 byte long (4))
    at org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:248)
Caused by: InvalidRequestException(why:Expected 8 or 0 byte long (4))
    at org.apache.cassandra.thrift.Cassandra$execute_prepared_cql3_query_result.read(Cassandra.java:42694)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
    at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_prepared_cql3_query(Cassandra.java:1724)
    at org.apache.cassandra.thrift.Cassandra$Client.execute_prepared_cql3_query(Cassandra.java:1709)
    at org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:232)
INFO  - 2014-04-29 01:53:49.764; org.apache.hadoop.mapred.Task; Runnning cleanup for the task

If I manually insert the last record into Cassandra, it works fine. And if I store the final result into CFS, everything works:

store result into '/testop'
Output(s):
Successfully stored 56347 records in: "/testop"

I also tried dumping the data into CFS first and then loading it from CFS into the Cassandra DB, and that works for me. Please tell me where I went wrong. I created the table in CQL with a composite key, so I assume I don't need to supply any comparators or validators, since I already specified data types for the columns.

store result into '/testop'
x = load '/testop' as (Primary1:chararray,Primary2:long,col3:chararray,col4:long);
finalresult = foreach x generate TOTUPLE(TOTUPLE('Primary1',Primary1),TOTUPLE('Primary2',Primary2)),TOTUPLE(col3,col4);

store finalresult into 'cql://conflux/tbl_test?output_query=update+conflux.tbl_test+set+col3+%3D+%3F+,col4+%3D+%3F' using CqlStorage();

This version works. Please tell me where I am going wrong.


qybjjes1 · answer #1

Changing the data type in Cassandra from bigint to varint resolved the issue described above.


mlnl4t2r · answer #2

The cause of the error `InvalidRequestException(why:Expected 8 or 0 byte long (4))` is that the input data for a long (bigint) field is in the wrong format: Cassandra received 4 bytes where an 8-byte long was expected, so some field in your input contains bad data.
You can check your UDF to see whether it prepares the data in the correct format.
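This also explains why switching the column to varint (as in the first answer) makes the error go away: Cassandra's bigint wire format must be exactly 8 bytes, while varint accepts a minimal-length two's-complement encoding of any size, so a value serialized in 4 bytes still validates. A rough Python illustration of the two formats; the helper names are mine, and the encodings are assumed to mirror Cassandra's serialization:

```python
import struct

def encode_bigint(v: int) -> bytes:
    # Cassandra bigint: fixed-width 8-byte big-endian signed integer.
    return struct.pack(">q", v)

def encode_varint(v: int) -> bytes:
    # Cassandra varint: minimal-length big-endian two's-complement bytes.
    length = max(1, (v.bit_length() + 8) // 8)
    return v.to_bytes(length, "big", signed=True)

value = 56347  # the record count from the question, reused here as sample data
print(len(encode_bigint(value)))      # 8 bytes: what bigint validation expects
print(len(encode_varint(value)))      # 3 bytes: fine for varint
print(len(struct.pack(">i", value)))  # 4 bytes: the "(4)" rejected by the bigint column
```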
