I am trying to read data from Bigtable in Google Cloud Dataproc. Below is the code I am using to read data from Bigtable.
import com.google.cloud.bigtable.dataflow.CloudBigtableIO;
import com.google.cloud.bigtable.dataflow.CloudBigtableScanConfiguration;
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.Read;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;

PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
options.setRunner(BlockingDataflowPipelineRunner.class);

// Scan that returns only the first key-value of each row.
Scan scan = new Scan();
scan.setFilter(new FirstKeyOnlyFilter());

Pipeline p = Pipeline.create(options);
p.apply(Read.from(CloudBigtableIO.read(new CloudBigtableScanConfiguration.Builder()
        .withProjectId("xxxxxxxx").withZoneId("xxxxxxx")
        .withClusterId("xxxxxx").withTableId("xxxxx").withScan(scan).build())))
 .apply(ParDo.named("Reading data from big table").of(new DoFn<Result, Mutation>() {
     @Override
     public void processElement(DoFn<Result, Mutation>.ProcessContext arg0) throws Exception {
         System.out.println("Inside printing");
         if (arg0 == null) {
             System.out.println("arg0 is null");
         } else {
             System.out.println("arg0 is not null");
             System.out.println(arg0.element());
         }
     }
 }));
p.run();
Whenever I call arg0.element() in my method, I get the following error:
2017-03-21T12:29:28.884Z: Error: (deec5a839a59cbca): java.lang.ArrayIndexOutOfBoundsException: 12338
at org.apache.hadoop.hbase.KeyValue.keyToString(KeyValue.java:1231)
at org.apache.hadoop.hbase.KeyValue.keyToString(KeyValue.java:1190)
at com.google.bigtable.repackaged.com.google.cloud.hbase.adapters.read.RowCell.toString(RowCell.java:234)
at org.apache.hadoop.hbase.client.Result.toString(Result.java:804)
at java.lang.String.valueOf(String.java:2994)
at java.io.PrintStream.println(PrintStream.java:821)
at com.slb.StarterPipeline$2.processElement(StarterPipeline.java:102)
Can someone tell me what I am doing wrong?
1 Answer
Unfortunately, this is a known issue. We have fixed the underlying implementation and expect to release a new version of the client in the next week or so. In the meantime, I suggest changing this line:
System.out.println(arg0.element());
to something like: System.out.println(Bytes.toStringBinary(arg0.element().getRow()));
Sorry for the trouble.
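For reference, here is a minimal sketch of the DoFn with that workaround applied. It assumes org.apache.hadoop.hbase.util.Bytes from the HBase client already on the question's classpath; the rest mirrors the question's code and is untested.

import org.apache.hadoop.hbase.util.Bytes;

new DoFn<Result, Mutation>() {
    @Override
    public void processElement(DoFn<Result, Mutation>.ProcessContext arg0) throws Exception {
        // Print only the row key. Implicitly calling Result.toString() via
        // println is what triggers the ArrayIndexOutOfBoundsException in the
        // affected client versions, so avoid stringifying the whole Result.
        System.out.println(Bytes.toStringBinary(arg0.element().getRow()));
    }
}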