Java: inserting into a Cassandra table with a composite primary key from a Hadoop reduce

Asked by yyyllmsg on 2021-06-03 in Hadoop

I'm running a MapReduce job with Apache Hadoop and Cassandra that reads from one Cassandra table and writes its output to another.
I have jobs working that output to tables with a single-column primary key. For example, this table, which counts the number of each kind of word, has only one key.

CREATE TABLE word_count(
        word text,
        count int,
        PRIMARY KEY(word)
    ) WITH COMPACT STORAGE;

The associated reduce class looks roughly like this:

public static class ReducerToCassandra 
    extends Reducer<Text, IntWritable, ByteBuffer, List<Mutation>>
{
    public void reduce(Text word, Iterable<IntWritable> values, Context context) 
        throws IOException, InterruptedException
    {
        int sum = 0;
        for (IntWritable val : values){
            sum += val.get();
        }

        org.apache.cassandra.thrift.Column c 
                = new org.apache.cassandra.thrift.Column();
        c.setName(ByteBufferUtil.bytes("count"));
        c.setValue(ByteBufferUtil.bytes(sum));
        c.setTimestamp(System.currentTimeMillis());

        Mutation mutation = new Mutation();
        mutation.setColumn_or_supercolumn(new ColumnOrSuperColumn());
        mutation.column_or_supercolumn.setColumn(c);

        ByteBuffer keyByteBuffer = ByteBufferUtil.bytes(word.toString());
        context.write(keyByteBuffer, Collections.singletonList(mutation));
    }
}
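
For context, the single-key job is wired up roughly as in Cassandra's bundled word_count Hadoop example, using org.apache.cassandra.hadoop.ConfigHelper and ColumnFamilyOutputFormat; the sketch below is from memory, and the keyspace, host, port and partitioner are placeholders rather than my real settings:

// Sketch of the job setup for the single-key (Thrift / ColumnFamilyOutputFormat) case.
// Keyspace, host, port and partitioner below are placeholder values.
Configuration conf = new Configuration();
Job job = new Job(conf, "word_count");
job.setReducerClass(ReducerToCassandra.class);

// reduce emits (ByteBuffer row key, List<Mutation>)
job.setOutputKeyClass(ByteBuffer.class);
job.setOutputValueClass(List.class);
job.setOutputFormatClass(ColumnFamilyOutputFormat.class);

ConfigHelper.setOutputColumnFamily(conf, "my_keyspace", "word_count");
ConfigHelper.setOutputInitialAddress(conf, "localhost");
ConfigHelper.setOutputRpcPort(conf, "9160");
ConfigHelper.setOutputPartitioner(conf, "org.apache.cassandra.dht.Murmur3Partitioner");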

If I want to output an additional column, I just add another Mutation to the List<Mutation> that reduce already emits, but I can't work out how to output to a table whose new column is part of a composite primary key. For example, this table is the same as the one above, but it also indexes each word together with its publication hour.

CREATE TABLE word_count(
        word text,
        publication_hour bigint,
        count int,
        PRIMARY KEY(word, publication_hour)
    ) WITH COMPACT STORAGE;

I've tried a few different approaches, such as outputting a custom WritableComparable (containing a word and an hour) and updating the class and method signatures and the job configuration accordingly, but that makes reduce throw a ClassCastException when it tries to cast the custom WritableComparable to a ByteBuffer.
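
For reference, the custom key class is roughly the following shape (a sketch; apart from getHourLong(), the field and accessor names here are illustrative):

// Rough shape of the custom composite key; only getHourLong() is the real accessor,
// the other names are illustrative.
public static class WordHourPair implements WritableComparable<WordHourPair> {
    private Text word = new Text();
    private LongWritable hour = new LongWritable();

    public String getWord()   { return word.toString(); }
    public long getHourLong() { return hour.get(); }

    @Override
    public void write(DataOutput out) throws IOException {
        word.write(out);
        hour.write(out);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        word.readFields(in);
        hour.readFields(in);
    }

    @Override
    public int compareTo(WordHourPair other) {
        int cmp = word.compareTo(other.word);
        return cmp != 0 ? cmp : hour.compareTo(other.hour);
    }
}
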
I also tried building the composite row key with CompositeType's Builder:

public static class ReducerToCassandra 
    //              MappedKey     MappedValue  ReducedKey  ReducedValues
    extends Reducer<WordHourPair, IntWritable, ByteBuffer, List<Mutation>>
{
    //                 MappedKey                  Values with the key wordHourPair
    public void reduce(WordHourPair wordHourPair, Iterable<IntWritable> values, 
    Context context) 
        throws IOException, InterruptedException
    {
        int sum = 0;
        for (IntWritable val : values){
            sum += val.get();
        }
        long hour = wordHourPair.getHourLong();

        org.apache.cassandra.thrift.Column c 
            = new org.apache.cassandra.thrift.Column();
        c.setName(ByteBufferUtil.bytes("count"));
        c.setValue(ByteBufferUtil.bytes(sum));
        c.setTimestamp(System.currentTimeMillis());

        Mutation mutation = new Mutation();
        mutation.setColumn_or_supercolumn(new ColumnOrSuperColumn());
        mutation.column_or_supercolumn.setColumn(c);

        //New Code
        List<AbstractType<?>> keyTypes = new ArrayList<AbstractType<?>>(); 
        keyTypes.add(UTF8Type.instance);
        keyTypes.add(LongType.instance);
        CompositeType compositeKey = CompositeType.getInstance(keyTypes);

        Builder builder = new Builder(compositeKey);
        builder.add(ByteBufferUtil.bytes(wordHourPair.getWord())); // word component of the key
        builder.add(ByteBufferUtil.bytes(hour));

        ByteBuffer keyByteBuffer = builder.build();
        context.write(keyByteBuffer, Collections.singletonList(mutation));
    }
}

But this throws an IOException:
java.io.IOException: InvalidRequestException(why:String didn't validate.)
at org.apache.cassandra.hadoop.ColumnFamilyRecordWriter$RangeClient.run(ColumnFamilyRecordWriter.java:204)
Caused by: InvalidRequestException(why:String didn't validate.)
at org.apache.cassandra.thrift.Cassandra$batch_mutate_result$batch_mutate_resultStandardScheme.read(Cassandra.java:28232)
at org.apache.cassandra.thrift.Cassandra$batch_mutate_result$batch_mutate_resultStandardScheme.read(Cassandra.java:28218)
at org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:28152)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:1069)
at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:1055)
at org.apache.cassandra.hadoop.ColumnFamilyRecordWriter$RangeClient.run(ColumnFamilyRecordWriter.java:196)

This question, "cassandra cql3 composite key not written by hadoop reducer", seems to show the kind of code I'm looking for, but it calls context.write with parameters of type HashMap, ByteBuffer, and I can't see how to make context.write accept those arguments.
How can I get the data I want (a word + hour key and an int value) into that table?
Answer from qyyhg6bp:

The answer is to use Cassandra's CQL interface rather than the Thrift API.
I can now write to a table with a composite key by declaring my reduce class's output key/value classes as Map<String, ByteBuffer> and List<ByteBuffer>, and then building a Map for the composite key in which each map key (a String) is a column name and each map value (a ByteBuffer) is that column's value, converted with ByteBufferUtil.
For example, to write to a table defined like this:

CREATE TABLE foo (
    customer_id uuid,
    time timestamp,
    my_value int,
    PRIMARY KEY (customer_id, time)
)

I can write:

// customer_id is a uuid column, so the key value must be an actual UUID,
// not a String (a String here fails validation much like the error above)
UUID customerID = UUID.randomUUID();
long time = DateTime.now().getMillis();   // Joda-Time
int myValue = 1;

Map<String, ByteBuffer> key = new HashMap<String, ByteBuffer>();
key.put("customer_id", ByteBufferUtil.bytes(customerID));
key.put("time", ByteBufferUtil.bytes(time));

List<ByteBuffer> values = Collections.singletonList(ByteBufferUtil.bytes(myValue));

context.write(key, values);
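
For completeness, this is roughly how the reduce class declaration and job wiring change for the CQL path; it follows Cassandra's cql3 word_count example (org.apache.cassandra.hadoop.cql3.CqlOutputFormat and CqlConfigHelper), and the keyspace, table and host names are placeholders:

// The reducer now emits (Map<String, ByteBuffer> key columns, List<ByteBuffer> bound values).
public static class ReducerToCassandraCql
    extends Reducer<Text, IntWritable, Map<String, ByteBuffer>, List<ByteBuffer>>
{
    // reduce() builds the key map and value list as shown above
}

// Job configuration (placeholder keyspace/table/host).
job.setOutputKeyClass(Map.class);
job.setOutputValueClass(List.class);
job.setOutputFormatClass(CqlOutputFormat.class);

ConfigHelper.setOutputColumnFamily(job.getConfiguration(), "my_keyspace", "foo");
ConfigHelper.setOutputInitialAddress(job.getConfiguration(), "localhost");
ConfigHelper.setOutputPartitioner(job.getConfiguration(), "org.apache.cassandra.dht.Murmur3Partitioner");

// The '?' placeholders are bound, in order, from the List<ByteBuffer> passed to context.write();
// the key columns in the Map become the WHERE clause appended by the output format.
CqlConfigHelper.setOutputCql(job.getConfiguration(),
        "UPDATE my_keyspace.foo SET my_value = ?");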
