Unable to insert data into an HBase table using MapReduce

yrwegjxp  asked on 2021-06-09  in  HBase
Follow (0) | Answers (1) | Views (375)

I wrote a MapReduce job that reads data from a file and inserts it into an HBase table. The problem I'm facing is that only one record gets inserted into the table. I'm not sure whether it's the last record or some random one, since my input file is about 10 GB. Based on the logic I wrote, I'm certain that thousands of records should be inserted. I'm sharing only the reducer and driver code, because I'm fairly sure the problem lies there. Please see the code below:

public static class Reduce extends TableReducer<Text, Text, ImmutableBytesWritable> {

    @Override
    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {

        Set<Text> uniques = new HashSet<Text>();
        String vis = key.toString();
        String[] arr = vis.split(":");

        Put put = null;
        for (Text val : values) {
            if (uniques.add(val)) {
                put = new Put(arr[0].getBytes());
                put.add(Bytes.toBytes("cf"), Bytes.toBytes("column"), Bytes.toBytes(val.toString()));
            }
            context.write(new ImmutableBytesWritable(arr[0].getBytes()), put);
        }
    }
}

My driver class:

Configuration conf = HBaseConfiguration.create();
Job job = new Job(conf, "Blank");
job.setJarByClass(Class_name.class);

job.setMapperClass(Map.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);

job.setSortComparatorClass(CompositeKeyComprator.class);

Scan scan = new Scan();
scan.setCaching(500);
scan.setCacheBlocks(false);

job.setReducerClass(Reduce.class);
TableMapReduceUtil.initTableReducerJob(
        "Table_name",
        Reduce.class,
        job);

FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));

job.waitForCompletion(true);

After running the job, the console shows Reduce output records=73579, yet only one record appears in the table.

15/06/19 16:32:41 INFO mapred.JobClient: Job complete: job_201506181703_0020
15/06/19 16:32:41 INFO mapred.JobClient: Counters: 28
15/06/19 16:32:41 INFO mapred.JobClient:   Map-Reduce Framework
15/06/19 16:32:41 INFO mapred.JobClient:     Spilled Records=147158
15/06/19 16:32:41 INFO mapred.JobClient:     Map output materialized bytes=6941462
15/06/19 16:32:41 INFO mapred.JobClient:     Reduce input records=73579
15/06/19 16:32:41 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=7614308352
15/06/19 16:32:41 INFO mapred.JobClient:     Map input records=140543
15/06/19 16:32:41 INFO mapred.JobClient:     SPLIT_RAW_BYTES=417
15/06/19 16:32:41 INFO mapred.JobClient:     Map output bytes=6794286
15/06/19 16:32:41 INFO mapred.JobClient:     Reduce shuffle bytes=6941462
15/06/19 16:32:41 INFO mapred.JobClient:     Physical memory (bytes) snapshot=892702720
15/06/19 16:32:41 INFO mapred.JobClient:     Reduce input groups=1
15/06/19 16:32:41 INFO mapred.JobClient:     Combine output records=0
15/06/19 16:32:41 INFO mapred.JobClient:     Reduce output records=73579
15/06/19 16:32:41 INFO mapred.JobClient:     Map output records=73579
15/06/19 16:32:41 INFO mapred.JobClient:     Combine input records=0
15/06/19 16:32:41 INFO mapred.JobClient:     CPU time spent (ms)=10970
15/06/19 16:32:41 INFO mapred.JobClient:     Total committed heap usage (bytes)=829947904
15/06/19 16:32:41 INFO mapred.JobClient:   File Input Format Counters
15/06/19 16:32:41 INFO mapred.JobClient:     Bytes Read=204120920
15/06/19 16:32:41 INFO mapred.JobClient:   FileSystemCounters
15/06/19 16:32:41 INFO mapred.JobClient:     HDFS_BYTES_READ=204121337
15/06/19 16:32:41 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=14198205
15/06/19 16:32:41 INFO mapred.JobClient:     FILE_BYTES_READ=6941450
15/06/19 16:32:41 INFO mapred.JobClient:   Job Counters

When I write the reducer output to a file instead, I get the correct output, but not in the HBase table. Please let me know what I'm missing. Thanks in advance.

carvr3hs1#

You are inserting data into HBase using the same row key under the same column family and column qualifier. According to your counters you have only one group (Reduce input groups=1), so every Put targets the same cell, and each write overwrites the previous value. That is why there is only one row in the HBase table.
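To see why all 73,579 output records collapse into a single value, it helps to remember that HBase addresses a cell by (row key, column family, column qualifier); repeated writes to the same coordinates replace each other rather than accumulate. Below is a minimal pure-Java sketch (no HBase dependency; the class and numbers are illustrative, not the actual job) that models this addressing with a plain map:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of an HBase table: a cell is addressed by
// row key + column family + column qualifier. Writing many values
// to the same coordinates keeps only the last one.
class CellOverwriteDemo {
    static Map<String, String> table = new HashMap<>();

    static void put(String row, String family, String qualifier, String value) {
        // Same coordinates -> the new value replaces the old one,
        // just as repeated Puts to one HBase cell do (ignoring versions).
        table.put(row + "/" + family + ":" + qualifier, value);
    }

    public static void main(String[] args) {
        // Same row key and qualifier for every record, as in the reducer:
        // only one cell survives.
        for (int i = 0; i < 73579; i++) {
            put("row1", "cf", "column", "value-" + i);
        }
        System.out.println(table.size()); // prints 1

        // Distinct qualifiers (or distinct row keys): every record is kept.
        for (int i = 0; i < 1000; i++) {
            put("row1", "cf", "column-" + i, "value-" + i);
        }
        System.out.println(table.size()); // prints 1001
    }
}
```

The fix in the reducer, accordingly, is to make either the row key or the qualifier unique per record, e.g. derive the row key from the full composite key instead of `arr[0]` alone.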
