Hadoop MapReduce query

6ie5vjzr · asked on 2021-05-30 · in Hadoop

I am trying to use Hadoop MapReduce to compute the sum of the weights of all incoming edges for each node in a graph. The input is in .tsv format and looks like this:
src  tgt  weight
x    102  1
x    200  1
x    123  5
y    245  1
y    101  1
z    99   2
x    145  3
y    24   1
a    21   5
...
The expected output is:
src  sum(weight)
x    10
y    3
z    2
a    5
...
I used the WordCount example code from the Hadoop tutorial (http://www.cloudera.com/content/cloudera/en/documentation/hadoop-tutorial/cdh5/hadoop-tutorial/ht_wordcount1_source.html?scroll=topic_5_1) as a reference. I tried to adapt the code, but all my attempts were in vain.
I am fairly new to Java and Hadoop. I have shared my code below; please help me find what is wrong with it.
Thanks.
Code:

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class Task1 {

    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable value_parsed = new IntWritable();
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            Text keys = new Text();
            int sum;
            while (tokenizer.hasMoreTokens()) {
                tokenizer.nextToken();
                keys.set(tokenizer.nextToken());
                sum = Integer.parseInt(tokenizer.nextToken());
                output.collect(keys, new IntWritable(sum));
            }
        }
    }

    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(Task1.class);
        conf.setJobName("Task1");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
toiithl6 · answer #1

You have to modify your code slightly.

while (tokenizer.hasMoreTokens()) {
        tokenizer.nextToken();                         // this consumes the first column
        keys.set(tokenizer.nextToken());               // this is wrong -- you have to set the
                                                       // first column as the key, not the second
        sum = Integer.parseInt(tokenizer.nextToken()); // third column
        output.collect(keys, new IntWritable(sum));
    }
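
In other words, the fix is to use the first token as the key and skip the second. A corrected version of the loop might look like this (a sketch, assuming every data line has exactly three whitespace-separated fields):

    while (tokenizer.hasMoreTokens()) {
        keys.set(tokenizer.nextToken());               // first column (src) becomes the key
        tokenizer.nextToken();                         // skip the second column (tgt)
        sum = Integer.parseInt(tokenizer.nextToken()); // third column is the weight
        output.collect(keys, new IntWritable(sum));
    }

With that change the reducer groups by src, so for x it sums 1 + 1 + 5 + 3 = 10, which matches the expected output.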

Hope this helps.
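
As an aside, the code above uses the old org.apache.hadoop.mapred API. On a recent Hadoop release you may prefer the newer org.apache.hadoop.mapreduce API. Below is a minimal, self-contained sketch of the same job against that API; the class and variable names are illustrative, not taken from the original post:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WeightSum {

        public static class WeightMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private final Text src = new Text();
            private final IntWritable weight = new IntWritable();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Expect three whitespace-separated fields per line: src, tgt, weight.
                StringTokenizer tok = new StringTokenizer(value.toString());
                if (tok.countTokens() < 3) {
                    return; // skip malformed lines
                }
                src.set(tok.nextToken());      // first column (src) is the key
                tok.nextToken();               // skip the second column (tgt)
                try {
                    weight.set(Integer.parseInt(tok.nextToken())); // third column is the weight
                } catch (NumberFormatException e) {
                    return; // skip non-numeric rows, e.g. a header line
                }
                context.write(src, weight);
            }
        }

        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "WeightSum");
            job.setJarByClass(WeightSum.class);
            job.setMapperClass(WeightMapper.class);
            job.setCombinerClass(SumReducer.class); // safe: integer addition is associative
            job.setReducerClass(SumReducer.class);  // and commutative
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Reusing the reducer as the combiner, as your original code also does, is fine for this job precisely because the reduce operation is a plain sum.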
