How do I read a CSV file in map/reduce?

xxe27gdn · posted 2021-06-03 in Hadoop

I have a large comma-separated CSV file, about 6 GB in size. Below is my mapper function:

@Override
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
    String[] tokens = value.toString().split(",");

    String crimeType = tokens[5].trim();   // 6th column (index 5) holds the crime type, used as the key
    // int year = Integer.parseInt(tokens[17].trim()); // the year the crime happened

    int year = 2010;

    CrimeTypeKey crimeTypeYearKey = new CrimeTypeKey(crimeType, year);

    context.write(crimeTypeYearKey, ONE);
}

As you can see, I use ".split" to break each line into columns. I'd like to know how to use opencsv in this situation. Could someone show me an example? Many thanks.
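Here is a minimal sketch of how this could look, assuming opencsv 4.x or later is on the job's classpath, and reusing the CrimeTypeKey class and ONE counter from the code above. The key idea is that com.opencsv.CSVParser (unlike CSVReader, which wraps a whole stream) parses one line at a time, which matches how TextInputFormat hands records to map():

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import com.opencsv.CSVParser;
import com.opencsv.CSVParserBuilder;

public class CrimeMapper extends Mapper<LongWritable, Text, CrimeTypeKey, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);

    // CSVParser works on a single line at a time, which fits the
    // one-record-per-call contract of map().
    private CSVParser csvParser;

    @Override
    protected void setup(Context context) {
        // Build the parser once per mapper rather than once per record.
        csvParser = new CSVParserBuilder()
                .withSeparator(',')
                .build();
    }

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Unlike String.split(","), parseLine respects quoting, so a field
        // like "ROBBERY, ARMED" stays in one token instead of splitting in two.
        String[] tokens = csvParser.parseLine(value.toString());

        String crimeType = tokens[5].trim(); // 6th column (index 5): crime type
        int year = 2010;                     // hard-coded, as in the original code

        // CrimeTypeKey is the custom composite key from the question.
        context.write(new CrimeTypeKey(crimeType, year), ONE);
    }
}

One caveat: TextInputFormat splits the input on newlines, so this per-line approach assumes no quoted field contains an embedded line break; records spanning multiple lines would need a custom InputFormat.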

