Does the Spark Java API have classes like Hadoop's MultipleOutputs / FSDataOutputStream?

vaqhlq81 · posted 2021-05-29 in Hadoop

I'm trying to write certain records to a side output in the reduce phase, depending on the values of each key's records. In Hadoop MapReduce this can be done with code like the following:

import java.io.IOException;
import java.util.ArrayList;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MyReducer extends Reducer<Text, Text, Text, Text> {
  private FSDataOutputStream hdfsOutWriter; // side-output stream, one file per reduce task
  private String fileName;                  // output path prefix, set elsewhere

  @Override
  protected void setup(Context context) throws IOException, InterruptedException {
    super.setup(context);
    Configuration conf = context.getConfiguration();
    FileSystem fs = FileSystem.get(conf);
    int taskID = context.getTaskAttemptID().getTaskID().getId();
    hdfsOutWriter = fs.create(new Path(fileName + taskID), true);
  }

  @Override
  protected void reduce(Text key, Iterable<Text> value, Context context) throws IOException, InterruptedException {
    boolean isSpecificRecord = false;
    ArrayList<String> valueList = new ArrayList<String>();
    for (Text val : value) {
      String element = val.toString();
      if (filterFunction(element)) return; // drop this key group entirely
      if (specificFunction(element)) isSpecificRecord = true;
      valueList.add(element);
    }
    // filterFunction, specificFunction, anyFunction, anyFunction2 are placeholder helpers
    String returnValue = anyFunction(valueList);
    String specificInfo = anyFunction2(valueList);
    if (isSpecificRecord) hdfsOutWriter.writeBytes(key.toString() + "\t" + specificInfo);
    context.write(key, new Text(returnValue));
  }
}

I'd like to run this job on a Spark cluster. Can the Spark Java API do what the code above does?


zlhcx6iw #1

Just an idea of how to emulate it:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.TaskContext

yoursRDD.mapPartitions(iter => {
  // one output file per partition, named by the partition id
  val fs = FileSystem.get(new Configuration())
  val ds = fs.create(new Path("outfileName_" + TaskContext.get.partitionId))
  ds.writeBytes("Put your results")
  ds.close()
  iter
})
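
If you want to stay in the Java API, JavaRDD.mapPartitions is the equivalent. Below is a minimal sketch, not a tested implementation: it assumes a JavaRDD<String> called yoursRDD and reuses the specificFunction predicate from the question to decide which records go to the side file. Remember that mapPartitions is lazy, so nothing is written until an action (count, saveAsTextFile, ...) runs on the result.

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.spark.TaskContext;
import org.apache.spark.api.java.JavaRDD;

JavaRDD<String> mainOutput = yoursRDD.mapPartitions(iter -> {
  // one side-output file per partition, mirroring the per-task file in MapReduce
  FileSystem fs = FileSystem.get(new Configuration());
  FSDataOutputStream ds = fs.create(new Path("outfileName_" + TaskContext.get().partitionId()));
  List<String> passedThrough = new ArrayList<>();
  while (iter.hasNext()) {
    String element = iter.next();
    if (specificFunction(element)) {  // hypothetical predicate from the question
      ds.writeBytes(element + "\n");  // record for the side output
    }
    passedThrough.add(element);       // every record still flows to the main output
  }
  ds.close();
  return passedThrough.iterator();
});

Note that this buffers the whole partition in memory to rebuild the returned iterator; for large partitions, stream the main output as well instead of collecting it into a list.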
