This article collects a number of Java code examples for the org.apache.hadoop.mapreduce.RecordReader.getProgress
method and shows how RecordReader.getProgress is used in practice. The examples are drawn mainly from GitHub,
Stack Overflow, Maven, and similar platforms, extracted from selected open-source projects, and should serve as
useful references. Details of the RecordReader.getProgress method are as follows:
Package: org.apache.hadoop.mapreduce
Class: RecordReader
Method: getProgress
Description: The current progress of the record reader through its data, as a fraction between 0.0 and 1.0.
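Most of the project snippets below share one pattern: a RecordReader that wraps another reader and forwards getProgress() to it (often converting InterruptedException to IOException when the outer signature does not declare it). The following is a minimal, self-contained sketch of that pattern; the ForwardingRecordReader class and its delegate field are hypothetical names for illustration and are not taken from any of the projects below.

import java.io.IOException;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Illustrative wrapper: forwards every call, including getProgress(), to a delegate reader.
public class ForwardingRecordReader<K, V> extends RecordReader<K, V> {
  private final RecordReader<K, V> delegate;

  public ForwardingRecordReader(RecordReader<K, V> delegate) {
    this.delegate = delegate;
  }

  @Override
  public void initialize(InputSplit split, TaskAttemptContext context)
      throws IOException, InterruptedException {
    delegate.initialize(split, context);
  }

  @Override
  public boolean nextKeyValue() throws IOException, InterruptedException {
    return delegate.nextKeyValue();
  }

  @Override
  public K getCurrentKey() throws IOException, InterruptedException {
    return delegate.getCurrentKey();
  }

  @Override
  public V getCurrentValue() throws IOException, InterruptedException {
    return delegate.getCurrentValue();
  }

  @Override
  public float getProgress() throws IOException, InterruptedException {
    // Report the fraction (0.0 to 1.0) of the delegate's data consumed so far.
    return delegate.getProgress();
  }

  @Override
  public void close() throws IOException {
    delegate.close();
  }
}

The MapReduce framework polls getProgress() periodically to report task progress, so implementations should keep it cheap to compute. The project examples follow.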
Code example source: thinkaurelius/titan
@Override
public float getProgress() throws IOException, InterruptedException {
  return reader.getProgress();
}
Code example source: apache/hive
@Override public float getProgress() throws IOException {
  float progress = 0.0F;
  try {
    progress = recordReader.getProgress();
  } catch (InterruptedException e) {
    // This signature does not declare InterruptedException, so rethrow it as IOException.
    throw new IOException(e);
  }
  return progress;
}
Code example source: apache/hive
@Override
public float getProgress() throws IOException {
  if (realReader == null) {
    return 1f;
  } else {
    try {
      return realReader.getProgress();
    } catch (final InterruptedException e) {
      throw new IOException(e);
    }
  }
}
Code example source: apache/drill
@Override
public float getProgress() throws IOException {
  if (realReader == null) {
    return 1f;
  } else {
    try {
      return realReader.getProgress();
    } catch (final InterruptedException e) {
      throw new IOException(e);
    }
  }
}
Code example source: apache/tinkerpop
@Override
public float getProgress() throws IOException, InterruptedException {
  return this.recordReader.getProgress();
}
Code example source: apache/avro
// Fragments from a JUnit test: progress should be 0.0 before any records are read
// and 1.0 once all records have been consumed. The assertEquals(expected, actual, delta)
// wrappers are reconstructed here; only the argument lists appear in the extracted source.
assertEquals(0.0f, recordReader.getProgress(), 0.0f);
assertEquals(1.0f, recordReader.getProgress(), 0.0f);
Code example source: com.twitter.elephantbird/elephant-bird-core
@Override
public float getProgress() throws IOException, InterruptedException {
  if (recordReadersCount < 1) {
    return 1f;
  }
  if (totalSplitLengths == 0) {
    return 0f;
  }
  // Weight the current reader's progress by its split length, then add the
  // lengths of the splits that have already been fully consumed.
  long cur = currentRecordReader == null ?
      0L : (long) (currentRecordReader.getProgress() * splitLengths[currentRecordReaderIndex]);
  return 1.0f * (cur + cumulativeSplitLengths[currentRecordReaderIndex]) / totalSplitLengths;
}
Code example source: org.apache.beam/beam-sdks-java-io-hdfs
private Double getProgress() {
  try {
    return (double) currentReader.getProgress();
  } catch (IOException | InterruptedException e) {
    // Progress is best-effort here; report "unknown" (null) instead of failing.
    return null;
  }
}
Code example source: io.hops/hadoop-mapreduce-client-core
/**
 * Request progress from proxied RR.
 */
public float getProgress() throws IOException, InterruptedException {
  return rr.getProgress();
}
Code example source: org.apache.giraph/giraph-core
@Override public float getProgress() throws IOException,
    InterruptedException {
  return recordReader.getProgress();
}
Code example source: com.github.jiayuhan-it/hadoop-mapreduce-client-core
/**
 * Request progress from proxied RR.
 */
public float getProgress() throws IOException, InterruptedException {
  return rr.getProgress();
}
Code example source: io.hops/hadoop-mapreduce-client-core
/**
 * Report progress as the minimum of all child RR progress.
 */
public float getProgress() throws IOException, InterruptedException {
  float ret = 1.0f;
  for (RecordReader<K, ? extends Writable> rr : kids) {
    ret = Math.min(ret, rr.getProgress());
  }
  return ret;
}
Code example source: org.apache.hadoop/hadoop-mapred
/**
 * Report progress as the minimum of all child RR progress.
 */
public float getProgress() throws IOException, InterruptedException {
  float ret = 1.0f;
  for (RecordReader<K, ? extends Writable> rr : kids) {
    ret = Math.min(ret, rr.getProgress());
  }
  return ret;
}
Code example source: io.prestosql.hadoop/hadoop-apache
/**
 * Report progress as the minimum of all child RR progress.
 */
public float getProgress() throws IOException, InterruptedException {
  float ret = 1.0f;
  for (RecordReader<K, ? extends Writable> rr : kids) {
    ret = Math.min(ret, rr.getProgress());
  }
  return ret;
}
Code example source: io.hops/hadoop-mapreduce-client-core
/**
 * Return progress based on the amount of data processed so far.
 */
public float getProgress() throws IOException, InterruptedException {
  long subprogress = 0; // bytes processed in current split
  if (null != curReader) {
    // idx is always one past the current subsplit's true index.
    subprogress = (long) (curReader.getProgress() * split.getLength(idx - 1));
  }
  return Math.min(1.0f, (progress + subprogress) / (float) (split.getLength()));
}
Code example source: org.apache.pig/pig
@Override
public float getProgress() throws IOException, InterruptedException {
  long subprogress = 0; // bytes processed in current split
  if (null != curReader) {
    // idx is always one past the current subsplit's true index.
    subprogress = (long) (curReader.getProgress() * pigSplit.getLength(idx - 1));
  }
  return Math.min(1.0f, (progress + subprogress) / (float) (pigSplit.getLength()));
}
Code example source: org.apache.hadoop/hadoop-mapred
/**
 * Return progress based on the amount of data processed so far.
 */
public float getProgress() throws IOException, InterruptedException {
  long subprogress = 0; // bytes processed in current split
  if (null != curReader) {
    // idx is always one past the current subsplit's true index.
    subprogress = (long) (curReader.getProgress() * split.getLength(idx - 1));
  }
  return Math.min(1.0f, (progress + subprogress) / (float) (split.getLength()));
}
Code example source: org.apache.crunch/crunch-core
@Override
public float getProgress() throws IOException, InterruptedException {
  float curProgress = 0; // bytes processed in current split
  if (null != curReader) {
    curProgress = (float) (curReader.getProgress() * getCurLength());
  }
  return Math.min(1.0f, (progress + curProgress) / getOverallLength());
}
Code example source: io.prestosql.hadoop/hadoop-apache
/**
 * Return progress based on the amount of data processed so far.
 */
public float getProgress() throws IOException, InterruptedException {
  long subprogress = 0; // bytes processed in current split
  if (null != curReader) {
    // idx is always one past the current subsplit's true index.
    subprogress = (long) (curReader.getProgress() * split.getLength(idx - 1));
  }
  return Math.min(1.0f, (progress + subprogress) / (float) (split.getLength()));
}