This article collects code examples of the org.apache.spark.rdd.RDD.partitions method in Java and demonstrates how RDD.partitions is used in practice. The examples are drawn from selected open-source projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the RDD.partitions method:
Package: org.apache.spark.rdd.RDD
Class: RDD
Method: partitions
Description: none available
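As a minimal sketch of the method itself (the local Spark context, dataset, and partition count below are illustrative assumptions, not taken from the examples): partitions() returns the array of the RDD's partitions, so its length gives the partition count.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class PartitionsExample {
    public static void main(String[] args) {
        // Local Spark context for illustration only
        SparkConf conf = new SparkConf()
                .setAppName("PartitionsExample")
                .setMaster("local[2]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Parallelize with an explicit number of partitions (4 here)
        JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5, 6), 4);

        // partitions() returns Partition[]; its length is the partition count,
        // the same value reported by rdd.getNumPartitions()
        int numPartitions = rdd.rdd().partitions().length;
        System.out.println("Number of partitions: " + numPartitions); // prints 4

        sc.stop();
    }
}
```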
Code example source: DataSystemsLab/GeoSpark

public boolean spatialPartitioning(GridType gridType)
        throws Exception
{
    int numPartitions = this.rawSpatialRDD.rdd().partitions().length;
    spatialPartitioning(gridType, numPartitions);
    return true;
}
Code example source: org.apache.pig/pig

// Fragment: if the store operator's RDD has no partitions, record empty job stats and return
if (physicalOpRdds.get(poStore.getOperatorKey()).partitions().length == 0) {
    sparkStats.addJobStats(poStore, sparkOperator, NULLPART_JOB_ID, null, sparkContext);
    return;
}
Code example source: org.wso2.carbon.analytics/org.wso2.carbon.analytics.spark.core

private void writeDataFrameToDAL(DataFrame data) {
    if (this.preserveOrder) {
        logDebug("Inserting data with order preserved! Each partition will be written using separate jobs.");
        for (int i = 0; i < data.rdd().partitions().length; i++) {
            data.sqlContext().sparkContext().runJob(data.rdd(),
                new AnalyticsWritingFunction(this.tenantId, this.tableName, data.schema(),
                    this.globalTenantAccess, this.schemaString, this.primaryKeys, this.mergeFlag,
                    this.recordStore, this.recordBatchSize), CarbonScalaUtils.getNumberSeq(i, i + 1),
                false, ClassTag$.MODULE$.Unit());
        }
    } else {
        data.foreachPartition(new AnalyticsWritingFunction(this.tenantId, this.tableName, data.schema(),
            this.globalTenantAccess, this.schemaString, this.primaryKeys, this.mergeFlag,
            this.recordStore, this.recordBatchSize));
    }
}
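The WSO2 example above preserves insertion order by iterating over partitions().length and submitting one Spark job per partition index. The same iterate-one-partition-at-a-time idea can be sketched with the public JavaRDD API (this sketch uses collectPartitions rather than the internal runJob call, and the local context and data are illustrative assumptions):

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class OrderedPartitionProcessing {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("OrderedPartitionProcessing")
                .setMaster("local[2]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5, 6), 3);

        // Fetch and process one partition per pass so partition order is preserved;
        // collectPartitions takes explicit partition ids and returns their rows.
        for (int i = 0; i < rdd.rdd().partitions().length; i++) {
            List<Integer>[] parts = rdd.collectPartitions(new int[]{i});
            System.out.println("Partition " + i + ": " + parts[0]);
        }

        sc.stop();
    }
}
```

Trading a single job for one job per partition is slower, but it guarantees that partition i is fully written before partition i + 1 starts, which is the point of the preserveOrder branch above.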
Code example source: org.qcri.rheem/rheem-iejoin

int cnt2 = partCount * rdd0.rdd().partitions().length;