This article collects code examples of the Java method org.apache.calcite.tools.Frameworks.withPlanner() and shows how Frameworks.withPlanner() is used in practice. The examples are extracted from selected open-source projects found on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of the Frameworks.withPlanner() method are as follows:
Package: org.apache.calcite.tools
Class: Frameworks
Method: withPlanner
Description: Initializes a container, then calls user-specified code with a planner.
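Before the extracted examples, here is a minimal, self-contained sketch of a typical call. It is an illustration only, not taken from any of the projects below; the class name WithPlannerSketch and the printed message are arbitrary, and it assumes Calcite is on the classpath. The lambda implements Frameworks.PlannerAction<R>, receives the freshly initialized cluster, RelOptSchema and root SchemaPlus, and its return value becomes the return value of withPlanner():

import org.apache.calcite.plan.RelOptPlanner;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;

public class WithPlannerSketch {
  public static void main(String[] args) {
    // Build a config with an empty root schema; withPlanner uses it to
    // initialize the cluster and planner before invoking the action.
    FrameworkConfig config = Frameworks.newConfigBuilder()
        .defaultSchema(Frameworks.createRootSchema(true))
        .build();

    // The action runs inside the initialized container; here it simply
    // reports which planner implementation the container created.
    String plannerClass = Frameworks.withPlanner(
        (cluster, relOptSchema, rootSchema) -> {
          RelOptPlanner planner = cluster.getPlanner();
          return planner.getClass().getName();
        },
        config);

    System.out.println("Planner used by withPlanner(): " + plannerClass);
  }
}

Compare this with the Hive and Drill examples below, which pass a full CalcitePlannerAction and a HiveTypeSystemImpl-based config to the same overload.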
Code example source: apache/drill
/**
* Get optimized logical plan for the given QB tree in the semAnalyzer.
*
* @return
* @throws SemanticException
*/
RelNode logicalPlan() throws SemanticException {
RelNode optimizedOptiqPlan = null;
CalcitePlannerAction calcitePlannerAction = null;
if (this.columnAccessInfo == null) {
this.columnAccessInfo = new ColumnAccessInfo();
}
calcitePlannerAction = new CalcitePlannerAction(prunedPartitions, this.columnAccessInfo);
try {
optimizedOptiqPlan = Frameworks.withPlanner(calcitePlannerAction, Frameworks
.newConfigBuilder().typeSystem(new HiveTypeSystemImpl()).build());
} catch (Exception e) {
rethrowCalciteException(e);
throw new AssertionError("rethrowCalciteException didn't throw for " + e.getMessage());
}
return optimizedOptiqPlan;
}
Code example source: apache/hive
/**
* Get optimized logical plan for the given QB tree in the semAnalyzer.
*
* @return
* @throws SemanticException
*/
RelNode logicalPlan() throws SemanticException {
RelNode optimizedOptiqPlan = null;
CalcitePlannerAction calcitePlannerAction = null;
if (this.columnAccessInfo == null) {
this.columnAccessInfo = new ColumnAccessInfo();
}
calcitePlannerAction = new CalcitePlannerAction(
prunedPartitions,
ctx.getOpContext().getColStatsCache(),
this.columnAccessInfo);
try {
optimizedOptiqPlan = Frameworks.withPlanner(calcitePlannerAction, Frameworks
.newConfigBuilder().typeSystem(new HiveTypeSystemImpl()).build());
} catch (Exception e) {
rethrowCalciteException(e);
throw new AssertionError("rethrowCalciteException didn't throw for " + e.getMessage());
}
return optimizedOptiqPlan;
}
Code example source: Qihoo360/Quicksql
/**
* Initializes a container then calls user-specified code with a planner.
*
* @param action Callback containing user-specified code
* @return Return value from action
*/
public static <R> R withPlanner(final PlannerAction<R> action) {
FrameworkConfig config = newConfigBuilder() //
.defaultSchema(Frameworks.createRootSchema(true)).build();
return withPlanner(action, config);
}
Code example source: org.apache.calcite/calcite-core
/**
* Initializes a container then calls user-specified code with a planner.
*
* @param action Callback containing user-specified code
* @return Return value from action
*/
public static <R> R withPlanner(final PlannerAction<R> action) {
FrameworkConfig config = newConfigBuilder() //
.defaultSchema(Frameworks.createRootSchema(true)).build();
return withPlanner(action, config);
}
Code example source: com.facebook.presto.hive/hive-apache
/**
* Get Optimized AST for the given QB tree in the semAnalyzer.
*
* @return Optimized operator tree translated in to Hive AST
* @throws SemanticException
*/
ASTNode getOptimizedAST() throws SemanticException {
ASTNode optiqOptimizedAST = null;
RelNode optimizedOptiqPlan = null;
CalcitePlannerAction calcitePlannerAction = new CalcitePlannerAction(prunedPartitions);
try {
optimizedOptiqPlan = Frameworks.withPlanner(calcitePlannerAction, Frameworks
.newConfigBuilder().typeSystem(new HiveTypeSystemImpl()).build());
} catch (Exception e) {
rethrowCalciteException(e);
throw new AssertionError("rethrowCalciteException didn't throw for " + e.getMessage());
}
optiqOptimizedAST = ASTConverter.convert(optimizedOptiqPlan, topLevelFieldSchema);
return optiqOptimizedAST;
}
Code example source: com.facebook.presto.hive/hive-apache
/**
* Get Optimized Hive Operator DAG for the given QB tree in the semAnalyzer.
*
* @return Optimized Hive operator tree
* @throws SemanticException
*/
Operator getOptimizedHiveOPDag() throws SemanticException {
RelNode optimizedOptiqPlan = null;
CalcitePlannerAction calcitePlannerAction = new CalcitePlannerAction(prunedPartitions);
try {
optimizedOptiqPlan = Frameworks.withPlanner(calcitePlannerAction, Frameworks
.newConfigBuilder().typeSystem(new HiveTypeSystemImpl()).build());
} catch (Exception e) {
rethrowCalciteException(e);
throw new AssertionError("rethrowCalciteException didn't throw for " + e.getMessage());
}
RelNode modifiedOptimizedOptiqPlan = PlanModifierForReturnPath.convertOpTree(
introduceProjectIfNeeded(optimizedOptiqPlan), topLevelFieldSchema);
LOG.debug("Translating the following plan:\n" + RelOptUtil.toString(modifiedOptimizedOptiqPlan));
Operator<?> hiveRoot = new HiveOpConverter(this, conf, unparseTranslator, topOps,
conf.getVar(HiveConf.ConfVars.HIVEMAPREDMODE).equalsIgnoreCase("strict")).convert(modifiedOptimizedOptiqPlan);
RowResolver hiveRootRR = genRowResolver(hiveRoot, getQB());
opParseCtx.put(hiveRoot, new OpParseContext(hiveRootRR));
return genFileSinkPlan(getQB().getParseInfo().getClauseNames().iterator().next(), getQB(), hiveRoot);
}
Code example source: org.apache.calcite/calcite-core
private void ready() {
switch (state) {
case STATE_0_CLOSED:
reset();
}
ensure(State.STATE_1_RESET);
Frameworks.withPlanner(
(cluster, relOptSchema, rootSchema) -> {
Util.discard(rootSchema); // use our own defaultSchema
typeFactory = (JavaTypeFactory) cluster.getTypeFactory();
planner = cluster.getPlanner();
planner.setExecutor(executor);
return null;
},
config);
state = State.STATE_2_READY;
// If the user specified their own traitDefs instead of the default ones,
// first clear the default trait defs registered with the planner,
// then register the trait defs specified in traitDefs.
if (this.traitDefs != null) {
planner.clearRelTraitDefs();
for (RelTraitDef def : this.traitDefs) {
planner.addRelTraitDef(def);
}
}
}
Code example source: Qihoo360/Quicksql
private void ready() {
switch (state) {
case STATE_0_CLOSED:
reset();
}
ensure(State.STATE_1_RESET);
Frameworks.withPlanner(
(cluster, relOptSchema, rootSchema) -> {
Util.discard(rootSchema); // use our own defaultSchema
typeFactory = (JavaTypeFactory) cluster.getTypeFactory();
planner = cluster.getPlanner();
planner.setExecutor(executor);
return null;
},
config);
state = State.STATE_2_READY;
// If the user specified their own traitDefs instead of the default ones,
// first clear the default trait defs registered with the planner,
// then register the trait defs specified in traitDefs.
if (this.traitDefs != null) {
planner.clearRelTraitDefs();
for (RelTraitDef def : this.traitDefs) {
planner.addRelTraitDef(def);
}
}
}
Code example source: Qihoo360/Quicksql
@Test public void testAllPredicates() {
final Project rel = (Project) convertSql("select * from emp, dept");
final Join join = (Join) rel.getInput();
final RelOptTable empTable = join.getInput(0).getTable();
final RelOptTable deptTable = join.getInput(1).getTable();
Frameworks.withPlanner((cluster, relOptSchema, rootSchema) -> {
checkAllPredicates(cluster, empTable, deptTable);
return null;
});
}
Code example source: Qihoo360/Quicksql
/** Unit test for
* {@link org.apache.calcite.rel.metadata.RelMdPredicates#getPredicates(Join, RelMetadataQuery)}. */
@Test public void testPredicates() {
final Project rel = (Project) convertSql("select * from emp, dept");
final Join join = (Join) rel.getInput();
final RelOptTable empTable = join.getInput(0).getTable();
final RelOptTable deptTable = join.getInput(1).getTable();
Frameworks.withPlanner((cluster, relOptSchema, rootSchema) -> {
checkPredicates(cluster, empTable, deptTable);
return null;
});
}
Code example source: org.apache.calcite/calcite-core
/** Unit test for
* {@link org.apache.calcite.rel.metadata.RelMdCollation#project}
* and other helper functions for deducing collations. */
@Test public void testCollation() {
final Project rel = (Project) convertSql("select * from emp, dept");
final Join join = (Join) rel.getInput();
final RelOptTable empTable = join.getInput(0).getTable();
final RelOptTable deptTable = join.getInput(1).getTable();
Frameworks.withPlanner((cluster, relOptSchema, rootSchema) -> {
checkCollation(cluster, empTable, deptTable);
return null;
});
}
Code example source: org.apache.calcite/calcite-core
@Test public void testAllPredicates() {
final Project rel = (Project) convertSql("select * from emp, dept");
final Join join = (Join) rel.getInput();
final RelOptTable empTable = join.getInput(0).getTable();
final RelOptTable deptTable = join.getInput(1).getTable();
Frameworks.withPlanner((cluster, relOptSchema, rootSchema) -> {
checkAllPredicates(cluster, empTable, deptTable);
return null;
});
}
Code example source: org.apache.calcite/calcite-core
/** Unit test for
* {@link org.apache.calcite.rel.metadata.RelMdPredicates#getPredicates(Join, RelMetadataQuery)}. */
@Test public void testPredicates() {
final Project rel = (Project) convertSql("select * from emp, dept");
final Join join = (Join) rel.getInput();
final RelOptTable empTable = join.getInput(0).getTable();
final RelOptTable deptTable = join.getInput(1).getTable();
Frameworks.withPlanner((cluster, relOptSchema, rootSchema) -> {
checkPredicates(cluster, empTable, deptTable);
return null;
});
}
Code example source: Qihoo360/Quicksql
/** Unit test for
* {@link org.apache.calcite.rel.metadata.RelMdCollation#project}
* and other helper functions for deducing collations. */
@Test public void testCollation() {
final Project rel = (Project) convertSql("select * from emp, dept");
final Join join = (Join) rel.getInput();
final RelOptTable empTable = join.getInput(0).getTable();
final RelOptTable deptTable = join.getInput(1).getTable();
Frameworks.withPlanner((cluster, relOptSchema, rootSchema) -> {
checkCollation(cluster, empTable, deptTable);
return null;
});
}
Code example source: Qihoo360/Quicksql
/** Unit test for
* {@link org.apache.calcite.rel.metadata.RelMetadataQuery#getAverageColumnSizes(org.apache.calcite.rel.RelNode)},
* {@link org.apache.calcite.rel.metadata.RelMetadataQuery#getAverageRowSize(org.apache.calcite.rel.RelNode)}. */
@Test public void testAverageRowSize() {
final Project rel = (Project) convertSql("select * from emp, dept");
final Join join = (Join) rel.getInput();
final RelOptTable empTable = join.getInput(0).getTable();
final RelOptTable deptTable = join.getInput(1).getTable();
Frameworks.withPlanner((cluster, relOptSchema, rootSchema) -> {
checkAverageRowSize(cluster, empTable, deptTable);
return null;
});
}
Code example source: org.apache.calcite/calcite-core
/** Unit test for
* {@link org.apache.calcite.rel.metadata.RelMetadataQuery#getAverageColumnSizes(org.apache.calcite.rel.RelNode)},
* {@link org.apache.calcite.rel.metadata.RelMetadataQuery#getAverageRowSize(org.apache.calcite.rel.RelNode)}. */
@Test public void testAverageRowSize() {
final Project rel = (Project) convertSql("select * from emp, dept");
final Join join = (Join) rel.getInput();
final RelOptTable empTable = join.getInput(0).getTable();
final RelOptTable deptTable = join.getInput(1).getTable();
Frameworks.withPlanner((cluster, relOptSchema, rootSchema) -> {
checkAverageRowSize(cluster, empTable, deptTable);
return null;
});
}
Code example source: Qihoo360/Quicksql
/**
* Unit test for {@link org.apache.calcite.rel.externalize.RelJsonReader}.
*/
@Test public void testReader() {
String s =
Frameworks.withPlanner((cluster, relOptSchema, rootSchema) -> {
SchemaPlus schema =
rootSchema.add("hr",
new ReflectiveSchema(new JdbcTest.HrSchema()));
final RelJsonReader reader =
new RelJsonReader(cluster, relOptSchema, schema);
RelNode node;
try {
node = reader.read(XX);
} catch (IOException e) {
throw new RuntimeException(e);
}
return RelOptUtil.dumpPlan("", node, SqlExplainFormat.TEXT,
SqlExplainLevel.EXPPLAN_ATTRIBUTES);
});
assertThat(s,
isLinux("LogicalAggregate(group=[{0}], agg#0=[COUNT(DISTINCT $1)], agg#1=[COUNT()])\n"
+ " LogicalFilter(condition=[=($1, 10)])\n"
+ " LogicalTableScan(table=[[hr, emps]])\n"));
}
}
Code example source: org.apache.calcite/calcite-core
/**
* Unit test for {@link org.apache.calcite.rel.externalize.RelJsonReader}.
*/
@Test public void testReader() {
String s =
Frameworks.withPlanner((cluster, relOptSchema, rootSchema) -> {
SchemaPlus schema =
rootSchema.add("hr",
new ReflectiveSchema(new JdbcTest.HrSchema()));
final RelJsonReader reader =
new RelJsonReader(cluster, relOptSchema, schema);
RelNode node;
try {
node = reader.read(XX);
} catch (IOException e) {
throw new RuntimeException(e);
}
return RelOptUtil.dumpPlan("", node, SqlExplainFormat.TEXT,
SqlExplainLevel.EXPPLAN_ATTRIBUTES);
});
assertThat(s,
isLinux("LogicalAggregate(group=[{0}], agg#0=[COUNT(DISTINCT $1)], agg#1=[COUNT()])\n"
+ " LogicalFilter(condition=[=($1, 10)])\n"
+ " LogicalTableScan(table=[[hr, emps]])\n"));
}
}