This article collects code examples of the Java class org.antlr.runtime.tree.Tree and shows how the Tree class is used in practice. The examples are drawn from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they should be useful as references. Details of the Tree class:
Package path: org.antlr.runtime.tree
Class name: Tree
What does a tree look like? ANTLR has a number of support classes such as CommonTreeNodeStream that work on these kinds of trees. You don't have to make your trees implement this interface, but if you do, you'll be able to use more support code. NOTE: When constructing trees, ANTLR can build any kind of tree; it can even use Token objects as trees if you add a child list to your tokens. This is a tree node without any payload; just navigation and factory stuff.
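Before the examples, here is a minimal, self-contained sketch of the navigation methods that every snippet below relies on: getType(), getText(), getChildCount(), and getChild(int). The tree is built by hand and the token types are arbitrary placeholders; in real code the tree would come out of an ANTLR-generated parser.

import org.antlr.runtime.CommonToken;
import org.antlr.runtime.tree.CommonTree;
import org.antlr.runtime.tree.Tree;

public class TreeNavigationDemo {
    // Hypothetical token types, used only to label the hand-built nodes.
    static final int PLUS = 1;
    static final int NUM = 2;

    // Recursively render a tree in LISP-style notation using only the Tree interface.
    static String toLisp(Tree t) {
        if (t.getChildCount() == 0) {
            return t.getText();
        }
        StringBuilder sb = new StringBuilder("(").append(t.getText());
        for (int i = 0; i < t.getChildCount(); i++) {
            sb.append(' ').append(toLisp(t.getChild(i)));
        }
        return sb.append(')').toString();
    }

    public static void main(String[] args) {
        // Build a tiny tree for "1 + 2" by hand.
        CommonTree plus = new CommonTree(new CommonToken(PLUS, "+"));
        plus.addChild(new CommonTree(new CommonToken(NUM, "1")));
        plus.addChild(new CommonTree(new CommonToken(NUM, "2")));
        System.out.println(toLisp(plus)); // prints (+ 1 2)
    }
}

Note that Tree already ships a toStringTree() method with essentially this output; the hand-rolled version is only meant to make the traversal pattern explicit.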
Code example origin: apache/hive
String defaultPartitionName = HiveConf.getVar(conf, HiveConf.ConfVars.DEFAULTPARTITIONNAME);
Map<String, String> colTypes = new HashMap<String, String>();
for (FieldSchema fs : tab.getPartitionKeys()) {
  colTypes.put(fs.getName().toLowerCase(), fs.getType());
}
for (int childIndex = 0; childIndex < ast.getChildCount(); childIndex++) {
  Tree partSpecTree = ast.getChild(childIndex);
  if (partSpecTree.getType() != HiveParser.TOK_PARTSPEC) {
    continue;
  }
  HashSet<String> names = new HashSet<String>(partSpecTree.getChildCount());
  for (int i = 0; i < partSpecTree.getChildCount(); ++i) {
    CommonTree partSpecSingleKey = (CommonTree) partSpecTree.getChild(i);
    assert (partSpecSingleKey.getType() == HiveParser.TOK_PARTVAL);
    String key = stripIdentifierQuotes(partSpecSingleKey.getChild(0).getText()).toLowerCase();
    String operator = partSpecSingleKey.getChild(1).getText();
    ASTNode partValNode = (ASTNode) partSpecSingleKey.getChild(2);
    TypeCheckCtx typeCheckCtx = new TypeCheckCtx(null);
    names.add(key);
    // ... (rest of the original method elided in this excerpt)
  }
}
Code example origin: apache/hive
public static SubQueryType get(ASTNode opNode) throws SemanticException {
if(opNode == null) {
return SCALAR;
}
switch(opNode.getType()) {
// opNode's type is always either KW_EXISTS or KW_IN, never NOTEXISTS or NOTIN;
// to figure that out we need to check its grandparent's parent
case HiveParser.KW_EXISTS:
if(opNode.getParent().getParent().getParent() != null
&& opNode.getParent().getParent().getParent().getType() == HiveParser.KW_NOT) {
return NOT_EXISTS;
}
return EXISTS;
case HiveParser.TOK_SUBQUERY_OP_NOTEXISTS:
return NOT_EXISTS;
case HiveParser.KW_IN:
if(opNode.getParent().getParent().getParent() != null
&& opNode.getParent().getParent().getParent().getType() == HiveParser.KW_NOT) {
return NOT_IN;
}
return IN;
case HiveParser.TOK_SUBQUERY_OP_NOTIN:
return NOT_IN;
default:
throw new SemanticException(SemanticAnalyzer.generateErrorMessage(opNode,
"Operator not supported in SubQuery use."));
}
}
Code example origin: apache/nifi
private String getSelectedName(final Tree selectable) {
if (selectable.getChildCount() == 0) {
return selectable.getText();
} else if (selectable.getType() == DOT) {
return getSelectedName(selectable.getChild(0)) + "." + getSelectedName(selectable.getChild(1));
} else {
return selectable.getChild(selectable.getChildCount() - 1).getText();
}
}
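As a rough usage sketch (not from the NiFi sources): if you hand-build the DOT subtree for person.name with the ANTLR runtime, the recursion above concatenates the two leaf texts. DOT and IDENT below are placeholder token types, not NiFi's real grammar constants.

import org.antlr.runtime.CommonToken;
import org.antlr.runtime.tree.CommonTree;

final int DOT = 1;    // hypothetical token type
final int IDENT = 2;  // hypothetical token type

CommonTree dot = new CommonTree(new CommonToken(DOT, "."));
dot.addChild(new CommonTree(new CommonToken(IDENT, "person")));
dot.addChild(new CommonTree(new CommonToken(IDENT, "name")));
// getSelectedName(dot) recurses into both children and returns "person.name".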
Code example origin: apache/hive
private String poolPath(Tree ast) {
StringBuilder builder = new StringBuilder();
builder.append(unescapeIdentifier(ast.getText()));
for (int i = 0; i < ast.getChildCount(); ++i) {
// DOT is not affected
builder.append(unescapeIdentifier(ast.getChild(i).getText()));
}
return builder.toString();
}
Code example origin: apache/nifi
public static RecordPathSegment compile(final Tree pathTree, final RecordPathSegment root, final boolean absolute) {
if (pathTree.getType() == FUNCTION) {
return buildPath(pathTree, null, absolute);
}
RecordPathSegment parent = root;
for (int i = 0; i < pathTree.getChildCount(); i++) {
final Tree child = pathTree.getChild(i);
parent = RecordPathCompiler.buildPath(child, parent, absolute);
}
return parent;
}
Code example origin: apache/hive
if (child.getToken().getType() == HiveParser.TOK_INPUTFORMAT) {
  if (child.getChildCount() != 2) {
    throw new SemanticException("FileFormat should contain both input format and Serde");
  }
  inputFormatClassName = stripQuotes(child.getChild(0).getText());
  serDeClassName = stripQuotes(child.getChild(1).getText());
  inputInfo = true;
}
// ... (excerpt continues elsewhere in the same analyzer)
if (ast.getChild(2).getText().toLowerCase().equals("local")) {
  isLocal = true;
}
// ...
String fromPath = stripQuotes(fromTree.getText());
fromURI = initializeFromURI(fromPath, isLocal);
// ... (IOException | URISyntaxException handling elided)
if (ts.tableHandle.isView() || ts.tableHandle.isMaterializedView()) {
  throw new SemanticException(ErrorMsg.DML_AGAINST_VIEW.getMsg());
}
if (ts.tableHandle.isNonNative()) {
  throw new SemanticException(ErrorMsg.LOAD_INTO_NON_NATIVE.getMsg());
}
if (conf.getBoolVar(HiveConf.ConfVars.HIVECHECKFILEFORMAT)) {
  ensureFileFormatsMatch(ts, files, fromURI);
}
inputs.add(toReadEntity(new Path(fromURI)));
Code example origin: apache/hive
boolean testMode = conf.getBoolVar(HiveConf.ConfVars.HIVETESTMODE);
if (testMode) {
  tableName = conf.getVar(HiveConf.ConfVars.HIVETESTMODEPREFIX)
      + tableName;
}
// ...
ASTNode partspec_val = (ASTNode) partspec.getChild(i);
String val = null;
String colName = unescapeIdentifier(partspec_val.getChild(0).getText().toLowerCase());
if (partspec_val.getChildCount() < 2) { // DP in the form of T partition (ds, hr)
  if (allowDynamicPartitionsSpec) {
    // ... (dynamic partition handling elided)
  }
} else { // static value, in the form of T partition (ds="2010-03-03")
  val = stripQuotes(partspec_val.getChild(1).getText());
}
// ...
List<FieldSchema> parts = tableHandle.getPartitionKeys();
partSpec = new LinkedHashMap<String, String>(partspec.getChildCount());
for (FieldSchema fs : parts) {
  String partKey = fs.getName();
  // ...
}
int numStaPart = parts.size() - numDynParts;
if (numStaPart == 0 &&
    conf.getVar(HiveConf.ConfVars.DYNAMICPARTITIONINGMODE).equalsIgnoreCase("strict")) {
  throw new SemanticException(ErrorMsg.DYNAMIC_PARTITION_STRICT_MODE.getMsg());
}
// ... (a second loop over the partition keys, analogous to the one above, builds the final partSpec)
Code example origin: apache/hive
public static HashMap<String, String> getPartSpec(ASTNode partspec)
throws SemanticException {
if (partspec == null) {
return null;
}
HashMap<String, String> partSpec = new LinkedHashMap<String, String>();
for (int i = 0; i < partspec.getChildCount(); ++i) {
ASTNode partspec_val = (ASTNode) partspec.getChild(i);
String key = partspec_val.getChild(0).getText();
String val = null;
if (partspec_val.getChildCount() > 1) {
val = stripQuotes(partspec_val.getChild(1).getText());
}
partSpec.put(key.toLowerCase(), val);
}
return partSpec;
}
Code example origin: apache/hive
private void analyzeSwitchDatabase(ASTNode ast) throws SemanticException {
String dbName = unescapeIdentifier(ast.getChild(0).getText());
Database database = getDatabase(dbName, true);
ReadEntity dbReadEntity = new ReadEntity(database);
dbReadEntity.noLockNeeded();
inputs.add(dbReadEntity);
SwitchDatabaseDesc switchDatabaseDesc = new SwitchDatabaseDesc(dbName);
rootTasks.add(TaskFactory.get(new DDLWork(getInputs(), getOutputs(),
switchDatabaseDesc)));
}
Code example origin: apache/hive
private void analyzeAlterTableCompact(ASTNode ast, String tableName,
HashMap<String, String> partSpec) throws SemanticException {
String type = unescapeSQLString(ast.getChild(0).getText()).toLowerCase();
if (!type.equals("minor") && !type.equals("major")) {
throw new SemanticException(ErrorMsg.INVALID_COMPACTION_TYPE.getMsg());
}
LinkedHashMap<String, String> newPartSpec = null;
if (partSpec != null) {
newPartSpec = new LinkedHashMap<String, String>(partSpec);
}
HashMap<String, String> mapProp = null;
boolean isBlocking = false;
for(int i = 0; i < ast.getChildCount(); i++) {
switch(ast.getChild(i).getType()) {
case HiveParser.TOK_TABLEPROPERTIES:
mapProp = getProps((ASTNode) (ast.getChild(i)).getChild(0));
break;
case HiveParser.TOK_BLOCKING:
isBlocking = true;
break;
}
}
AlterTableSimpleDesc desc = new AlterTableSimpleDesc(
tableName, newPartSpec, type, isBlocking);
desc.setProps(mapProp);
rootTasks.add(TaskFactory.get(new DDLWork(getInputs(), getOutputs(), desc)));
}
Code example origin: apache/hive
boolean ifNotExists = ast.getChild(0).getType() == HiveParser.TOK_IFNOTEXISTS;
boolean isView = tab.isView();
validateAlterTableType(tab, AlterTableTypes.ADDPARTITION, expectView);
outputs.add(new WriteEntity(tab,
    /* ... */ WriteEntity.WriteType.DDL_SHARED));
int numCh = ast.getChildCount();
int start = ifNotExists ? 1 : 0;
// ... (loop and switch over each partition child elided; the partition-location case:)
if (isView) {
  throw new SemanticException("LOCATION clause illegal for view partition");
}
currentLocation = unescapeSQLString(child.getChild(0).getText());
inputs.add(toReadEntity(currentLocation));
break;
// default: ...
// ...
if (this.conf.getBoolVar(HiveConf.ConfVars.HIVESTATSAUTOGATHER)) {
  for (int index = 0; index < addPartitionDesc.getPartitionCount(); index++) {
    OnePartitionDesc desc = addPartitionDesc.getPartition(index);
    // ...
  }
}
// ... (if no valid partition spec was found:)
throw new SemanticException(ErrorMsg.NO_VALID_PARTN.getMsg());
// ...
inputs.addAll(driver.getPlan().getInputs());
Code example origin: apache/hive
Queue<Node> queue = new LinkedList<>();
queue.add(ast);
Map<HivePrivilegeObject, MaskAndFilterInfo> basicInfos = new LinkedHashMap<>();
while (!queue.isEmpty()) {
  ASTNode astNode = (ASTNode) queue.poll();
  if (astNode.getToken().getType() == HiveParser.TOK_TABREF) {
    int aliasIndex = 0;
    StringBuilder additionalTabInfo = new StringBuilder();
    for (int index = 1; index < astNode.getChildCount(); index++) {
      ASTNode ct = (ASTNode) astNode.getChild(index);
      if (ct.getToken().getType() == HiveParser.TOK_TABLEBUCKETSAMPLE
          || ct.getToken().getType() == HiveParser.TOK_TABLESPLITSAMPLE
          || ct.getToken().getType() == HiveParser.TOK_TABLEPROPERTIES) {
        additionalTabInfo.append(ctx.getTokenRewriteStream().toString(ct.getTokenStartIndex(),
            ct.getTokenStopIndex()));
      } else {
        aliasIndex = index;
      }
    }
    // ...
    if (aliasIndex != 0) {
      alias = unescapeIdentifier(astNode.getChild(aliasIndex).getText());
    } else {
      alias = getUnescapedUnqualifiedTableName(tableTree);
    }
    // ...
    if (table.isMaterializedView()) {
      for (String qName : table.getCreationMetadata().getTablesUsed()) {
        table = getTable(db, qName, tabNameToTabObject);
        if (table == null) {
          // ... (skip tables that no longer exist)
        }
      }
    }
    extractColumnInfos(table, colNames, new ArrayList<>());
    basicInfos.put(new HivePrivilegeObject(table.getDbName(), table.getTableName(), colNames), null);
  }
  // ... (children of astNode are enqueued here and the walk continues)
}
Code example origin: apache/hive
private void analyzeLockDatabase(ASTNode ast) throws SemanticException {
String dbName = unescapeIdentifier(ast.getChild(0).getText());
String mode = unescapeIdentifier(ast.getChild(1).getText().toUpperCase());
inputs.add(new ReadEntity(getDatabase(dbName)));
// Lock database operation is to acquire the lock explicitly, the operation
// itself doesn't need to be locked. Set the WriteEntity as WriteType:
// DDL_NO_LOCK here, otherwise it will conflict with Hive's transaction.
outputs.add(new WriteEntity(getDatabase(dbName), WriteType.DDL_NO_LOCK));
LockDatabaseDesc lockDatabaseDesc = new LockDatabaseDesc(dbName, mode,
HiveConf.getVar(conf, ConfVars.HIVEQUERYID));
lockDatabaseDesc.setQueryStr(ctx.getCmd());
DDLWork work = new DDLWork(getInputs(), getOutputs(), lockDatabaseDesc);
rootTasks.add(TaskFactory.get(work));
ctx.setNeedLockMgr(true);
}
Code example origin: apache/hive
if (ast.getChildCount() > 0) {
  repair = ast.getChild(0).getType() == HiveParser.KW_REPAIR;
  if (!repair) {
    tableName = getUnescapedName((ASTNode) ast.getChild(0));
    if (ast.getChildCount() > 1) {
      addPartitions = isMsckAddPartition(ast.getChild(1).getType());
      dropPartitions = isMsckDropPartition(ast.getChild(1).getType());
    }
  } else if (ast.getChildCount() > 1) {
    tableName = getUnescapedName((ASTNode) ast.getChild(1));
    if (ast.getChildCount() > 2) {
      addPartitions = isMsckAddPartition(ast.getChild(2).getType());
      dropPartitions = isMsckDropPartition(ast.getChild(2).getType());
    }
  }
}
// ...
List<Map<String, String>> specs = getPartitionSpecs(tab, ast);
if (repair && AcidUtils.isTransactionalTable(tab)) {
  outputs.add(new WriteEntity(tab, WriteType.DDL_EXCLUSIVE));
} else {
  outputs.add(new WriteEntity(tab, WriteEntity.WriteType.DDL_SHARED));
}
Code example origin: apache/hive
inputs.add(new ReadEntity(tab));
// ...
String name = colAst.getChild(0).getText().toLowerCase();
newCol.setName(unescapeIdentifier(name));
newCol.setComment(unescapeSQLString(colAst.getChild(2).getText()));
// ...
for (FieldSchema col : tab.getTTable().getPartitionKeys()) {
  if (col.getName().compareTo(newCol.getName()) == 0) {
    fFoundColumn = true;
  }
}
Code example origin: apache/hive
private void analyzeAlterTableBucketNum(ASTNode ast, String tblName,
HashMap<String, String> partSpec) throws SemanticException {
Table tab = getTable(tblName, true);
if (tab.getBucketCols() == null || tab.getBucketCols().isEmpty()) {
throw new SemanticException(ErrorMsg.ALTER_BUCKETNUM_NONBUCKETIZED_TBL.getMsg());
}
validateAlterTableType(tab, AlterTableTypes.ALTERBUCKETNUM);
inputs.add(new ReadEntity(tab));
int bucketNum = Integer.parseInt(ast.getChild(0).getText());
AlterTableDesc alterBucketNum = new AlterTableDesc(tblName, partSpec, bucketNum);
rootTasks.add(TaskFactory.get(new DDLWork(getInputs(), getOutputs(),
alterBucketNum)));
}
Code example origin: apache/hive
switch (child.getToken().getType()) { // earlier cases of the original switch elided
case HiveParser.TOK_TABLECOMMENT:
  comment = unescapeSQLString(child.getChild(0).getText());
  break;
case HiveParser.TOK_TABLEPROPERTIES:
  // ...
  break;
case HiveParser.TOK_TABLELOCATION:
  location = unescapeSQLString(child.getChild(0).getText());
  location = EximUtil.relativeToAbsolutePath(conf, location);
  inputs.add(toReadEntity(location));
  break;
case HiveParser.TOK_TABLESERIALIZER:
  child = (ASTNode) child.getChild(0);
  storageFormat.setSerde(unescapeSQLString(child.getChild(0).getText()));
  if (child.getChildCount() == 2) {
    readProps((ASTNode) (child.getChild(1).getChild(0)),
        storageFormat.getSerdeProps());
  }
  break;
}
// ...
if (null != db.getTable(dumpTable.getDbName(), dumpTable.getTableName(), false) && !ctx.isExplainSkipExecution()) {
  throw new SemanticException(ErrorMsg.TABLE_ALREADY_EXISTS.getMsg(dbDotTable));
}
if (ast.getToken().getType() == HiveParser.TOK_ALTERVIEW &&
    ast.getChild(1).getType() == HiveParser.TOK_QUERY) {
  isAlterViewAs = true;
  orReplace = true;
}
Code example origin: apache/hive
if (selExprList.getToken().getType() == HiveParser.TOK_SELECTDI
    && selExprList.getChildCount() == 1 && selExprList.getChild(0).getChildCount() == 1) {
  ASTNode node = (ASTNode) selExprList.getChild(0).getChild(0);
  if (node.getToken().getType() == HiveParser.TOK_ALLCOLREF) {
    RowResolver rr = this.relToHiveRR.get(srcRel);
    qbp.setSelExprForClause(detsClauseName, SemanticAnalyzer.genSelectDIAST(rr));
  }
}
// ...
if (selExprList.getToken().getType() == HiveParser.TOK_SELECTDI &&
    !qb.getAllWindowingSpecs().isEmpty()) {
  return null;
}
if (conf.getBoolVar(HiveConf.ConfVars.HIVEGROUPBYSKEW)
    && qbp.getDistinctFuncExprsForClause(detsClauseName).size() > 1) {
  throw new SemanticException(ErrorMsg.UNSUPPORTED_MULTIPLE_DISTINCTS.getMsg());
}
if (!HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVEMAPSIDEAGGREGATE)) {
  throw new SemanticException(ErrorMsg.HIVE_GROUPING_SETS_AGGR_NOMAPAGGR.getMsg());
}
RowResolver groupByInputRowResolver = this.relToHiveRR.get(srcRel);
RowResolver groupByOutputRowResolver = new RowResolver();
groupByOutputRowResolver.setIsExprResolver(true);
// ...
String aggName = SemanticAnalyzer.unescapeIdentifier(value.getChild(0).getText());
boolean isDistinct = value.getType() == HiveParser.TOK_FUNCTIONDI;
boolean isAllColumns = value.getType() == HiveParser.TOK_FUNCTIONSTAR;
Code example origin: apache/hive
private void analyzeAlterMaterializedViewRewrite(String fqMvName, ASTNode ast) throws SemanticException {
  switch (ast.getChild(0).getType()) {
  case HiveParser.TOK_REWRITE_ENABLED:
    enableFlag = true;
    break;
  // ... (remaining cases and materialized view lookup elided)
  }
  for (String tableName : materializedViewTable.getCreationMetadata().getTablesUsed()) {
    Table table = getTable(tableName, true);
    if (!AcidUtils.isTransactionalTable(table)) {
      // ... (error: rewriting requires transactional source tables)
    }
  }
  inputs.add(new ReadEntity(materializedViewTable));
  outputs.add(new WriteEntity(materializedViewTable, WriteEntity.WriteType.DDL_EXCLUSIVE));
  rootTasks.add(TaskFactory.get(new DDLWork(getInputs(), getOutputs(),
      alterMVDesc)));
}
Code example origin: apache/hive
private void parsePartitionSpec(ASTNode tableNode, LinkedHashMap<String, String> partSpec) throws SemanticException {
// get partition metadata if partition specified
if (tableNode.getChildCount() == 2) {
ASTNode partspec = (ASTNode) tableNode.getChild(1);
// partSpec is a mapping from partition column name to its value.
for (int j = 0; j < partspec.getChildCount(); ++j) {
ASTNode partspec_val = (ASTNode) partspec.getChild(j);
String val = null;
String colName = unescapeIdentifier(partspec_val.getChild(0)
.getText().toLowerCase());
if (partspec_val.getChildCount() < 2) { // DP in the form of T
// partition (ds, hr)
throw new SemanticException(
ErrorMsg.INVALID_PARTITION
.getMsg(" - Dynamic partitions not allowed"));
} else { // in the form of T partition (ds="2010-03-03")
val = stripQuotes(partspec_val.getChild(1).getText());
}
partSpec.put(colName, val);
}
}
}