Usage of the org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs() method, with code examples


This article collects Java code examples of the org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs() method and shows how it is used in practice. The examples are taken from selected open-source projects published on GitHub, Stack Overflow, and Maven, and should be useful references. Method details:
Package: org.apache.hadoop.hdfs.server.common
Class: Util
Method: stringCollectionAsURIs

About Util.stringCollectionAsURIs

Converts a collection of strings into a collection of URIs.
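To make the conversion concrete, here is a minimal standalone sketch of such a helper (an illustration written for this article, not the Hadoop source; the real method also logs a warning for scheme-less entries before falling back to a file: URI):

import java.io.File;
import java.net.URI;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class StringsToUris {
 // Hypothetical stand-in for Util.stringCollectionAsURIs: converts each
 // string to a URI, falling back to a file: URI when no scheme is given.
 static List<URI> stringCollectionAsURIs(Collection<String> names) {
  List<URI> uris = new ArrayList<>(names.size());
  for (String name : names) {
   URI u = URI.create(name);
   if (u.getScheme() == null) {
    // Scheme-less entries such as "/data/dfs/name2" are treated as
    // local paths and converted to absolute file: URIs.
    u = new File(name).toURI();
   }
   uris.add(u);
  }
  return uris;
 }

 public static void main(String[] args) {
  Collection<String> dirs =
    Arrays.asList("file:///data/dfs/name", "/data/dfs/name2");
  System.out.println(stringCollectionAsURIs(dirs));
  // e.g. [file:///data/dfs/name, file:/data/dfs/name2]
 }
}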

Code examples

Code example from: org.apache.hadoop/hadoop-hdfs (identical copies appear in ch.cern.hadoop/hadoop-hdfs and io.prestosql.hadoop/hadoop-apache)

/**
 * Returns edit directories that are shared between primary and secondary.
 * @param conf configuration
 * @return collection of edit directories from {@code conf}
 */
public static List<URI> getSharedEditsDirs(Configuration conf) {
 // don't use getStorageDirs here, because we want an empty default
 // rather than the dir in /tmp
 Collection<String> dirNames = conf.getTrimmedStringCollection(
   DFS_NAMENODE_SHARED_EDITS_DIR_KEY);
 return Util.stringCollectionAsURIs(dirNames);
}
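As a hypothetical call site (host names and cluster name are made up; in the Hadoop source this method lives on FSNamesystem):

import java.net.URI;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;

public class SharedEditsDemo {
 public static void main(String[] args) {
  Configuration conf = new Configuration();
  // DFS_NAMENODE_SHARED_EDITS_DIR_KEY resolves to this property name:
  conf.set("dfs.namenode.shared.edits.dir",
    "qjournal://node1:8485;node2:8485;node3:8485/mycluster");
  List<URI> dirs = FSNamesystem.getSharedEditsDirs(conf);
  System.out.println(dirs);
  // -> [qjournal://node1:8485;node2:8485;node3:8485/mycluster]
 }
}

Because the method deliberately avoids getStorageDirs, leaving the property unset yields an empty list rather than a default directory under /tmp.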

Code example from: org.apache.hadoop/hadoop-hdfs (identical copies appear in ch.cern.hadoop/hadoop-hdfs and io.prestosql.hadoop/hadoop-apache)

static List<URI> getCheckpointEditsDirs(Configuration conf,
  String defaultName) {
 Collection<String> dirNames = conf.getTrimmedStringCollection(
   DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_EDITS_DIR_KEY);
 if (dirNames.size() == 0 && defaultName != null) {
  dirNames.add(defaultName);
 }
 return Util.stringCollectionAsURIs(dirNames);
}

Code example from: org.apache.hadoop/hadoop-hdfs (identical copies appear in ch.cern.hadoop/hadoop-hdfs and io.prestosql.hadoop/hadoop-apache)

/**
 * Retrieve checkpoint dirs from configuration.
 *
 * @param conf the Configuration
 * @param defaultValue a default value for the attribute if the property is unset
 * @return a Collection of URIs representing the values in 
 * dfs.namenode.checkpoint.dir configuration property
 */
static Collection<URI> getCheckpointDirs(Configuration conf,
  String defaultValue) {
 Collection<String> dirNames = conf.getTrimmedStringCollection(
   DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_DIR_KEY);
 if (dirNames.size() == 0 && defaultValue != null) {
  dirNames.add(defaultValue);
 }
 return Util.stringCollectionAsURIs(dirNames);
}
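Since getCheckpointDirs is package-private, the default-value pattern it implements can be reproduced with the public pieces alone. A sketch with a hypothetical fallback path:

import java.net.URI;
import java.util.Collection;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.common.Util;

public class CheckpointDirsDemo {
 public static void main(String[] args) {
  Configuration conf = new Configuration();
  // Property not set, so fall back to a default, mirroring
  // getCheckpointDirs(conf, defaultValue) above.
  Collection<String> names =
    conf.getTrimmedStringCollection("dfs.namenode.checkpoint.dir");
  if (names.isEmpty()) {
   names.add("/tmp/hadoop/dfs/namesecondary"); // hypothetical default
  }
  Collection<URI> dirs = Util.stringCollectionAsURIs(names);
  System.out.println(dirs); // e.g. [file:/tmp/hadoop/dfs/namesecondary]
 }
}

Mutating the returned collection with add() works because getTrimmedStringCollection returns a fresh, modifiable collection rather than a view of the configuration.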

Code example from: org.apache.hadoop/hadoop-hdfs (identical copies appear in ch.cern.hadoop/hadoop-hdfs and io.prestosql.hadoop/hadoop-apache)

private static Collection<URI> getStorageDirs(Configuration conf,
                       String propertyName) {
 Collection<String> dirNames = conf.getTrimmedStringCollection(propertyName);
 StartupOption startOpt = NameNode.getStartupOption(conf);
 if(startOpt == StartupOption.IMPORT) {
  // In case of IMPORT this will get rid of default directories 
  // but will retain directories specified in hdfs-site.xml
  // When importing image from a checkpoint, the name-node can
  // start with empty set of storage directories.
  Configuration cE = new HdfsConfiguration(false);
  cE.addResource("core-default.xml");
  cE.addResource("core-site.xml");
  cE.addResource("hdfs-default.xml");
  Collection<String> dirNames2 = cE.getTrimmedStringCollection(propertyName);
  dirNames.removeAll(dirNames2);
  if(dirNames.isEmpty())
   LOG.warn("!!! WARNING !!!" +
    "\n\tThe NameNode currently runs without persistent storage." +
    "\n\tAny changes to the file system meta-data may be lost." +
    "\n\tRecommended actions:" +
    "\n\t\t- shutdown and restart NameNode with configured \"" 
    + propertyName + "\" in hdfs-site.xml;" +
    "\n\t\t- use Backup Node as a persistent and up-to-date storage " +
    "of the file system meta-data.");
 } else if (dirNames.isEmpty()) {
  dirNames = Collections.singletonList(
    DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_DEFAULT);
 }
 return Util.stringCollectionAsURIs(dirNames);
}
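The IMPORT branch subtracts every directory contributed by the default resources, so only directories explicitly set in hdfs-site.xml survive. The subtraction is ordinary collection arithmetic, as this standalone sketch with made-up paths shows:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;

public class SubtractDefaults {
 public static void main(String[] args) {
  // Effective values: one default directory plus one set explicitly.
  Collection<String> effective = new ArrayList<>(
    Arrays.asList("/tmp/hadoop/dfs/name", "/data/dfs/name"));
  // Values contributed by the default resources only.
  Collection<String> defaults = Arrays.asList("/tmp/hadoop/dfs/name");
  effective.removeAll(defaults);
  System.out.println(effective); // [/data/dfs/name] - explicit dirs survive
 }
}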

Code example from: org.apache.hadoop/hadoop-hdfs (identical copies appear in ch.cern.hadoop/hadoop-hdfs and io.prestosql.hadoop/hadoop-apache)

Collection<URI> extraCheckedVolumes = Util.stringCollectionAsURIs(conf
  .getTrimmedStringCollection(DFSConfigKeys.DFS_NAMENODE_CHECKED_VOLUMES_KEY));

Code example from: com.facebook.hadoop/hadoop-core

static Collection<URI> getStorageDirs(Configuration conf) {
 Collection<String> dirNames =
  conf.getStringCollection("dfs.data.dir");
 return Util.stringCollectionAsURIs(dirNames);
}
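Unlike the other examples, this older snippet calls getStringCollection rather than getTrimmedStringCollection, so whitespace around comma-separated entries is preserved and would end up in the resulting URIs. A quick illustration of the difference (values are hypothetical):

import org.apache.hadoop.conf.Configuration;

public class TrimDemo {
 public static void main(String[] args) {
  Configuration conf = new Configuration();
  conf.set("dfs.data.dir", "/data/1, /data/2");
  // Untrimmed: the second entry keeps its leading space.
  System.out.println(conf.getStringCollection("dfs.data.dir"));
  // Trimmed: whitespace around each entry is removed.
  System.out.println(conf.getTrimmedStringCollection("dfs.data.dir"));
 }
}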

Code example from: org.apache.tajo/tajo-storage

public static List<URI> getStorageDirs(){
 Configuration conf = new HdfsConfiguration();
 Collection<String> dirNames = conf.getTrimmedStringCollection(DFS_DATANODE_DATA_DIR_KEY);
 return Util.stringCollectionAsURIs(dirNames);
}

Code example from: apache/tajo (an identical copy appears in org.apache.tajo/tajo-storage-common)

public static List<URI> getDataNodeStorageDirs(){
 Configuration conf = new HdfsConfiguration();
 Collection<String> dirNames = conf.getTrimmedStringCollection(DFS_DATANODE_DATA_DIR_KEY);
 return Util.stringCollectionAsURIs(dirNames);
}

Code example from: ch.cern.hadoop/hadoop-hdfs

Collection<URI> checkpointDirs = Util.stringCollectionAsURIs(conf
  .getTrimmedStringCollection(DFS_NAMENODE_CHECKPOINT_DIR_KEY));
for (URI checkpointDirUri : checkpointDirs) {
 // ... each configured checkpoint directory is processed here
}
