Usage of the org.apache.hadoop.hdfs.server.datanode.DataNode.printUsage() method, with code examples


This article collects Java code examples for the org.apache.hadoop.hdfs.server.datanode.DataNode.printUsage() method and shows how it is used in practice. The examples are drawn from selected open-source projects published on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of DataNode.printUsage() are as follows:

Package path: org.apache.hadoop.hdfs.server.datanode.DataNode
Class name: DataNode
Method name: printUsage

About DataNode.printUsage

The upstream Javadoc provides no description. Judging from the call sites below, printUsage prints the DataNode's command-line usage message: newer Hadoop versions (e.g. org.apache.hadoop/hadoop-hdfs) pass a target PrintStream such as System.err, while older hadoop-core-era versions expose a no-argument variant.
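As a rough mental model, the newer overload simply writes a usage string to the supplied stream. The sketch below is a hedged illustration only: the class name PrintUsageSketch and the usage text are invented for this example and do not come from the Hadoop source.

import java.io.PrintStream;

// Minimal sketch of what a printUsage(PrintStream) overload might look like.
// The USAGE text is an illustrative placeholder, not the real Hadoop message.
class PrintUsageSketch {
  private static final String USAGE = "Usage: hdfs datanode [options]";

  static void printUsage(PrintStream out) {
    out.println(USAGE);
  }

  public static void main(String[] args) {
    printUsage(System.err);  // mirrors the printUsage(System.err) call sites below
  }
}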

Code examples

Code example source: org.apache.hadoop/hadoop-hdfs

/** Instantiate a single datanode object, along with its secure resources.
 * This must be run by invoking {@link DataNode#runDatanodeDaemon()}
 * subsequently.
 */
public static DataNode instantiateDataNode(String args[], Configuration conf,
    SecureResources resources) throws IOException {
  if (conf == null)
    conf = new HdfsConfiguration();

  if (args != null) {
    // parse generic hadoop options
    GenericOptionsParser hParser = new GenericOptionsParser(conf, args);
    args = hParser.getRemainingArgs();
  }

  // unrecognized DataNode options: print the usage message and bail out
  if (!parseArguments(args, conf)) {
    printUsage(System.err);
    return null;
  }
  Collection<StorageLocation> dataLocations = getStorageLocations(conf);
  UserGroupInformation.setConfiguration(conf);
  SecurityUtil.login(conf, DFS_DATANODE_KEYTAB_FILE_KEY,
      DFS_DATANODE_KERBEROS_PRINCIPAL_KEY, getHostName(conf));
  return makeInstance(dataLocations, conf, resources);
}
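To trigger printUsage from the example above, it suffices to pass an option that parseArguments rejects. The following driver is a hypothetical sketch: it assumes -bogusOption is not a recognized DataNode flag and that passing null for SecureResources is acceptable in a non-secure test setup.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.server.datanode.DataNode;

// Hypothetical driver: an unrecognized flag makes parseArguments return
// false, so printUsage(System.err) runs and instantiateDataNode returns null.
public class PrintUsageDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    DataNode dn = DataNode.instantiateDataNode(
        new String[] {"-bogusOption"}, conf, null);  // null: no secure resources
    if (dn == null) {
      System.out.println("DataNode was not created; usage text went to stderr.");
    }
  }
}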

Code example source: com.facebook.hadoop/hadoop-core

/** Instantiate a single datanode object. This must be run by invoking
 *  {@link DataNode#runDatanodeDaemon(DataNode)} subsequently.
 */
public static DataNode instantiateDataNode(String args[],
                                           Configuration conf) throws IOException {
  if (conf == null)
    conf = new Configuration();
  if (!parseArguments(args, conf)) {
    printUsage();
    return null;
  }
  if (conf.get("dfs.network.script") != null) {
    LOG.error("This configuration for rack identification is not supported" +
        " anymore. RackID resolution is handled by the NameNode.");
    System.exit(-1);
  }
  String[] dataDirs = conf.getStrings("dfs.data.dir");
  dnThreadName = "DataNode: [" +
      StringUtils.arrayToString(dataDirs) + "]";
  return makeInstance(dataDirs, conf);
}

Code example source: org.jvnet.hudson.hadoop/hadoop-core

/** Instantiate a single datanode object. This must be run by invoking
 *  {@link DataNode#runDatanodeDaemon(DataNode)} subsequently.
 */
public static DataNode instantiateDataNode(String args[],
                                           Configuration conf) throws IOException {
  if (conf == null)
    conf = new Configuration();
  if (!parseArguments(args, conf)) {
    printUsage();
    return null;
  }
  if (conf.get("dfs.network.script") != null) {
    LOG.error("This configuration for rack identification is not supported" +
        " anymore. RackID resolution is handled by the NameNode.");
    System.exit(-1);
  }
  String[] dataDirs = conf.getStrings("dfs.data.dir");
  dnThreadName = "DataNode: [" +
      StringUtils.arrayToString(dataDirs) + "]";
  return makeInstance(dataDirs, conf);
}
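This example from org.jvnet.hudson.hadoop/hadoop-core is identical to the com.facebook.hadoop/hadoop-core example above: both call the no-argument printUsage() and return null when argument parsing fails.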

Code example source: io.fabric8/fabric-hadoop

/** Instantiate a single datanode object. This must be run by invoking
 *  {@link DataNode#runDatanodeDaemon(DataNode)} subsequently.
 * @param resources Secure resources needed to run under Kerberos
 */
public static DataNode instantiateDataNode(String args[],
                                           Configuration conf,
                                           SecureResources resources) throws IOException {
  if (conf == null)
    conf = new Configuration();
  if (!parseArguments(args, conf)) {
    printUsage();
    System.exit(-2);  // terminates the JVM instead of returning null
  }
  if (conf.get("dfs.network.script") != null) {
    LOG.error("This configuration for rack identification is not supported" +
        " anymore. RackID resolution is handled by the NameNode.");
    System.exit(-1);
  }
  String[] dataDirs = conf.getStrings(DATA_DIR_KEY);
  dnThreadName = "DataNode: [" +
      StringUtils.arrayToString(dataDirs) + "]";
  DefaultMetricsSystem.initialize("DataNode");
  return makeInstance(dataDirs, conf, resources);
}
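Note the behavioral difference in this variant: on a parse failure it prints the usage message and then terminates the JVM with System.exit(-2), rather than returning null like the newer hadoop-hdfs examples, so an embedding application cannot recover from bad arguments. It also initializes the metrics system before constructing the instance.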

Code example source: ch.cern.hadoop/hadoop-hdfs

/** Instantiate a single datanode object, along with its secure resources.
 * This must be run by invoking {@link DataNode#runDatanodeDaemon()}
 * subsequently.
 */
public static DataNode instantiateDataNode(String args[], Configuration conf,
    SecureResources resources) throws IOException {
  if (conf == null)
    conf = new HdfsConfiguration();

  if (args != null) {
    // parse generic hadoop options
    GenericOptionsParser hParser = new GenericOptionsParser(conf, args);
    args = hParser.getRemainingArgs();
  }

  if (!parseArguments(args, conf)) {
    printUsage(System.err);
    return null;
  }
  Collection<StorageLocation> dataLocations = getStorageLocations(conf);
  UserGroupInformation.setConfiguration(conf);
  SecurityUtil.login(conf, DFS_DATANODE_KEYTAB_FILE_KEY,
      DFS_DATANODE_KERBEROS_PRINCIPAL_KEY);
  return makeInstance(dataLocations, conf, resources);
}
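This variant differs from the org.apache.hadoop/hadoop-hdfs example above only in its SecurityUtil.login call, which omits the trailing getHostName(conf) argument when resolving the Kerberos principal.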

Code example source: io.prestosql.hadoop/hadoop-apache

/** Instantiate a single datanode object, along with its secure resources.
 * This must be run by invoking {@link DataNode#runDatanodeDaemon()}
 * subsequently.
 */
public static DataNode instantiateDataNode(String args[], Configuration conf,
    SecureResources resources) throws IOException {
  if (conf == null)
    conf = new HdfsConfiguration();

  if (args != null) {
    // parse generic hadoop options
    GenericOptionsParser hParser = new GenericOptionsParser(conf, args);
    args = hParser.getRemainingArgs();
  }

  if (!parseArguments(args, conf)) {
    printUsage(System.err);
    return null;
  }
  Collection<StorageLocation> dataLocations = getStorageLocations(conf);
  UserGroupInformation.setConfiguration(conf);
  SecurityUtil.login(conf, DFS_DATANODE_KEYTAB_FILE_KEY,
      DFS_DATANODE_KERBEROS_PRINCIPAL_KEY);
  return makeInstance(dataLocations, conf, resources);
}
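The io.prestosql.hadoop/hadoop-apache example matches the ch.cern.hadoop/hadoop-hdfs example above line for line.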
