Usage and Code Examples of the org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode() Method


This article collects Java code examples of the org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode() method and shows how it is used in practice. The examples are drawn from selected open-source projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of DataNode.startDataNode() follow:
Package path: org.apache.hadoop.hdfs.server.datanode.DataNode
Class name: DataNode
Method name: startDataNode

Introduction to DataNode.startDataNode

This method starts the data node with the specified conf. If conf's CONFIG_PROPERTY_SIMULATED property is set, a simulated-storage-based data node is created.
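Because the simulated-storage switch is an ordinary boolean property, it can be enabled by setting that key on the Configuration before the DataNode starts. A minimal sketch, assuming the key string "dfs.datanode.simulateddatastorage" (the value of SimulatedFSDataset.CONFIG_PROPERTY_SIMULATED in Hadoop's test sources; verify against your Hadoop version):

import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Assumed key name; canonically SimulatedFSDataset.CONFIG_PROPERTY_SIMULATED
conf.setBoolean("dfs.datanode.simulateddatastorage", true);
// A DataNode started with this conf then uses simulated rather than on-disk storage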

Code Examples

Code example source: org.apache.hadoop/hadoop-hdfs

// Excerpt from the DataNode constructor: startup runs in a try/catch so a
// failed start releases any resources already acquired.
try {
  hostName = getHostName(conf);
  LOG.info("Configured hostname is {}", hostName);
  startDataNode(dataDirs, resources);
} catch (IOException ie) {
  shutdown();
  throw ie;
}

Code example source: org.jvnet.hudson.hadoop/hadoop-core

/**
 * Create the DataNode given a configuration and an array of dataDirs.
 * 'dataDirs' is where the blocks are stored.
 */
DataNode(Configuration conf, 
     AbstractList<File> dataDirs) throws IOException {
 super(conf);
 datanodeObject = this;
 try {
  startDataNode(conf, dataDirs);
 } catch (IOException ie) {
  shutdown();
  throw ie;
 }
}
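This constructor is package-private, so it is not called directly from user code. In Hadoop code of this vintage, a DataNode is normally obtained through the class's public static factory; the sketch below assumes the 1.x-era createDataNode(String[], Configuration) signature and should be checked against the version you use:

import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Hypothetical usage: the factory parses command-line args, reads the data
// directories from conf, constructs the DataNode, and starts its daemon threads
DataNode dn = DataNode.createDataNode(new String[0], conf);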

Code example source: io.fabric8/fabric-hadoop

/**
 * Start a Datanode with specified server sockets for secure environments
 * where they are run with privileged ports and injected from a higher
 * level of capability
 */
DataNode(final Configuration conf,
     final AbstractList<File> dataDirs, SecureResources resources) throws IOException {
 super(conf);
 SecurityUtil.login(conf, DFSConfigKeys.DFS_DATANODE_KEYTAB_FILE_KEY, 
   DFSConfigKeys.DFS_DATANODE_USER_NAME_KEY);
 datanodeObject = this;
 durableSync = conf.getBoolean("dfs.durable.sync", true);
 this.userWithLocalPathAccess = conf
   .get(DFSConfigKeys.DFS_BLOCK_LOCAL_PATH_ACCESS_USER_KEY);
 try {
  startDataNode(conf, dataDirs, resources);
 } catch (IOException ie) {
  shutdown();
  throw ie;
 }   
}
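The SecurityUtil.login call above succeeds only if the keytab and principal keys it reads are present in the configuration. A minimal sketch of those settings, assuming the standard HDFS key names behind DFS_DATANODE_KEYTAB_FILE_KEY ("dfs.datanode.keytab.file") and DFS_DATANODE_USER_NAME_KEY ("dfs.datanode.kerberos.principal"); the path and principal below are illustrative only:

Configuration conf = new Configuration();
// Illustrative values; SecurityUtil.login expands _HOST to the local hostname
conf.set(DFSConfigKeys.DFS_DATANODE_KEYTAB_FILE_KEY,
    "/etc/security/keytabs/dn.service.keytab");
conf.set(DFSConfigKeys.DFS_DATANODE_USER_NAME_KEY, "dn/_HOST@EXAMPLE.COM");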

Code example source: com.facebook.hadoop/hadoop-core

/**
 * Create the DataNode given a configuration and an array of dataDirs.
 * 'dataDirs' is where the blocks are stored.
 */
DataNode(Configuration conf, 
    AbstractList<File> dataDirs) throws IOException {
 super(conf);
 supportAppends = conf.getBoolean("dfs.support.append", false);
 // TODO(pritam): Integrate this into a threadpool for all operations of the
 // datanode.
 blockCopyExecutor = Executors.newCachedThreadPool();
 // Time that the blocking version of the RPC for copying blocks between
 // datanodes should wait. Default is 5 minutes (value is in seconds).
 blockCopyRPCWaitTime = conf.getInt("dfs.datanode.blkcopy.wait_time",
   5 * 60);
 try {
  startDataNode(this.getConf(), dataDirs);
 } catch (IOException ie) {
  LOG.info("Failed to start datanode " + StringUtils.stringifyException(ie));
  shutdown();
  throw ie;
 }
}
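The two knobs read above are plain configuration properties and can be tuned per deployment. The key names below come straight from the snippet; the values are illustrative:

Configuration conf = new Configuration();
conf.setBoolean("dfs.support.append", true);  // enable append support (default false here)
conf.setInt("dfs.datanode.blkcopy.wait_time", 10 * 60); // block-copy RPC timeout, in seconds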

Code example source: ch.cern.hadoop/hadoop-hdfs

try {
  hostName = getHostName(conf);
  LOG.info("Configured hostname is " + hostName);
  startDataNode(conf, dataDirs, resources);
} catch (IOException ie) {
  shutdown();
  throw ie;
}

Code example source: io.prestosql.hadoop/hadoop-apache

try {
  hostName = getHostName(conf);
  LOG.info("Configured hostname is " + hostName);
  startDataNode(conf, dataDirs, resources);
} catch (IOException ie) {
  shutdown();
  throw ie;
}
