Usage of org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(), with code examples


This article collects Java code examples for org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations() and shows how the method is used in practice. The examples are extracted from selected open-source projects published on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as reasonably representative references. Details of DataNode.checkStorageLocations() follow:
Package: org.apache.hadoop.hdfs.server.datanode
Class: DataNode
Method: checkStorageLocations

About DataNode.checkStorageLocations

The original listing provides no description. Judging from the call sites below, checkStorageLocations() validates each candidate data directory with a DataNodeDiskChecker, drops the directories that fail the check, and returns the locations that pass; the assert in makeInstance() indicates the method never returns an empty list, failing instead when every directory is invalid.
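Since the listing leaves the description empty, here is a minimal sketch of what the method plausibly does, reconstructed purely from the call sites in this article. It is an illustration under those assumptions, not the actual Hadoop source, and the exact exception message is invented:

static List<StorageLocation> checkStorageLocations(
    Collection<StorageLocation> dataDirs, LocalFileSystem localFS,
    DataNodeDiskChecker dataNodeDiskChecker) throws IOException {
  List<StorageLocation> locations = new ArrayList<StorageLocation>();
  StringBuilder invalidDirs = new StringBuilder();
  for (StorageLocation location : dataDirs) {
    try {
      // Probe the directory (creating it if necessary) with the configured
      // permissions; a failing directory is dropped from the candidate set.
      dataNodeDiskChecker.checkDir(localFS, new Path(location.getUri()));
      locations.add(location);
    } catch (IOException ioe) {
      invalidDirs.append("\"").append(location.getUri()).append("\" ");
    }
  }
  // Callers assert on a non-empty result, so fail outright rather than
  // return zero usable data directories.
  if (locations.isEmpty()) {
    throw new IOException("All directories are invalid: " + invalidDirs);
  }
  return locations;
}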

Code examples

Code example source: io.prestosql.hadoop/hadoop-apache

/**
 * Make an instance of DataNode after ensuring that at least one of the
 * given data directories (and their parent directories, if necessary)
 * can be created.
 * @param dataDirs List of directories, where the new DataNode instance should
 * keep its files.
 * @param conf Configuration instance to use.
 * @param resources Secure resources needed to run under Kerberos
 * @return DataNode instance for given list of data dirs and conf, or null if
 * no directory from this directory list can be created.
 * @throws IOException
 */
static DataNode makeInstance(Collection<StorageLocation> dataDirs,
    Configuration conf, SecureResources resources) throws IOException {
  LocalFileSystem localFS = FileSystem.getLocal(conf);
  FsPermission permission = new FsPermission(
      conf.get(DFS_DATANODE_DATA_DIR_PERMISSION_KEY,
               DFS_DATANODE_DATA_DIR_PERMISSION_DEFAULT));
  DataNodeDiskChecker dataNodeDiskChecker =
      new DataNodeDiskChecker(permission);
  List<StorageLocation> locations =
      checkStorageLocations(dataDirs, localFS, dataNodeDiskChecker);
  DefaultMetricsSystem.initialize("DataNode");
  assert locations.size() > 0 : "number of data directories should be > 0";
  return new DataNode(conf, locations, resources);
}
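For context, a hedged sketch of how a caller might drive makeInstance(): the directory paths are made up, passing null for SecureResources assumes a non-Kerberized setup, and since makeInstance() is package-private the snippet would have to live in org.apache.hadoop.hdfs.server.datanode (production code reaches it through DataNode.instantiateDataNode()).

Configuration conf = new HdfsConfiguration();
conf.set(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY,
    "file:///data/dn1,file:///data/dn2");

// Parse each configured directory into a StorageLocation.
List<StorageLocation> dataDirs = new ArrayList<StorageLocation>();
for (String dir : conf.getTrimmedStrings(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY)) {
  dataDirs.add(StorageLocation.parse(dir));
}

// makeInstance() calls checkStorageLocations() internally and succeeds as
// long as at least one directory passes the disk check.
DataNode dn = DataNode.makeInstance(dataDirs, conf, null);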

The artifact ch.cern.hadoop/hadoop-hdfs publishes the same makeInstance() implementation verbatim; the duplicate listing is omitted here.

Code example source: ch.cern.hadoop/hadoop-hdfs

@Test(timeout = 30000)
public void testDataDirValidation() throws Throwable {
  // Stub the disk checker so the first two checkDir() calls throw and the
  // third succeeds: p1 and p2 should be rejected, p3 accepted.
  DataNodeDiskChecker diskChecker = mock(DataNodeDiskChecker.class);
  doThrow(new IOException()).doThrow(new IOException()).doNothing()
      .when(diskChecker).checkDir(any(LocalFileSystem.class), any(Path.class));
  LocalFileSystem fs = mock(LocalFileSystem.class);
  List<StorageLocation> locations = new ArrayList<StorageLocation>();

  locations.add(StorageLocation.parse("file:/p1/"));
  locations.add(StorageLocation.parse("file:/p2/"));
  locations.add(StorageLocation.parse("file:/p3/"));

  // Only the location whose disk check passed should be returned.
  List<StorageLocation> checkedLocations =
      DataNode.checkStorageLocations(locations, fs, diskChecker);
  assertEquals("number of valid data dirs", 1, checkedLocations.size());
  String validDir = checkedLocations.iterator().next().getFile().getPath();
  assertThat("p3 should be valid", new File("/p3/").getPath(), is(validDir));
}
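The doThrow().doThrow().doNothing() chain in the test above is Mockito's consecutive stubbing: each invocation of the stubbed method consumes the next behavior in the chain, and the last behavior repeats for all further calls. A standalone illustration (the Probe interface is hypothetical, invented only for this sketch):

import static org.mockito.Mockito.*;

import java.io.IOException;
import org.junit.Test;

public class ConsecutiveStubbingExample {
  // Hypothetical interface, only here to demonstrate the stubbing idiom.
  interface Probe {
    void check(String dir) throws IOException;
  }

  @Test
  public void firstTwoCallsFailThenCallsSucceed() throws IOException {
    Probe probe = mock(Probe.class);
    doThrow(new IOException()).doThrow(new IOException()).doNothing()
        .when(probe).check(anyString());

    try { probe.check("p1"); } catch (IOException expected) { }  // 1st call throws
    try { probe.check("p2"); } catch (IOException expected) { }  // 2nd call throws
    probe.check("p3");  // 3rd and later calls: doNothing() applies
  }
}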
