Usage of the org.apache.hadoop.fs.FileUtil.listFiles() method, with code examples

x33g5p2x, reposted on 2022-01-19 under "Other"

This article collects Java code examples of the org.apache.hadoop.fs.FileUtil.listFiles() method and shows how it is used in practice. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they are useful references. Details of FileUtil.listFiles() are as follows:
Package path: org.apache.hadoop.fs.FileUtil
Class name: FileUtil
Method name: listFiles

Introduction to FileUtil.listFiles

A wrapper for File#listFiles(). That java.io API returns null when the path is not a directory or when any I/O error occurs. Instead of null-checking every place File#listFiles() is used, this utility method throws an IOException, which is preferable in the majority of cases.
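The contract described above can be sketched with plain java.io (a minimal illustrative re-implementation, not the Hadoop source): File#listFiles() silently returns null, while the wrapper turns that null into an IOException.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class ListFilesDemo {
    // Minimal re-implementation of the wrapper's contract (illustrative only):
    // File#listFiles() returns null for a non-directory or on I/O error;
    // convert that null into an IOException so callers can skip null checks.
    static File[] listFiles(File dir) throws IOException {
        File[] entries = dir.listFiles();
        if (entries == null) {
            throw new IOException("Invalid directory or I/O error on " + dir);
        }
        return entries;
    }

    public static void main(String[] args) throws IOException {
        File tmp = Files.createTempDirectory("demo").toFile();
        System.out.println(listFiles(tmp).length);   // empty dir: prints 0

        File missing = new File(tmp, "no-such-dir");
        System.out.println(missing.listFiles());     // raw API: prints null
        try {
            listFiles(missing);                      // wrapper: throws instead
        } catch (IOException expected) {
            System.out.println("IOException as expected");
        }
    }
}
```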

Code examples

Code example origin: org.apache.hadoop/hadoop-common

/**
 * Delete the given path to a file or directory.
 * @param p the path to delete
 * @param recursive to delete sub-directories
 * @return true if the file or directory and all its contents were deleted
 * @throws IOException if p is non-empty and recursive is false 
 */
@Override
public boolean delete(Path p, boolean recursive) throws IOException {
 File f = pathToFile(p);
 if (!f.exists()) {
  //no path, return false "nothing to delete"
  return false;
 }
 if (f.isFile()) {
  return f.delete();
 } else if (!recursive && f.isDirectory() && 
   (FileUtil.listFiles(f).length != 0)) {
  throw new IOException("Directory " + f.toString() + " is not empty");
 }
 return FileUtil.fullyDelete(f);
}

Code example origin: org.apache.hadoop/hadoop-common

// Fragment of FileUtil.copy(File, FileSystem, Path, boolean, Configuration):
// for a local source directory, create the destination dir and recurse.
if (!dstFS.mkdirs(dst)) {
  return false;
}
File contents[] = listFiles(src);
for (int i = 0; i < contents.length; i++) {
  copy(contents[i], dstFS, new Path(dst, contents[i].getName()),
      deleteSource, conf);
}
Code example origin: apache/hbase

/**
 * List all of the files in 'dir' that match the regex 'pattern'.
 * Then check that this list is identical to 'expectedMatches'.
 * @throws IOException if the dir is inaccessible
 */
public static void assertGlobEquals(File dir, String pattern,
  String ... expectedMatches) throws IOException {
 Set<String> found = Sets.newTreeSet();
 for (File f : FileUtil.listFiles(dir)) {
  if (f.getName().matches(pattern)) {
   found.add(f.getName());
  }
 }
 Set<String> expectedSet = Sets.newTreeSet(
   Arrays.asList(expectedMatches));
 Assert.assertEquals("Bad files matching " + pattern + " in " + dir,
   Joiner.on(",").join(expectedSet),
   Joiner.on(",").join(found));
}
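The filter-and-compare core of assertGlobEquals can be sketched with plain JDK collections (the `matching` helper below is hypothetical, standing in for the Guava Sets/Joiner calls and the JUnit assertion):

```java
import java.util.Set;
import java.util.TreeSet;

public class GlobFilterDemo {
    // Keep the names matching the regex, sorted (TreeSet) so the joined
    // string is stable regardless of input order -- the same idea as the
    // test helper above, minus JUnit and Guava.
    static String matching(String[] names, String pattern) {
        Set<String> found = new TreeSet<>();
        for (String n : names) {
            if (n.matches(pattern)) {
                found.add(n);
            }
        }
        return String.join(",", found);
    }

    public static void main(String[] args) {
        String[] names = {"edits_0002", "edits_0001", "fsimage_0001", "VERSION"};
        System.out.println(matching(names, "edits_\\d+"));  // edits_0001,edits_0002
    }
}
```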

Code example origin: org.apache.hadoop/hadoop-hdfs

/**
 * @return true if the storage directory should prompt the user prior
 * to formatting (i.e. if the directory appears to contain some data)
 * @throws IOException if the SD cannot be accessed due to an IO error
 */
@Override
public boolean hasSomeData() throws IOException {
 // It's fine for a dir not to exist, or to exist (properly accessible)
 // and be completely empty.
 if (!root.exists()) return false;
 
 if (!root.isDirectory()) {
  // a file where you expect a directory should not cause silent
  // formatting
  return true;
 }
 
 if (FileUtil.listFiles(root).length == 0) {
  // Empty dir can format without prompt.
  return false;
 }
 
 return true;
}
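The three-way decision above can be reproduced with plain java.io.File (an illustrative sketch; the `hasSomeData` below is a stand-in, not the Hadoop class): a missing dir means no data, a regular file where a directory is expected counts as data (so formatting is not silent), and an empty dir means no data.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class HasSomeDataDemo {
    static boolean hasSomeData(File root) throws IOException {
        if (!root.exists()) return false;     // nothing there: safe to format
        if (!root.isDirectory()) return true; // unexpected file: refuse silent format
        File[] entries = root.listFiles();
        if (entries == null) throw new IOException("Cannot list " + root);
        return entries.length != 0;           // non-empty dir: prompt the user
    }

    public static void main(String[] args) throws IOException {
        File empty = Files.createTempDirectory("sd").toFile();
        System.out.println(hasSomeData(empty));                 // false
        System.out.println(hasSomeData(new File(empty, "x")));  // false (missing)
        Files.createFile(new File(empty, "VERSION").toPath());
        System.out.println(hasSomeData(empty));                 // true
    }
}
```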

Code example origin: org.apache.hadoop/hadoop-hdfs

/**
 * Returns matching edit logs from the log directory. Simple helper that
 * lists the files in logDir and calls matchEditLogs(File[]).
 * 
 * @param logDir
 *          directory to match edit logs in
 * @return matched edit logs
 * @throws IOException
 *           IOException thrown for invalid logDir
 */
public static List<EditLogFile> matchEditLogs(File logDir) throws IOException {
 return matchEditLogs(FileUtil.listFiles(logDir));
}

Code example origin: org.apache.hadoop/hadoop-hdfs

/**
 * Purge files in the given directory which match any of the set of patterns.
 * The patterns must have a single numeric capture group which determines
 * the associated transaction ID of the file. Only those files for which
 * the transaction ID is less than the <code>minTxIdToKeep</code> parameter
 * are removed.
 */
private static void purgeMatching(File dir, List<Pattern> patterns,
  long minTxIdToKeep) throws IOException {
 for (File f : FileUtil.listFiles(dir)) {
  if (!f.isFile()) continue;
  
  for (Pattern p : patterns) {
   Matcher matcher = p.matcher(f.getName());
   if (matcher.matches()) {
    // This parsing will always succeed since the group(1) is
    // /\d+/ in the regex itself.
    long txid = Long.parseLong(matcher.group(1));
    if (txid < minTxIdToKeep) {
     LOG.info("Purging no-longer needed file {}", txid);
     if (!f.delete()) {
      LOG.warn("Unable to delete no-longer-needed data {}", f);
     }
     break;
    }
   }
  }
 }
}
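The pattern convention purgeMatching relies on, a single numeric capture group carrying the transaction ID, can be shown in isolation (the `txidOf` helper is hypothetical):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TxidParseDemo {
    // If the pattern's only group is \d+, group(1) is guaranteed to be a
    // digit string once matches() succeeds, so parseLong cannot throw.
    static long txidOf(String fileName, String regex) {
        Matcher m = Pattern.compile(regex).matcher(fileName);
        if (!m.matches()) {
            return -1; // no match: the caller skips this file
        }
        return Long.parseLong(m.group(1));
    }

    public static void main(String[] args) {
        System.out.println(txidOf("edits_42", "edits_(\\d+)"));   // 42
        System.out.println(txidOf("fsimage_7", "edits_(\\d+)"));  // -1
    }
}
```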

Code example origin: org.apache.hadoop/hadoop-hdfs

/**
 * Clear and re-create storage directory.
 * <p>
 * Removes contents of the current directory and creates an empty directory.
 * 
 * This does not fully format storage directory. 
 * It cannot write the version file since it should be written last after  
 * all other storage type dependent files are written.
 * Derived storage is responsible for setting specific storage values and
 * writing the version file to disk.
 * 
 * @throws IOException
 */
public void clearDirectory() throws IOException {
 File curDir = this.getCurrentDir();
 if (curDir == null) {
  // if the directory is null, there is nothing to do.
  return;
 }
 if (curDir.exists()) {
  File[] files = FileUtil.listFiles(curDir);
  LOG.info("Will remove files: {}", Arrays.toString(files));
  if (!(FileUtil.fullyDelete(curDir)))
   throw new IOException("Cannot remove current directory: " + curDir);
 }
 if (!curDir.mkdirs())
  throw new IOException("Cannot create directory " + curDir);
}

Code example origin: org.apache.hadoop/hadoop-hdfs

@Override
public void purgeLogsOlderThan(long minTxIdToKeep)
  throws IOException {
 LOG.info("Purging logs older than " + minTxIdToKeep);
 File[] files = FileUtil.listFiles(sd.getCurrentDir());
 List<EditLogFile> editLogs = matchEditLogs(files, true);
 for (EditLogFile log : editLogs) {
  if (log.getFirstTxId() < minTxIdToKeep &&
    log.getLastTxId() < minTxIdToKeep) {
   purger.purgeLog(log);
  }
 }
}

Code example origin: org.apache.hadoop/hadoop-hdfs

// Fragment, completed for readability: list the storage directory,
// logging a warning instead of failing when it cannot be inspected.
File filesInStorage[];
try {
 filesInStorage = FileUtil.listFiles(currentDir);
} catch (IOException ioe) {
 LOG.warn("Unable to inspect storage directory " + currentDir, ioe);
 return;
}
Code example origin: org.apache.hadoop/hadoop-hdfs

/**
 * Get a listing of the given directory using
 * {@link FileUtil#listFiles(File)}.
 *
 * @param volume  target volume. null if unavailable.
 * @param dir  Directory to be listed.
 * @return  array of file objects representing the directory entries.
 * @throws IOException
 */
public File[] listFiles(
  @Nullable FsVolumeSpi volume, File dir) throws IOException {
 final long begin = profilingEventHook.beforeMetadataOp(volume, LIST);
 try {
  faultInjectorEventHook.beforeMetadataOp(volume, LIST);
  File[] children = FileUtil.listFiles(dir);
  profilingEventHook.afterMetadataOp(volume, LIST, begin);
  return children;
 } catch(Exception e) {
  onFailure(volume, begin);
  throw e;
 }
}
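The hook pattern above, record a timestamp before the metadata operation and report success or failure with the elapsed time afterwards, can be sketched without the Hadoop-internal profiling classes (System.nanoTime() stands in for the hooks; the class below is illustrative):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class TimedListDemo {
    // Wrap a directory listing with begin/after bookkeeping, mirroring the
    // beforeMetadataOp/afterMetadataOp/onFailure structure above.
    static File[] timedList(File dir) throws IOException {
        long begin = System.nanoTime();
        try {
            File[] children = dir.listFiles();
            if (children == null) {
                throw new IOException("Cannot list " + dir);
            }
            System.out.println("list ok after " + (System.nanoTime() - begin) + " ns");
            return children;
        } catch (IOException e) {
            System.out.println("list failed after " + (System.nanoTime() - begin) + " ns");
            throw e;
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = Files.createTempDirectory("timed").toFile();
        System.out.println(timedList(tmp).length);  // 0 for a fresh temp dir
    }
}
```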

Code example origin: gluster/glusterfs-hadoop

/**
  * Delete the given path to a file or directory.
  * @param p the path to delete
  * @param recursive to delete sub-directories
  * @return true if the file or directory and all its contents were deleted
  * @throws IOException if p is non-empty and recursive is false 
  */
@Override
public boolean delete(Path p, boolean recursive) throws IOException {
  File f = pathToFile(p);
  if (!f.exists()) {
    /* HCFS semantics expect 'false' when deleting a non-existent file */
    return false;
  } else if (f.isFile()) {
    return f.delete();
  } else if (!recursive && f.isDirectory() &&
      (FileUtil.listFiles(f).length != 0)) {
    throw new IOException("Directory " + f.toString() + " is not empty");
  }
  return FileUtil.fullyDelete(f);
}
