import java.io.IOException;

import org.apache.commons.io.FileUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUtil {

    /**
     * Estimates the number of splits by taking the total size of the paths
     * and dividing it by the split size, rounding up.
     *
     * @param paths         the HDFS paths to measure
     * @param configuration the Hadoop configuration used to open the FileSystem
     * @param splitSize     the desired split size, in bytes
     * @return the estimated number of splits
     * @throws IOException if the FileSystem cannot be accessed
     */
    public static long getNumOfSplitsForInputs(Path[] paths, Configuration configuration, long splitSize)
            throws IOException {
        long size = getSizeOfPaths(paths, configuration);
        // Cast to double first: plain long division would truncate the
        // quotient before Math.ceil ever sees a fraction.
        long splits = (long) Math.ceil((double) size / splitSize);
        return splits;
    }

    public static long getSizeOfPaths(Path[] paths, Configuration configuration) throws IOException {
        long totalSize = 0L;
        for (Path path : paths) {
            totalSize += getSizeOfDirectory(path, configuration);
        }
        return totalSize;
    }

    // Pass an HBase table folder here, i.e. one of the per-table
    // directories under /hbase listed with the shell commands.
    public static long getSizeOfDirectory(Path path, Configuration configuration) throws IOException {
        FileSystem fileSystem = FileSystem.get(configuration);
        // getContentSummary(path).getLength() sums the bytes of all files
        // under the path, recursively.
        long size = fileSystem.getContentSummary(path).getLength();
        // FileUtils.byteCountToDisplaySize renders the byte count in a
        // human-readable form, e.g. "1 GB".
        System.out.println(FileUtils.byteCountToDisplaySize(size));
        return size;
    }
}
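The split-count arithmetic above is a classic pitfall: if `size / splitSize` is computed as plain integer division, the quotient is truncated before `Math.ceil` runs, so partial splits get lost. A small, Hadoop-free sketch of correct ceiling division on `long` values (the byte counts are made-up example figures, not from the answer):

```java
public class SplitMath {
    // Ceiling division for positive longs without going through
    // floating point; equivalent to (long) Math.ceil((double) size / splitSize).
    static long numSplits(long sizeBytes, long splitSizeBytes) {
        return (sizeBytes + splitSizeBytes - 1) / splitSizeBytes;
    }

    public static void main(String[] args) {
        long splitSize = 128L * 1024 * 1024;                           // 128 MB
        System.out.println(numSplits(300L * 1024 * 1024, splitSize));  // 2.34 splits rounds up to 3
        System.out.println(numSplits(256L * 1024 * 1024, splitSize));  // exact multiple: 2
    }
}
```

The `(size + splitSize - 1) / splitSize` form avoids the double round-trip entirely, which also sidesteps precision loss for sizes above 2^53 bytes.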
1 Answer

vltsax251:
One way is to use a Java client to access HDFS. HBase normally keeps all of its table data under

/hbase

with one subfolder per table.

Hadoop shell:

hadoop fs -du -h **path to hbase**/hbase

Since each table occupies its own folder under /hbase, you can list them with

hadoop fs -ls -R **path to hbase**/hbase

and get the size of a single table with

hadoop fs -du -h **path to hbase**/hbase/tablename

#### Java HDFS client

Similarly, you can use the Java HDFS client by passing it each table path under the HBase root directory, as in the HdfsUtil class above; check the getSizeOfPaths and getSizeOfDirectory methods.
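The Java route prints sizes through commons-io's `FileUtils.byteCountToDisplaySize`, which floors the value to a whole number of the largest fitting binary unit. If you would rather not pull in the commons-io dependency, a minimal stand-in (a hypothetical helper, not part of the original answer) might look like this:

```java
public class DisplaySize {
    // Renders a byte count with binary (1024-based) units, rounded down
    // to a whole number of the largest unit that fits.
    static String byteCountToDisplaySize(long bytes) {
        final long KB = 1024L, MB = KB * 1024, GB = MB * 1024, TB = GB * 1024;
        if (bytes >= TB) return (bytes / TB) + " TB";
        if (bytes >= GB) return (bytes / GB) + " GB";
        if (bytes >= MB) return (bytes / MB) + " MB";
        if (bytes >= KB) return (bytes / KB) + " KB";
        return bytes + " bytes";
    }

    public static void main(String[] args) {
        System.out.println(byteCountToDisplaySize(3L * 1024 * 1024 * 1024)); // 3 GB
        System.out.println(byteCountToDisplaySize(512));                     // 512 bytes
    }
}
```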