HDFS block question

lymnna71 · posted 2021-05-29 in Hadoop

When I run the fsck command, it shows a total of 68 blocks (average block size 286572 B). How can I have only 68 blocks?
I recently installed CDH5 (Hadoop 2.6.0).

[hdfs@cluster1 ~]$ hdfs fsck /

Connecting to namenode via http://cluster1.abc:50070
FSCK started by hdfs (auth:SIMPLE) from /192.168.101.241 for path / at Fri Sep 25 09:51:56 EDT 2015
....................................................................Status:     HEALTHY
 Total size: 19486905 B
 Total dirs: 569
 Total files: 68
 Total symlinks: 0
 Total blocks (validated): 68 (avg. block size 286572 B)
 Minimally replicated blocks: 68 (100.0 %)
 Over-replicated blocks: 0 (0.0 %)
 Under-replicated blocks: 0 (0.0 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor: 3
 Average block replication: 1.9411764
 Corrupt blocks: 0
 Missing replicas: 0 (0.0 %)
 Number of data-nodes: 3
 Number of racks: 1
 FSCK ended at Fri Sep 25 09:51:56 EDT 2015 in 41 milliseconds

The filesystem under path '/' is HEALTHY
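As a sanity check (a back-of-envelope calculation, not part of the fsck output), the reported average block size is simply the total size divided by the block count:

```python
# Figures copied from the fsck report above.
total_size = 19_486_905   # "Total size" in bytes
total_blocks = 68         # "Total blocks (validated)"

# fsck reports the integer average: 19486905 / 68 ≈ 286572 B
avg_block_size = total_size // total_blocks
print(avg_block_size)  # → 286572
```

So the 68 blocks and the 286572 B average are consistent with each other; the cluster simply holds very little data (about 19 MB in total).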

Here is what I get when I run the hdfs dfsadmin -report command:
[hdfs@cluster1 ~]$ hdfs dfsadmin -report

Configured Capacity: 5715220577895 (5.20 TB)
Present Capacity: 5439327449088 (4.95 TB)
DFS Remaining: 5439303270400 (4.95 TB)
DFS Used: 24178688 (23.06 MB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 504

Also, my Hive queries are not launching MapReduce jobs. Could that be related to the issue above?
Any suggestions?
Thank you!


Answer 1 (iklwldmw):

A block is a chunk of data distributed across the nodes of the filesystem. For example, with the default 128 MB block size, a 200 MB file is actually stored as two blocks of 128 MB and 72 MB.
So don't worry about the block count; the framework manages blocks for you. As the fsck report shows, you have 68 files in HDFS, and since each file here is far smaller than the block size, each one occupies exactly one block, hence 68 blocks.
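The splitting rule described above can be sketched as follows. This is an illustrative helper, not Hadoop code; it assumes the Hadoop 2.x default block size of 128 MB (`dfs.blocksize`):

```python
BLOCK_SIZE = 128 * 1024 * 1024  # default dfs.blocksize in Hadoop 2.x (128 MB)

def blocks_for(file_size: int, block_size: int = BLOCK_SIZE) -> list[int]:
    """Return the sizes of the HDFS blocks a file of file_size bytes occupies:
    zero or more full blocks, plus one final partial block for any remainder."""
    full, rem = divmod(file_size, block_size)
    return [block_size] * full + ([rem] if rem else [])

MB = 1024 * 1024
print(blocks_for(200 * MB))  # a 200 MB file → [134217728, 75497472], i.e. 128 MB + 72 MB
print(blocks_for(300_000))   # a 300 KB file → [300000], one block well below 128 MB
```

Note the second case: a small file still costs one block entry in the NameNode, which is exactly why 68 small files produce 68 blocks.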
