Setting the number of input splits for Hadoop mappers has no effect

wsxa1bj1 posted on 2021-05-29 in Hadoop

I am trying to run a Hadoop job several times with different numbers of mappers and reducers. I have set these configuration properties:
mapreduce.input.fileinputformat.split.maxsize
mapreduce.input.fileinputformat.split.minsize
mapreduce.job.maps
The total size of my input files is 1160421275 bytes, and when I try to configure the job in code for 4 mappers and 3 reducers:

Configuration conf = new Configuration();
FileSystem hdfs = FileSystem.get(conf);
// Total input size across both files
long size = hdfs.getContentSummary(new Path("input/filea")).getLength();
size += hdfs.getContentSummary(new Path("input/fileb")).getLength();
// Pin the split size to a quarter of the total input
conf.set("mapreduce.input.fileinputformat.split.maxsize", String.valueOf(size / 4));
conf.set("mapreduce.input.fileinputformat.split.minsize", String.valueOf(size / 4));
conf.setInt("mapreduce.job.maps", 4);
....
job.setNumReduceTasks(3);

size/4 is 290105318. Running the job produces this output:

2016-11-19 12:30:36,426 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(287)) - Total input paths to process : 1
2016-11-19 12:30:36,535 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(287)) - Total input paths to process : 4
2016-11-19 12:30:36,572 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) - number of splits:7

The number of splits is 7, not 4, and the counters of the (successful) job are:

File System Counters
    FILE: Number of bytes read=18855390277
    FILE: Number of bytes written=14653469965
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
Map-Reduce Framework
    Map input records=39184416
    Map output records=36751473
    Map output bytes=787022241
    Map output materialized bytes=860525313
    Input split bytes=1801
    Combine input records=0
    Combine output records=0
    Reduce input groups=25064998
    Reduce shuffle bytes=860525313
    Reduce input records=36751473
    Reduce output records=1953960
    Spilled Records=110254419
    Shuffled Maps =21
    Failed Shuffles=0
    Merged Map outputs=21
    GC time elapsed (ms)=1124
    CPU time spent (ms)=0
    Physical memory (bytes) snapshot=0
    Virtual memory (bytes) snapshot=0
    Total committed heap usage (bytes)=6126829568
Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
File Input Format Counters 
    Bytes Read=0
File Output Format Counters 
    Bytes Written=77643084

The counters show that 21 shuffled maps were processed, but I only want 4 map tasks. For the reducers the count is correct: 3 output files in total. Is my mapper split-size configuration wrong?


dbf7pr2w1#

I believe you are using TextInputFormat.
If you have more than one input file, at least one mapper is spawned per file. And if an individual file (not the cumulative total) is larger than the split size, which you tuned via the min and max settings, still more mappers are spawned for that file.
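For reference, the new-API FileInputFormat sizes splits within each single file roughly like this (adapted from its computeSplitSize method; each file then yields about fileLength / splitSize mappers):

// Per-file split size: the block size, clamped between the
// configured minimum and maximum split sizes.
protected long computeSplitSize(long blockSize, long minSize, long maxSize) {
    return Math.max(minSize, Math.min(maxSize, blockSize));
}

Because every file is split independently and the split count is rounded up per file, the total is generally more than totalSize / (size/4) = 4.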
Try CombineTextInputFormat instead; it should get you close to what you want, though possibly still not exactly 4 (see the sketch at the end of this answer).
In general, look at the split-computation logic of the input format you use; that is what determines how many mappers are spawned.
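Here is a minimal sketch of the CombineTextInputFormat suggestion, assuming the new mapreduce API and reusing the job and size variables from your snippet (the resulting mapper count is still only approximate):

import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;

// Pack blocks from all input files into shared splits instead of
// splitting each file on its own.
job.setInputFormatClass(CombineTextInputFormat.class);
// Cap each combined split at about a quarter of the total input,
// aiming for roughly 4 mappers across both files.
CombineTextInputFormat.setMaxInputSplitSize(job, size / 4);

CombineTextInputFormat uses mapreduce.input.fileinputformat.split.maxsize as the upper bound when packing blocks into a split, so setting the maximum alone is enough here.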
