Does Sqoop spill temporary data to disk?

igetnqfo · asked 2021-05-29 · in Hadoop

As far as I understand, Sqoop launches several mappers on different data nodes, each opening a JDBC connection to the RDBMS. Once a connection is established, the data is transferred to HDFS.
I'm just trying to understand: do Sqoop mappers temporarily spill data to disk on the data node? I know spilling happens in MapReduce, but I'm not sure how Sqoop behaves.
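For context, a typical import of the kind described above might look like the sketch below (the host, database, table, and paths are placeholders, not from the original post). Each of the four mappers opens its own JDBC connection and streams its split of the table directly to HDFS:

```shell
# Hypothetical example -- connection string, credentials, table and
# target directory are illustrative placeholders.
# Sqoop runs this as a map-only job: 4 mappers, no reduce phase.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user -P \
  --table orders \
  --num-mappers 4 \
  --target-dir /data/raw/orders
```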

a14dhokn1#

It seems that Sqoop import runs as a map-only job and does not spill, while Sqoop merge runs as a full MapReduce job and does spill. You can check this on the JobTracker while a Sqoop job is running.
Look at this part of a Sqoop import log: it fetches and writes straight to HDFS without spilling:

INFO [main] ... mapreduce.db.DataDrivenDBRecordReader: Using query:  SELECT...
[main] mapreduce.db.DBRecordReader: Executing query:  SELECT...
INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor [.snappy]
INFO [Thread-16] ...mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1489705733959_2462784_m_000000_0 is done. And is in the process of committing
INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of task 'attempt_1489705733959_2462784_m_000000_0' to hdfs://

Now compare this Sqoop merge log (some lines skipped): it does spill to disk (note the "Spilling map output" lines):

    INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: hdfs://bla-bla/part-m-00000:0+48322717
    ...
    INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
    ...
    INFO [main] org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 1024
    INFO [main] org.apache.hadoop.mapred.MapTask: soft limit at 751619264
    INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 1073741824
    INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 268435452; length = 67108864
    INFO [main] org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
    INFO [main] com.pepperdata.supervisor.agent.resource.r: Datanode bla-bla is LOCAL.
    INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
    ...
    INFO [main] org.apache.hadoop.mapred.MapTask: Starting flush of map output
    INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output
    INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufend = 184775274; bufvoid = 1073741824
    INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 268435452(1073741808); kvend = 267347800(1069391200); length = 1087653/67108864
    INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor [.snappy]
    INFO [main] org.apache.hadoop.mapred.MapTask: Finished spill 0
    ...Task:attempt_1489705733959_2479291_m_000000_0 is done. And is in the process of committing
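The kind of merge that produced a log like the one above can be sketched as follows (all paths, the jar, the class name, and the key column are illustrative placeholders, not taken from the post). Deduplicating rows by `--merge-key` forces a full MapReduce job with a shuffle, and the sort buffer backing that shuffle is what spills to local disk:

```shell
# Hypothetical merge of an incremental import onto an older dataset,
# keeping the newest row per key. The shuffle needed to group rows by
# order_id is where the "Spilling map output" log lines come from.
sqoop merge \
  --new-data /data/raw/orders_delta \
  --onto /data/raw/orders \
  --target-dir /data/raw/orders_merged \
  --jar-file orders.jar \
  --class-name orders \
  --merge-key order_id
```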
