Sqoop import with the --direct option fails: mysqldump terminates with exit codes 2 and 3

u3r8eeie · posted 2021-05-29 · in Hadoop

I am running Sqoop on AWS EMR, trying to copy a table of roughly 10 GB from MySQL into HDFS.
I get the following exception:

15/07/06 12:19:07 INFO mapreduce.Job: Task Id : attempt_1435664372091_0048_m_000000_2, Status : FAILED
Error: java.io.IOException: mysqldump terminated with status 3
at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:485)
at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:49)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:152)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:773)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)

15/07/06 12:19:07 INFO mapreduce.Job: Task Id : attempt_1435664372091_0048_m_000005_2, Status : FAILED
Error: java.io.IOException: mysqldump terminated with status 2
at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:485)
at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:49)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:152)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:773)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)

15/07/06 12:19:08 INFO mapreduce.Job:  map 0% reduce 0%
15/07/06 12:19:20 INFO mapreduce.Job:  map 25% reduce 0%
15/07/06 12:19:22 INFO mapreduce.Job:  map 38% reduce 0%
15/07/06 12:19:23 INFO mapreduce.Job:  map 50% reduce 0%
15/07/06 12:19:24 INFO mapreduce.Job:  map 75% reduce 0%
15/07/06 12:19:25 INFO mapreduce.Job:  map 100% reduce 0%

15/07/06 12:23:11 INFO mapreduce.Job: Job job_1435664372091_0048 failed with state FAILED due to: Task failed task_1435664372091_0048_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

15/07/06 12:23:11 INFO mapreduce.Job: Counters: 8
        Job Counters 
        Failed map tasks=28
        Launched map tasks=28
        Other local map tasks=28
        Total time spent by all maps in occupied slots (ms)=34760760
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=5793460
        Total vcore-seconds taken by all map tasks=5793460
        Total megabyte-seconds taken by all map tasks=8342582400
15/07/06 12:23:11 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
15/07/06 12:23:11 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 829.8697 seconds (0 bytes/sec)
15/07/06 12:23:11 WARN mapreduce.Counters: Group   org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/07/06 12:23:11 INFO mapreduce.ImportJobBase: Retrieved 0 records.
15/07/06 12:23:11 ERROR tool.ImportTool: Error during import: Import job failed!

If I run without the --direct option, I get a communications exception like the one described in https://issues.cloudera.org/browse/sqoop-186.
I have already set net_write_timeout and net_read_timeout to 6000 in MySQL.
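For reference, a sketch of how those server-side timeouts can be set and checked from the mysql client (hypothetical invocation; SET GLOBAL needs the SUPER privilege and only affects connections opened afterwards):

```shell
# Assumes the mysql client is installed and the account has SUPER privilege.
# <remote ip> is the same placeholder used in the sqoop command below.
mysql -h <remote ip> -u tuser -p -e "
  SET GLOBAL net_write_timeout = 6000;
  SET GLOBAL net_read_timeout  = 6000;
  SHOW VARIABLES LIKE 'net_%_timeout';"
```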
My Sqoop command is as follows:

sqoop import -D mapred.task.timeout=0 --fields-terminated-by '\t' --escaped-by '\\' --optionally-enclosed-by '\"' --bindir ./ --connect jdbc:mysql://<remote ip>/<mysql db> --username tuser --password tuser --table table1 --target-dir=/base/table1 --split-by id -m 8 --direct
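One thing worth ruling out with --direct: Sqoop shells out to mysqldump inside each map task, so the binary must be on the PATH of every EMR node that can run a mapper, not just the master. A quick sanity check, run on each core/task node:

```shell
# Verify mysqldump exists and print its version; a missing binary, or a
# client version incompatible with the server, can make the dump exit non-zero.
command -v mysqldump || echo "mysqldump not found on PATH"
mysqldump --version
```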

How can I fix this? Am I missing something?
I have also filed a Sqoop JIRA: https://issues.apache.org/jira/browse/sqoop-2411


b1payxdu (answer 1):

I have seen this error when Sqoop cannot divide the key space evenly and one of the map tasks ends up processing zero rows. Possible workarounds are to change the number of mappers (-m / --num-mappers) or to specify a different split column (--split-by) whose values are evenly distributed.
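To check whether the split column is the culprit, one option is to inspect the range and row count of the current key before re-running the import (hypothetical credentials, reusing the table from the question):

```shell
# Sqoop derives split boundaries from MIN(id)..MAX(id). If that range is
# huge relative to the row count, the 8 splits will be badly skewed and
# some mappers may receive zero rows.
mysql -h <remote ip> -u tuser -p <mysql db> -e \
  "SELECT MIN(id) AS min_id, MAX(id) AS max_id, COUNT(*) AS total_rows FROM table1;"
```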


pgx2nnw8 (answer 2):

Could you try running the command below and see whether it works? I am not sure, but I suspect there is a problem with your sqoop import command.

sqoop import --connect "jdbc:mysql://<remote ip>/<mysql db>" --password "core" --username "core" --table "TABLENAME" --target-dir "/sqoopfile2" -m 8 --direct
