How do I find the exact error when a sqoop export fails?

8wtpewkr · Posted 2021-06-03 · in Sqoop
Follow (0) | Answers (1) | Views (442)

When I run the following sqoop export:

sqoop export --connect jdbc:mysql://ip-172-31-20-247/dbname --username uname --password pwd --table orders --export-dir /orders.txt

I get the following error:

18/11/10 16:18:52 INFO mapreduce.Job:  map 0% reduce 0%
18/11/10 16:19:00 INFO mapreduce.Job:  map 100% reduce 0%
18/11/10 16:19:01 INFO mapreduce.Job: Job job_1537636876515_6580 failed with state FAILED due to: Task failed task_1537636876515_6580_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
18/11/10 16:19:01 INFO mapreduce.Job: Counters: 12
        Job Counters 
                Failed map tasks=1
                Killed map tasks=3
                Launched map tasks=4
                Data-local map tasks=4
                Total time spent by all maps in occupied slots (ms)=61530
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=20510
                Total vcore-milliseconds taken by all map tasks=20510
                Total megabyte-milliseconds taken by all map tasks=31503360
        Map-Reduce Framework
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
18/11/10 16:19:01 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
18/11/10 16:19:01 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 17.1712 seconds (0 bytes/sec)
18/11/10 16:19:01 INFO mapreduce.ExportJobBase: Exported 0 records.
18/11/10 16:19:01 ERROR mapreduce.ExportJobBase: Export job failed!
18/11/10 16:19:01 ERROR tool.ExportTool: Error during export: Export job failed!

How can I find out what the exact error is?

9ceoxa92

Without seeing your file's data and other details, it's hard to say exactly what went wrong with the sqoop export job. Make sure you are using the correct delimiter and that your file's layout matches the table structure.
Try the sqoop export script below, changing the parameters to fit your setup. Here I am sqooping data from an HDFS file into SQL Server:
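As a quick sanity check of that layout, you can count the fields on every line before re-running the export. This is only a sketch: the 4-column count, the comma delimiter, and the sample rows below are assumptions — substitute your table's actual column count and your file's actual delimiter.

```shell
# Hypothetical sample file: two well-formed 4-field rows and one malformed row.
printf '1,2013-07-25 00:00:00.0,11599,CLOSED\n2,2013-07-25 00:00:00.0,256,PENDING_PAYMENT\n3,2013-07-25,BROKEN\n' > /tmp/orders_sample.txt

# Flag every line whose comma-separated field count is not 4;
# any output here means the file will not line up with a 4-column table.
awk -F',' 'NF != 4 {print NR": "NF" fields"}' /tmp/orders_sample.txt
# prints: 3: 3 fields
```

On a real cluster you would stream the file out of HDFS first, e.g. `hdfs dfs -cat /orders.txt | awk -F',' 'NF != 4'`.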

sqoop export \
--connect "jdbc:sqlserver://servername:1433;databaseName=EMP;" \
--connection-manager org.apache.sqoop.manager.SQLServerManager \
--username userid \
-P \
--table sql_server_table_name \
--input-fields-terminated-by '|' \
--export-dir /hdfs path location of file/part-m-00000 \
--num-mappers 1

Let me know whether it works for you. I have tested it several times and it runs without issues. My data is delimited by '|', which is why I chose --input-fields-terminated-by '|'; pick whatever matches your data on HDFS.
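On the question of finding the exact error: the job summary you posted only says that a map task failed; the actual stack trace (for example a parse or SQL exception) is in that task's log, not in the client output. It can usually be pulled from YARN using the application id that corresponds to the job id in your output:

```shell
# job_1537636876515_6580 in the client output corresponds to this YARN
# application id; fetch its aggregated logs and look for the stack trace.
yarn logs -applicationId application_1537636876515_6580 | grep -E -A 20 'ERROR|Exception'
```

The same log is also reachable from the ResourceManager web UI by clicking through to the failed map attempt.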
