Getting an error in Sqoop when importing all tables in the Cloudera QuickStart VM

9lowa7mx · posted 2021-05-29 in Hadoop

I get the following error when trying to import all tables via Sqoop:
sqoop import-all-tables -m 12 --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" --username=retail_dba --password=cloudera --warehouse-dir=/r/cloudera/sqoop_import

Please set $ACCUMULO_HOME to the root of your Accumulo installation.
            17/04/23 15:29:27 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.8.0
            17/04/23 15:29:27 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
            17/04/23 15:29:27 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
            17/04/23 15:29:27 INFO tool.CodeGenTool: Beginning code generation
            17/04/23 15:29:27 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1
            17/04/23 15:29:27 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1
            17/04/23 15:29:27 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
            Note: /tmp/sqoop-cloudera/compile/e8e72a2e112fced2b0f3251b5666473d/categories.java uses or overrides a deprecated API.
            Note: Recompile with -Xlint:deprecation for details.
            17/04/23 15:29:30 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/e8e72a2e112fced2b0f3251b5666473d/categories.jar
            17/04/23 15:29:30 WARN manager.MySQLManager: It looks like you are importing from mysql.
            17/04/23 15:29:30 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
            17/04/23 15:29:30 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
            17/04/23 15:29:30 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
            17/04/23 15:29:30 INFO mapreduce.ImportJobBase: Beginning import of categories
            17/04/23 15:29:31 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
            17/04/23 15:29:32 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
            17/04/23 15:29:32 INFO client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/192.168.40.134:8032
            17/04/23 15:29:37 INFO db.DBInputFormat: Using read commited transaction isolation
            17/04/23 15:29:37 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`category_id`), MAX(`category_id`) FROM `categories`
            17/04/23 15:29:37 INFO db.IntegerSplitter: Split size: 4; Num splits: 12 from: 1 to: 58
            17/04/23 15:29:38 INFO mapreduce.JobSubmitter: number of splits:12
            17/04/23 15:29:38 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1492945339848_0010
            17/04/23 15:29:39 INFO impl.YarnClientImpl: Submitted application application_1492945339848_0010
            17/04/23 15:29:39 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1492945339848_0010/
            17/04/23 15:29:39 INFO mapreduce.Job: Running job: job_1492945339848_0010
            17/04/23 15:29:52 INFO mapreduce.Job: Job job_1492945339848_0010 running in uber mode : false
            17/04/23 15:29:52 INFO mapreduce.Job:  map 0% reduce 0%
            17/04/23 15:29:52 INFO mapreduce.Job: Job job_1492945339848_0010 failed with state FAILED due to: Application application_1492945339848_0010 failed 2 times due to AM Container for appattempt_1492945339848_0010_000002 exited with  exitCode: 1
            For more detailed output, check application tracking page:http://quickstart.cloudera:8088/proxy/application_1492945339848_0010/Then, click on links to logs of each attempt.
            Diagnostics: Exception from container-launch.
            Container id: container_1492945339848_0010_02_000001
            Exit code: 1
            Stack trace: ExitCodeException exitCode=1: 
                at org.apache.hadoop.util.Shell.runCommand(Shell.java:578)
                at org.apache.hadoop.util.Shell.run(Shell.java:481)
                at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:763)
                at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
                at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
                at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
                at java.util.concurrent.FutureTask.run(FutureTask.java:262)
                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
                at java.lang.Thread.run(Thread.java:745)

            Container exited with a non-zero exit code 1
            Failing this attempt. Failing the application.
            17/04/23 15:29:52 INFO mapreduce.Job: Counters: 0
            17/04/23 15:29:52 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
            17/04/23 15:29:52 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 19.6175 seconds (0 bytes/sec)
            17/04/23 15:29:52 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
            17/04/23 15:29:52 INFO mapreduce.ImportJobBase: Retrieved 0 records.
            17/04/23 15:29:52 ERROR tool.ImportAllTablesTool: Error during import: Import job failed!
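As a side note, the log above shows where the splits come from: Sqoop's IntegerSplitter divides the [MIN, MAX] range of the split column (`category_id`, 1 to 58) among the requested map tasks. A rough sketch of that arithmetic in shell (Sqoop's actual splitter handles remainders slightly differently, but the split size matches the log line):

```shell
# Sqoop divides the primary-key range among the mappers.
# With category_id ranging from 1 to 58 and -m 12:
min=1; max=58; mappers=12
split_size=$(( (max - min) / mappers ))  # integer division: 57 / 12
echo "Split size: $split_size; Num splits: $mappers from: $min to: $max"
# -> Split size: 4; Num splits: 12 from: 1 to: 58
```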
vwoqyblh · Answer 1

Try importing a single table first instead of importing all of them, and also limit the number of mappers: with import-all-tables, 12 mappers strain the memory on the VM.

sqoop import-all-tables \
--connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
--warehouse-dir=/user/cloudera/sqoop_import \
--username=retail_dba \
--password=cloudera \
-m 2
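To test a single table first, as suggested above, something like the following should work (the table name `categories` and the target path are taken from the question's log; adjust for your VM):

```shell
# Import just the categories table with one mapper to verify the setup
# before retrying import-all-tables. -P prompts for the password instead
# of exposing it on the command line (per the warning in the log).
sqoop import \
  --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
  --username retail_dba -P \
  --table categories \
  --target-dir /user/cloudera/sqoop_import/categories \
  -m 1
```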
bqf10yzr · Answer 2

It looks like the application master is being killed repeatedly, which means it isn't getting the memory it needs. If you are just trying out Sqoop on the Cloudera VM, don't use -m 12: that spawns 12 parallel map tasks, which your (single) machine probably can't handle. Drop that setting entirely, or try --direct instead. Also, what is --warehouse-dir=/r/cloudera/sqoop_import? Is /r/ intentional, or a typo for /user/? Try the following:

sqoop import-all-tables \
--connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
--warehouse-dir=/user/cloudera/sqoop_import \
--username=retail_dba \
--direct \
--password=cloudera
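If the job still fails, the AM container's own stdout/stderr usually names the real cause (memory limits, a bad classpath). Assuming log aggregation is enabled on the VM and using the application id from the log above:

```shell
# Pull the aggregated YARN logs for the failed application to see why
# the AM container exited with code 1.
yarn logs -applicationId application_1492945339848_0010
```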
