Below is the command I use to import a table from SQL Server into Hive:
sqoop import --connect 'jdbc:sqlserver://10.0.2.11:1433;database=SP2010' --username pbddms -P --table daily_language --hive-import --hive-database test_hive --hive-table daily_language --hive-overwrite --hive-drop-import-delims --null-string '\\N' --null-non-string '\\N'
But the result is:
19/02/22 09:10:24 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.6.5.0-292
19/02/22 09:10:24 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
19/02/22 09:10:24 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
19/02/22 09:10:24 INFO manager.SqlManager: Using default fetchSize of 1000
19/02/22 09:10:24 INFO tool.CodeGenTool: Beginning code generation
19/02/22 09:10:25 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM [daily_language] AS t WHERE 1=0
19/02/22 09:10:25 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/2.6.5.0-292/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/ddab816638bd5e65108647177ab703b0/daily_language.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
19/02/22 09:10:27 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/ddab816638bd5e65108647177ab703b0/daily_language.jar
19/02/22 09:10:27 INFO mapreduce.ImportJobBase: Beginning import of daily_language
19/02/22 09:10:29 INFO client.RMProxy: Connecting to ResourceManager at mghdop01.dcdms/10.0.37.157:8050
19/02/22 09:10:29 INFO client.AHSProxy: Connecting to Application History server at mghdop01.dcdms/10.0.37.157:10200
19/02/22 09:10:31 INFO db.DBInputFormat: Using read commited transaction isolation
19/02/22 09:10:31 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN([kdbahasa]), MAX([kdbahasa]) FROM [daily_language]
19/02/22 09:10:31 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/root/.staging/job_1547085556146_0680
19/02/22 09:10:31 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Generating splits for a textual index column allowed only in case of "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" property passed as a parameter
at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:204)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:200)
at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:173)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:270)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
at org.apache.sqoop.manager.SQLServerManager.importTable(SQLServerManager.java:163)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:507)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:615)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
Caused by: Generating splits for a textual index column allowed only in case of "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" property passed as a parameter
at org.apache.sqoop.mapreduce.db.TextSplitter.split(TextSplitter.java:67)
at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:201)
... 23 more
Why do I get
ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Generating splits for a textual index column allowed only in case of "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" property passed as a parameter
even though I did not specify --split-by in the Sqoop import above? First, how can I resolve this?
Then I tried adding "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" to the Sqoop import above, but it gave another error, shown below:
19/02/22 09:20:43 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.6.5.0-292
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Error parsing arguments for import:
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: Dorg.apache.sqoop.splitter.allow_text_splitter=true
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: --username
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: pbddms
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: -P
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: --table
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: daily_language
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: --hive-import
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: --hive-database
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: test_hive
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: --hive-table
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: daily_language
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: --hive-overwrite
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: --hive-drop-import-delims
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: --null-string
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: \\N
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: --null-non-string
19/02/22 09:20:43 ERROR tool.BaseSqoopTool: Unrecognized argument: \\N\
Second, how can I resolve this error as well?
1 Answer
Sqoop took the kdbahasa column as the split column. Add the -m 1 parameter to specify the number of mappers. 1 means it will run with a single mapper and no splitting. If you do want splits, read about the split-by column here: https://stackoverflow.com/a/37389134/2700344