Unable to replicate to a Hadoop datanode when importing from a MySQL database

lstz6jyr posted on 2021-05-29 in Hadoop

I am trying to import data from a MySQL table into HDFS, using the Sqoop import command below:

sqoop import --connect jdbc:mysql://localhost:3306/employee --username root --password *** --table Emp --m 1

I get the following error:

16/05/07 20:01:18 ERROR tool.ImportTool: Encountered IOException running import job: java.io.FileNotFoundException: File does not exist: hdfs://localhost:54310/usr/lib/sqoop/lib/parquet-format-2.0.0.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:269)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:390)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:483)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:196)
at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:169)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:266)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:673)
at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:118)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)

I do have parquet-format-2.0.0.jar in the /usr/lib/sqoop folder, but even so it shows this error.
I then tried to copy all of the Sqoop libraries into HDFS, but I am not able to do that either: the copy (roughly as sketched below) throws the error that follows.
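The copy attempt was something like this (the target path mirrors the one in the FileNotFoundException above; the exact commands are an assumption):

# create the matching directory inside HDFS
hdfs dfs -mkdir -p /usr/lib/sqoop/lib
# upload the local Sqoop jars, parquet-format-2.0.0.jar included
hdfs dfs -put /usr/lib/sqoop/lib/*.jar /usr/lib/sqoop/lib/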
16/05/07 18:40:11 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /usr/lib/sqoop/lib/xz-1.0.jar._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at ...
What can I do now? I can neither copy the jar files to HDFS nor import data from MySQL into HDFS.
I tried the solution from "Sqoop import error - File does not exist", but could not get past its second step. I have also cleared the cache and restarted the Hadoop file system.
Thanks


6tqwzwtp1#

For the datanode replication error, the possible causes are the following (both can be checked with the commands after this list):

The datanodes don't have enough disk space to store the blocks.

The namenode cannot reach the datanode(s), or the datanode(s) are down/unavailable.
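A quick way to check both from the cluster host:

# is a DataNode JVM actually running?
jps
# how many live datanodes does the namenode see, and how much capacity is left?
hdfs dfsadmin -report
# local disk usage on the datanode host
df -h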

Make sure there is connectivity between the namenode and the datanodes, and that the datanodes have enough space to store new blocks.
If the disk space is sufficient, then you need to reformat the namenode (a sketch of that sequence follows).
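A sketch of that reformat sequence for a single-node setup. Note that hdfs namenode -format erases all HDFS metadata; clearing the datanode storage directory is an extra step beyond the advice above, but without it the datanode typically fails to register after a reformat because of a clusterID mismatch. The storage path is only an example; check dfs.datanode.data.dir in your hdfs-site.xml:

# stop HDFS first
stop-dfs.sh
# clear the old datanode storage so its clusterID matches the freshly formatted namenode
# (example path; the real one is dfs.datanode.data.dir in hdfs-site.xml)
rm -rf /tmp/hadoop-$USER/dfs/data
# reformat the namenode (destroys all existing HDFS data and metadata)
hdfs namenode -format
# restart and confirm a live datanode is reported
start-dfs.sh
hdfs dfsadmin -report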
