org.apache.ignite.IgniteException: For input string: "30s" during IgniteHadoop execution

9gm1akwq, posted 2021-05-27 in Hadoop

I want to run the Hadoop wordcount example on top of Apache Ignite. I am using IGFS in Ignite as a cache over my HDFS configuration, but after submitting the job through Hadoop so that it executes on Ignite, I get the error below.
Thanks in advance to anyone who can help me!

Using configuration: examples/config/filesystem/example-igfs-hdfs.xml

[00:47:13]    __________  ________________ 
[00:47:13]   /  _/ ___/ |/ /  _/_  __/ __/ 
[00:47:13]  _/ // (7 7    // /  / / / _/   
[00:47:13] /___/\___/_/|_/___/ /_/ /___/  
[00:47:13] 
[00:47:13] ver. 2.6.0#20180710-sha1:669feacc
[00:47:13] 2018 Copyright(C) Apache Software Foundation
[00:47:13] 
[00:47:13] Ignite documentation: http://ignite.apache.org
[00:47:13] 
[00:47:13] Quiet mode.
[00:47:13]   ^-- Logging to file '/usr/local/apache-ignite-fabric-2.6.0-bin/work/log/ignite-f3712946.log'
[00:47:13]   ^-- Logging by 'Log4JLogger [quiet=true, config=/usr/local/apache-ignite-fabric-2.6.0-bin/config/ignite-log4j.xml]'
[00:47:13]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
[00:47:13] 
[00:47:13] OS: Linux 4.15.0-46-generic amd64
[00:47:13] VM information: Java(TM) SE Runtime Environment 1.8.0_192-ea-b04 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.192-b04
[00:47:13] Configured plugins:
[00:47:13]   ^-- Ignite Native I/O Plugin [Direct I/O] 
[00:47:13]   ^-- Copyright(C) Apache Software Foundation
[00:47:13] 
[00:47:13] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0]]
[00:47:22] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[00:47:22] Security status [authentication=off, tls/ssl=off]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/htrace%20dependency/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-rest-http/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-yarn/ignite-yarn-2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-zookeeper/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.helpers.NOPLoggerFactory]
[00:47:23] HADOOP_HOME is set to /usr/local/hadoop
[00:47:23] Resolved Hadoop classpath locations: /usr/local/hadoop/share/hadoop/common, /usr/local/hadoop/share/hadoop/hdfs, /usr/local/hadoop/share/hadoop/mapreduce
[00:47:26] Performance suggestions for grid  (fix if possible)
[00:47:26] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[00:47:26]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[00:47:26]   ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
[00:47:26]   ^-- Disable processing of calls to System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
[00:47:26]   ^-- Enable ATOMIC mode if not using transactions (set 'atomicityMode' to ATOMIC)
[00:47:26]   ^-- Disable fully synchronous writes (set 'writeSynchronizationMode' to PRIMARY_SYNC or FULL_ASYNC)
[00:47:26] Refer to this page for more performance suggestions: https://apacheignite.readme.io/docs/jvm-and-system-tuning
[00:47:26] 
[00:47:26] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[00:47:26] 
[00:47:26] Ignite node started OK (id=f3712946)
[00:47:26] Topology snapshot [ver=1, servers=1, clients=0, CPUs=8, offheap=1.6GB, heap=1.0GB]
[00:47:26]   ^-- Node [id=F3712946-0810-440F-A440-140FE4AB6FA7, clusterState=ACTIVE]
[00:47:26] Data Regions Configured:
[00:47:27]   ^-- default [initSize=256.0 MiB, maxSize=1.6 GiB, persistenceEnabled=false]
[00:47:35] New version is available at ignite.apache.org: 2.7.0
[2019-03-13 00:47:46,978][ERROR][igfs-igfs-ipc-#53][IgfsImpl] File info operation in DUAL mode failed [path=/output]
class org.apache.ignite.IgniteException: For input string: "30s"
	at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:100)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.getWithMappedName(HadoopCachingFileSystemFactoryDelegate.java:53)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:75)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:43)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopIgfsSecondaryFileSystemDelegateImpl.fileSystemForUser(HadoopIgfsSecondaryFileSystemDelegateImpl.java:517)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopIgfsSecondaryFileSystemDelegateImpl.info(HadoopIgfsSecondaryFileSystemDelegateImpl.java:296)
	at org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.info(IgniteHadoopIgfsSecondaryFileSystem.java:240)
	at org.apache.ignite.internal.processors.igfs.IgfsImpl.resolveFileInfo(IgfsImpl.java:1600)
	at org.apache.ignite.internal.processors.igfs.IgfsImpl.access$800(IgfsImpl.java:110)
	at org.apache.ignite.internal.processors.igfs.IgfsImpl$6.call(IgfsImpl.java:524)
	at org.apache.ignite.internal.processors.igfs.IgfsImpl$6.call(IgfsImpl.java:517)
	at org.apache.ignite.internal.processors.igfs.IgfsImpl.safeOp(IgfsImpl.java:1756)
	at org.apache.ignite.internal.processors.igfs.IgfsImpl.info(IgfsImpl.java:517)
	at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$2.apply(IgfsIpcHandler.java:341)
	at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$2.apply(IgfsIpcHandler.java:332)
	at org.apache.ignite.igfs.IgfsUserContext.doAs(IgfsUserContext.java:54)
	at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.processPathControlRequest(IgfsIpcHandler.java:332)
	at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.execute(IgfsIpcHandler.java:241)
	at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.access$000(IgfsIpcHandler.java:57)
	at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$1.run(IgfsIpcHandler.java:167)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteCheckedException: For input string: "30s"
	at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7307)
	at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:259)
	at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:171)
	at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
	at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.getValue(HadoopLazyConcurrentMap.java:191)
	at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:93)
	... 22 more
Caused by: java.lang.NumberFormatException: For input string: "30s"
	at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
	at java.lang.Long.parseLong(Long.java:589)
	at java.lang.Long.parseLong(Long.java:631)
	at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1538)
	at org.apache.hadoop.hdfs.DFSClient$Conf.<init>(DFSClient.java:430)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:540)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:524)
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
	at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:217)
	at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:214)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:214)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.create(HadoopBasicFileSystemFactoryDelegate.java:117)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.getWithMappedName(HadoopBasicFileSystemFactoryDelegate.java:95)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.access$001(HadoopCachingFileSystemFactoryDelegate.java:32)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate$1.createValue(HadoopCachingFileSystemFactoryDelegate.java:37)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate$1.createValue(HadoopCachingFileSystemFactoryDelegate.java:35)
	at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.init(HadoopLazyConcurrentMap.java:173)
	at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.access$100(HadoopLazyConcurrentMap.java:154)
	at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:82)
	... 22 more
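The bottom of the stack trace pinpoints the failure: Hadoop 2.4.1's `Configuration.getLong()` passes the raw property value straight to `Long.parseLong()`, which does not understand duration suffixes such as `"30s"` (newer Hadoop releases parse such values with `getTimeDuration()` instead). A minimal sketch reproducing the failing parse:

```java
// Reproduces the root cause from the trace: Long.parseLong() rejects
// values with a time-unit suffix, which is exactly what happens when
// Hadoop 2.4.1's Configuration.getLong() reads "30s".
public class ParseDemo {
    public static void main(String[] args) {
        System.out.println(Long.parseLong("30"));      // a plain number parses fine
        try {
            Long.parseLong("30s");                     // what Configuration.getLong() attempts
        } catch (NumberFormatException e) {
            System.out.println(e.getMessage());        // For input string: "30s"
        }
    }
}
```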

To run the Hadoop wordcount example, I created a folder named /user/input/ in HDFS and put a text file in it, then ran the wordcount example with the following command (path separators reconstructed from the garbled translation):

time hadoop --config /home/mehdi/ignite-conf/ignite-configs-master/igfs-hadoop-fs-cache/ignite-conf jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar wordcount /user/input /output

ukqbszuj (answer 1)

It really works! Thank you very much. I replaced the Hadoop 3.2.0 libraries in the $IGNITE_HOME/libs path with Hadoop 2.4.1 and ran the example again. It ran without any errors or exceptions. Great!

agyaoht7 (answer 2)

Check whether the dfs.client.datanode-restart.timeout property is specified anywhere in your configuration. It is set to 30s, but the parser expects a plain number as the value. If it is not specified anywhere, try setting it to 30 explicitly.
The problem comes from the fact that Ignite internally uses Hadoop 2.4.1, and that version's configuration parser does not support time units. You are trying to run examples compiled for Hadoop 3.2.0. I would suggest switching to Hadoop 2.4.1 to avoid other compatibility problems.
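If the value comes from your own hdfs-site.xml (a sketch, assuming that is where it is defined), the override would look like this; the key point is a plain number with no unit suffix, since Hadoop 2.4.1's parser hands the raw string to Long.parseLong():

```xml
<!-- hdfs-site.xml (hypothetical location of the property) -->
<property>
  <name>dfs.client.datanode-restart.timeout</name>
  <value>30</value> <!-- not "30s": Hadoop 2.4.1 cannot parse time units -->
</property>
```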
