I am trying to get MapReduce with Python working, following http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
When I try to run the MapReduce job in Hadoop, I keep getting a "Resources are low on NN" error. I have already deleted files from HDFS and also attached an extra volume to the cloud instance. I have turned safe mode off, but the problem is that it immediately turns itself back on.
Filesystem               Size    Used   Available  Use%
hdfs://localhost:9000    38.8 G  9.3 G  26.4 G     24%
So I should have enough space...
The Python files have already been tested locally and run fine. In fact, this setup worked on the cloud instance before, but for some reason it no longer does.
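One thing worth checking first (a sketch, with an assumed NameNode directory path): the "Resources are low on NN" monitor watches free space on the *local* disk holding the NameNode's storage directories (`dfs.namenode.name.dir`), not HDFS capacity, so `hdfs dfs -df` can report plenty of space while the safe-mode check still fails.

```shell
#!/bin/sh
# Assumption: the NameNode storage dir is at the Hadoop default location
# under hadoop.tmp.dir; adjust to whatever hdfs-site.xml actually says.
NN_DIR="/tmp/hadoop-ubuntu/dfs/name"
[ -d "$NN_DIR" ] || NN_DIR=/tmp   # fall back so the check still runs

# Free kilobytes on the local volume holding the NameNode dir (POSIX df).
avail_kb=$(df -Pk "$NN_DIR" | awk 'NR==2 {print $4}')
echo "available KB on NameNode volume: $avail_kb"

# Hadoop's default threshold is 100 MB (dfs.namenode.resource.du.reserved);
# below it, the NN re-enters safe mode as soon as you leave it.
if [ "$avail_kb" -lt 102400 ]; then
  echo "below the 100 MB default threshold - free local space first"
fi
# Only after local space is freed:  hdfs dfsadmin -safemode leave
```

If the volume under `/tmp` (or wherever the NameNode dirs live) is nearly full, that would explain why `hdfs dfsadmin -safemode leave` keeps being undone.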
ubuntu@lesolab2:~$ hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.2.jar -file mapper.py -mapper mapper.py -file reducer.py -reducer reducer.py -input /user/python/tweets/files/* -output /user/python/result
16/04/25 14:22:09 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
packageJobJar: [mapper.py, reducer.py] [] /tmp/streamjob5539719163695874514.jar tmpDir=null
16/04/25 14:22:10 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
16/04/25 14:22:10 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
16/04/25 14:22:10 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
16/04/25 14:22:11 INFO mapred.FileInputFormat: Total input paths to process : 21
16/04/25 14:22:11 INFO net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
16/04/25 14:22:11 INFO mapreduce.JobSubmitter: number of splits:86
16/04/25 14:22:11 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1741181014_0001
16/04/25 14:22:12 INFO mapred.LocalDistributedCacheManager: Localized file:/home/ubuntu/mapper.py as file:/tmp/hadoop-ubuntu/mapred/local/1461594132014/mapper.py
16/04/25 14:22:12 INFO mapred.LocalDistributedCacheManager: Localized file:/home/ubuntu/reducer.py as file:/tmp/hadoop-ubuntu/mapred/local/1461594132015/reducer.py
16/04/25 14:22:12 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
16/04/25 14:22:12 INFO mapreduce.Job: Running job: job_local1741181014_0001
16/04/25 14:22:12 INFO mapred.LocalJobRunner: OutputCommitter set in config null
16/04/25 14:22:12 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
16/04/25 14:22:12 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/04/25 14:22:12 INFO mapred.LocalJobRunner: Error cleaning up job:job_local1741181014_0001
16/04/25 14:22:12 WARN mapred.LocalJobRunner: job_local1741181014_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /user/python/result/_temporary/0. Name node is in safe mode.
Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1327)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3893)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:983)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
at org.apache.hadoop.ipc.Client.call(Client.java:1475)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy9.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1043)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(FileOutputCommitter.java:305)
at org.apache.hadoop.mapred.FileOutputCommitter.setupJob(FileOutputCommitter.java:131)
at org.apache.hadoop.mapred.OutputCommitter.setupJob(OutputCommitter.java:233)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:511)
16/04/25 14:22:13 INFO mapreduce.Job: Job job_local1741181014_0001 running in uber mode : false
16/04/25 14:22:13 INFO mapreduce.Job: map 0% reduce 0%
16/04/25 14:22:13 INFO mapreduce.Job: Job job_local1741181014_0001 failed with state FAILED due to: NA
16/04/25 14:22:13 INFO mapreduce.Job: Counters: 0
16/04/25 14:22:13 ERROR streaming.StreamJob: Job not successful!
Streaming Command Failed!
Any idea why the streaming job fails?