I am running Sqoop 1.4 with Hadoop 2.7.3, and I am also using MySQL 5.7 as the metastore for Hive 2.1.1. Any sqoop eval command or HDFS operation works fine. However, when importing data from MySQL into HDFS, the job fails with the following error. Part of the log reads:
"Container [pid=8424,containerID=container_1522677715514_0003_01_000002] is running beyond virtual memory limits. Current usage: 109.8 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container."
But I have allocated 8 GB of RAM to my virtual machine, there are still 23 GB free on the VM's hard disk, and the data I am trying to import is only 3 rows:
mysql> select * from mytbl;
+----+----------+
| id | name     |
+----+----------+
|  1 | Himanshu |
|  2 | Sekhar   |
|  3 | Paul     |
+----+----------+
So how can it use 2.1 GB of my virtual memory, and how can I fix this?
Below is the log produced by the sqoop import command.
bigdata@bigdata:~$ sqoop import --connect jdbc:mysql://localhost/test --username root --password paul --table mytbl --target-dir /sqoop8
18/04/02 20:01:02 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
18/04/02 20:01:02 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/04/02 20:01:02 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/04/02 20:01:02 INFO tool.CodeGenTool: Beginning code generation
18/04/02 20:01:03 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `mytbl` AS t LIMIT 1
18/04/02 20:01:03 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `mytbl` AS t LIMIT 1
18/04/02 20:01:03 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/bigdata/Work/hadoop-2.7.3
Note: /tmp/sqoop-bigdata/compile/72216fe6b30a45210956d41dc13e7906/mytbl.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/04/02 20:01:07 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-bigdata/compile/72216fe6b30a45210956d41dc13e7906/mytbl.jar
18/04/02 20:01:07 WARN manager.MySQLManager: It looks like you are importing from mysql.
18/04/02 20:01:07 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
18/04/02 20:01:07 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
18/04/02 20:01:07 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
18/04/02 20:01:07 INFO mapreduce.ImportJobBase: Beginning import of mytbl
18/04/02 20:01:08 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/04/02 20:01:11 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
18/04/02 20:01:11 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/04/02 20:01:19 WARN hdfs.DFSClient: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:609)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:577)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:573)
18/04/02 20:01:19 WARN hdfs.DFSClient: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:609)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:370)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:546)
18/04/02 20:01:20 WARN hdfs.DFSClient: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:609)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:577)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:573)
18/04/02 20:01:23 WARN hdfs.DFSClient: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:609)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:370)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:546)
18/04/02 20:01:23 WARN hdfs.DFSClient: DataStreamer Exception
java.nio.channels.ClosedByInterruptException
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:478)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:508)
18/04/02 20:01:24 WARN hdfs.DFSClient: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:609)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:577)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:573)
18/04/02 20:01:24 WARN hdfs.DFSClient: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:609)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:577)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:573)
18/04/02 20:01:24 WARN hdfs.DFSClient: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:609)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:577)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:573)
18/04/02 20:01:25 INFO db.DBInputFormat: Using read commited transaction isolation
18/04/02 20:01:25 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `mytbl`
18/04/02 20:01:26 WARN hdfs.DFSClient: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:609)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:370)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:546)
18/04/02 20:01:27 INFO mapreduce.JobSubmitter: number of splits:3
18/04/02 20:01:29 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1522677715514_0003
18/04/02 20:01:31 INFO impl.YarnClientImpl: Submitted application application_1522677715514_0003
18/04/02 20:01:31 INFO mapreduce.Job: The url to track the job: http://bigdata:8088/proxy/application_1522677715514_0003/
18/04/02 20:01:31 INFO mapreduce.Job: Running job: job_1522677715514_0003
18/04/02 20:01:49 INFO mapreduce.Job: Job job_1522677715514_0003 running in uber mode : false
18/04/02 20:01:49 INFO mapreduce.Job: map 0% reduce 0%
18/04/02 20:02:19 INFO mapreduce.Job: Task Id : attempt_1522677715514_0003_m_000001_0, Status : FAILED
Container [pid=8438,containerID=container_1522677715514_0003_01_000003] is running beyond virtual memory limits. Current usage: 110.6 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1522677715514_0003_01_000003 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 8547 8438 8438 8438 (java) 607 32 2246705152 27562 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/home/bigdata/Work/hadoop-2.7.3/HadoopTemp/nm-local-dir/usercache/bigdata/appcache/application_1522677715514_0003/container_1522677715514_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 37965 attempt_1522677715514_0003_m_000001_0 3
|- 8438 8427 8438 8438 (bash) 0 1 13094912 750 /bin/bash -c /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/home/bigdata/Work/hadoop-2.7.3/HadoopTemp/nm-local-dir/usercache/bigdata/appcache/application_1522677715514_0003/container_1522677715514_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 37965 attempt_1522677715514_0003_m_000001_0 3 1>/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000003/stdout 2>/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000003/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
18/04/02 20:02:40 INFO mapreduce.Job: Task Id : attempt_1522677715514_0003_m_000001_1, Status : FAILED
Container [pid=8656,containerID=container_1522677715514_0003_01_000006] is running beyond virtual memory limits. Current usage: 102.4 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1522677715514_0003_01_000006 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 8694 8656 8656 8656 (java) 520 17 2244608000 25476 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/home/bigdata/Work/hadoop-2.7.3/HadoopTemp/nm-local-dir/usercache/bigdata/appcache/application_1522677715514_0003/container_1522677715514_0003_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 37965 attempt_1522677715514_0003_m_000001_1 6
|- 8656 8654 8656 8656 (bash) 0 0 13094912 749 /bin/bash -c /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/home/bigdata/Work/hadoop-2.7.3/HadoopTemp/nm-local-dir/usercache/bigdata/appcache/application_1522677715514_0003/container_1522677715514_0003_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 37965 attempt_1522677715514_0003_m_000001_1 6 1>/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000006/stdout 2>/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000006/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
18/04/02 20:02:44 INFO mapreduce.Job: Task Id : attempt_1522677715514_0003_m_000000_1, Status : FAILED
Container [pid=8708,containerID=container_1522677715514_0003_01_000007] is running beyond virtual memory limits. Current usage: 104.1 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1522677715514_0003_01_000007 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 8746 8708 8708 8708 (java) 547 22 2244608000 25906 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/home/bigdata/Work/hadoop-2.7.3/HadoopTemp/nm-local-dir/usercache/bigdata/appcache/application_1522677715514_0003/container_1522677715514_0003_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 37965 attempt_1522677715514_0003_m_000000_1 7
|- 8708 8706 8708 8708 (bash) 1 0 13094912 745 /bin/bash -c /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/home/bigdata/Work/hadoop-2.7.3/HadoopTemp/nm-local-dir/usercache/bigdata/appcache/application_1522677715514_0003/container_1522677715514_0003_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 37965 attempt_1522677715514_0003_m_000000_1 7 1>/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000007/stdout 2>/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000007/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
18/04/02 20:02:48 INFO mapreduce.Job: Task Id : attempt_1522677715514_0003_m_000002_1, Status : FAILED
Container [pid=8760,containerID=container_1522677715514_0003_01_000008] is running beyond virtual memory limits. Current usage: 108.3 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1522677715514_0003_01_000008 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 8760 8758 8760 8760 (bash) 0 2 13094912 761 /bin/bash -c /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/home/bigdata/Work/hadoop-2.7.3/HadoopTemp/nm-local-dir/usercache/bigdata/appcache/application_1522677715514_0003/container_1522677715514_0003_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 37965 attempt_1522677715514_0003_m_000002_1 8 1>/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000008/stdout 2>/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000008/stderr
|- 8800 8760 8760 8760 (java) 610 28 2246705152 26964 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/home/bigdata/Work/hadoop-2.7.3/HadoopTemp/nm-local-dir/usercache/bigdata/appcache/application_1522677715514_0003/container_1522677715514_0003_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 37965 attempt_1522677715514_0003_m_000002_1 8
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
18/04/02 20:03:05 INFO mapreduce.Job: Task Id : attempt_1522677715514_0003_m_000001_2, Status : FAILED
Container [pid=8851,containerID=container_1522677715514_0003_01_000010] is running beyond virtual memory limits. Current usage: 108.1 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1522677715514_0003_01_000010 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 8851 8849 8851 8851 (bash) 2 0 13094912 749 /bin/bash -c /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/home/bigdata/Work/hadoop-2.7.3/HadoopTemp/nm-local-dir/usercache/bigdata/appcache/application_1522677715514_0003/container_1522677715514_0003_01_000010/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000010 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 37965 attempt_1522677715514_0003_m_000001_2 10 1>/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000010/stdout 2>/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000010/stderr
|- 8889 8851 8851 8851 (java) 582 27 2246705152 26928 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/home/bigdata/Work/hadoop-2.7.3/HadoopTemp/nm-local-dir/usercache/bigdata/appcache/application_1522677715514_0003/container_1522677715514_0003_01_000010/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000010 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 37965 attempt_1522677715514_0003_m_000001_2 10
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
18/04/02 20:03:09 INFO mapreduce.Job: Task Id : attempt_1522677715514_0003_m_000000_2, Status : FAILED
Container [pid=8915,containerID=container_1522677715514_0003_01_000011] is running beyond virtual memory limits. Current usage: 109.6 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1522677715514_0003_01_000011 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 8953 8915 8915 8915 (java) 599 30 2246705152 27300 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/home/bigdata/Work/hadoop-2.7.3/HadoopTemp/nm-local-dir/usercache/bigdata/appcache/application_1522677715514_0003/container_1522677715514_0003_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 37965 attempt_1522677715514_0003_m_000000_2 11
|- 8915 8913 8915 8915 (bash) 1 0 13094912 761 /bin/bash -c /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/home/bigdata/Work/hadoop-2.7.3/HadoopTemp/nm-local-dir/usercache/bigdata/appcache/application_1522677715514_0003/container_1522677715514_0003_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 37965 attempt_1522677715514_0003_m_000000_2 11 1>/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000011/stdout 2>/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000011/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
18/04/02 20:03:12 INFO mapreduce.Job: Task Id : attempt_1522677715514_0003_m_000002_2, Status : FAILED
Container [pid=8978,containerID=container_1522677715514_0003_01_000012] is running beyond virtual memory limits. Current usage: 108.3 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1522677715514_0003_01_000012 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 9016 8978 8978 8978 (java) 582 24 2246705152 26988 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/home/bigdata/Work/hadoop-2.7.3/HadoopTemp/nm-local-dir/usercache/bigdata/appcache/application_1522677715514_0003/container_1522677715514_0003_01_000012/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000012 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 37965 attempt_1522677715514_0003_m_000002_2 12
|- 8978 8976 8978 8978 (bash) 0 2 13094912 741 /bin/bash -c /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/home/bigdata/Work/hadoop-2.7.3/HadoopTemp/nm-local-dir/usercache/bigdata/appcache/application_1522677715514_0003/container_1522677715514_0003_01_000012/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000012 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 37965 attempt_1522677715514_0003_m_000002_2 12 1>/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000012/stdout 2>/home/bigdata/Work/hadoop-2.7.3/logs/userlogs/application_1522677715514_0003/container_1522677715514_0003_01_000012/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
18/04/02 20:03:28 INFO mapreduce.Job: map 100% reduce 0%
18/04/02 20:03:29 INFO mapreduce.Job: Job job_1522677715514_0003 failed with state FAILED due to: Task failed task_1522677715514_0003_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0
18/04/02 20:03:29 INFO mapreduce.Job: Counters: 12
Job Counters
Failed map tasks=10
Killed map tasks=2
Launched map tasks=12
Other local map tasks=12
Total time spent by all maps in occupied slots (ms)=256493
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=256493
Total vcore-milliseconds taken by all map tasks=256493
Total megabyte-milliseconds taken by all map tasks=262648832
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
18/04/02 20:03:30 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
18/04/02 20:03:30 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 138.8135 seconds (0 bytes/sec)
18/04/02 20:03:30 INFO mapreduce.ImportJobBase: Retrieved 0 records.
18/04/02 20:03:30 ERROR tool.ImportTool: Error during import: Import job failed!
1 Answer
Finally solved the problem. I added the following properties to yarn-site.xml.
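(The XML snippet from the original answer was not preserved in the post, so the block below is only a sketch of the usual fix, assuming the standard approach of relaxing YARN's virtual-memory check. The 2.1 GB limit comes from yarn.nodemanager.vmem-pmem-ratio, which defaults to 2.1 and is multiplied by the container's 1 GB of physical memory, so either disabling the check or raising the ratio stops the NodeManager from killing the container. The property names are real YARN settings, but the values are assumptions, not necessarily the ones used in the original answer.)

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
  <!-- Stop the NodeManager from killing containers based on virtual memory usage -->
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
  <!-- Alternatively, allow 4 MB of virtual memory per MB of physical memory (default 2.1) -->
</property>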
And added the following properties to mapred-site.xml.
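(Again, the exact values were lost; a plausible sketch, assuming the fix was to request larger map/reduce containers and raise the child JVM heap (the process dump above shows the mappers running with only -Xmx200m), would be:)

<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
  <!-- Physical memory requested for each map container -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638m</value>
  <!-- Map JVM heap, kept at roughly 80% of the container size -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1638m</value>
</property>

After updating both files, restart YARN (stop-yarn.sh, then start-yarn.sh) and re-run the sqoop import.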
Thank you @satyapavan... it was a big help.