I am starting a MiniDFSCluster in my tests (my dependency is 2.0.0-cdh4.5.0). I start it with a simple program:
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// clean the base directory, then spin up an in-process HDFS cluster
File baseDir = new File("./target/hdfs/" + RunWithHadoopCluster.class.getSimpleName()).getAbsoluteFile();
FileUtil.fullyDelete(baseDir);
Configuration conf = new Configuration();
conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, baseDir.getAbsolutePath());
MiniDFSCluster.Builder builder = new MiniDFSCluster.Builder(conf);
MiniDFSCluster hdfsCluster = builder.build();
String hdfsURI = "hdfs://localhost:" + hdfsCluster.getNameNodePort() + "/";
and I keep getting the following error:
12:02:15.994 [main] WARN o.a.h.metrics2.impl.MetricsConfig - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
12:02:16.047 [main] INFO o.a.h.m.impl.MetricsSystemImpl - Scheduled snapshot period at 10 second(s).
12:02:16.047 [main] INFO o.a.h.m.impl.MetricsSystemImpl - NameNode metrics system started
java.lang.IncompatibleClassChangeError: Implementing class
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at org.apache.hadoop.metrics2.source.JvmMetrics.getEventCounters(JvmMetrics.java:162)
at org.apache.hadoop.metrics2.source.JvmMetrics.getMetrics(JvmMetrics.java:96)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:194)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:171)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:150)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:57)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:220)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:95)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:244)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:222)
at org.apache.hadoop.metrics2.source.JvmMetrics.create(JvmMetrics.java:80)
at org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics.create(NameNodeMetrics.java:94)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initMetrics(NameNode.java:278)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:436)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:613)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:598)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:879)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:770)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:628)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:323)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:113)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:305)
Why is this happening?
2 Answers
tct7dpnv1#
Upgrading to slf4j 1.7.6 should fix this (we used 1.7.7), because log4j-over-slf4j v1.7.5 is missing AppenderSkeleton.
Most likely some class uses log4j and calls AppenderSkeleton somewhere, but the bridge that redirects log4j through slf4j was missing that class, so it blows up with the stack trace the poster shows. The release notes at http://www.slf4j.org/news.html state that this was addressed in 1.7.6.
The issue as we hit it was logged against YARN: https://issues.apache.org/jira/browse/yarn-2875.
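For example, assuming a Maven build (this is only a sketch, not the poster's actual pom), you can force every slf4j module onto one version via dependencyManagement; include whichever slf4j artifacts are actually on your classpath:

<properties>
  <slf4j.version>1.7.7</slf4j.version>
</properties>
<dependencyManagement>
  <dependencies>
    <!-- keep all slf4j modules on the same version; 1.7.6+ restores AppenderSkeleton in log4j-over-slf4j -->
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>${slf4j.version}</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>log4j-over-slf4j</artifactId>
      <version>${slf4j.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>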
pxy2qtax2#
Double-check your dependencies. This error indicates that incompatible versions of the logging jars are on the classpath. I ran into a similar problem and had to exclude the log4j-over-slf4j dependency pulled in by another third-party library.
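As a rough sketch, assuming a Maven build: run mvn dependency:tree to see which library pulls in log4j-over-slf4j, then exclude it on that dependency (the groupId/artifactId of the third-party library below are placeholders):

<dependency>
  <groupId>some.thirdparty.group</groupId>   <!-- placeholder: the library that drags in log4j-over-slf4j -->
  <artifactId>their-library</artifactId>     <!-- placeholder -->
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>log4j-over-slf4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>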