I have a Maven project, compiled with Java 7, that uses Spring IoC to test the HBase client jar.
The dependencies are as follows:
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.0.0-cdh5.5.4</version>
</dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>1.0.0-cdh5.5.4</version>
</dependency>
I have a JUnit test that starts a local cluster, creates a table, and loads some data; a client then connects to the cluster and performs some lookups. The test then shuts the cluster down and starts it again, to exercise the client's reconnection mechanism.
The problem is that during the startup that follows the shutdown, an exception is thrown, and after a lot of digging it looks like the shutdown never actually completes successfully.
Any help figuring out how to shut the cluster down properly would be appreciated.
Code context:
@RunWith(SpringJUnit4ClassRunner.class)
public class TestHBaseUserOfflineReconnection
{
    @Value("${userTableName}")
    private static String userTableName = "TestTable";

    @Autowired
    @Qualifier("hbaseUserOfflineReconnectDao")
    private UserDao userOfflineReconnectDao;

    @Autowired
    private DemographicBenchmark benchmark;

    private Table htable;
    private static LocalHBaseCluster hbaseCluster;
    private static MiniZooKeeperCluster zooKeeperCluster;
    private static Configuration configuration;
    static Connection conn = null;

    @BeforeClass
    public static void setup() throws IOException, InterruptedException
    {
        // delete the default local folder where HBase stores its files
        String userName = System.getProperty("user.name");
        FileUtils.deleteDirectory(new File("/tmp/hbase-" + userName));
        initHbase();
    }

    public static void initHbase() throws IOException, InterruptedException
    {
        configuration = HBaseConfiguration.create();
        zooKeeperCluster = new MiniZooKeeperCluster(configuration);
        zooKeeperCluster.setDefaultClientPort(2181);
        zooKeeperCluster.startup(new File("target/zookeeper-" + System.currentTimeMillis()));
        hbaseCluster = new LocalHBaseCluster(configuration, 1);
        hbaseCluster.startup();
    }

    @Before
    public void initHTable() throws IOException
    {
        configuration.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
        conn = ConnectionFactory.createConnection(configuration);
        HTableDescriptor table = new HTableDescriptor(TableName.valueOf(userTableName));
        table.addFamily(new HColumnDescriptor("cf"));
        conn.getAdmin().createTable(table);
        htable = conn.getTable(TableName.valueOf(userTableName));
    }

    public static void shutdown() throws IOException
    {
        hbaseCluster.shutdown();
        hbaseCluster.waitOnMaster(0);
        zooKeeperCluster.shutdown();
    }

    @Test
    public void testHBaseReconnection() throws IOException, TkException, InterruptedException
    {
        // do some lookups with the client, and all goes well...
        shutdown();
        initHbase(); // HERE IS WHERE I GET THE EXCEPTION
        // some more code...
        shutdown(); // after the test finishes, close the cluster
    }
}
The exception I get is:
ERROR 2016-07-13 16:48:03,849 [B.defaultRpcServer.handler=4,queue=1,port=46727] org.apache.hadoop.hbase.master.MasterRpcServices: Region server localhost,44545,1468417682471 reported a fatal error:
ABORTING region server localhost,44545,1468417682471: Unhandled: Region server startup failed
Cause:
java.io.IOException: Region server startup failed
    at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:2827)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1317)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:852)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source RegionServer,sub=Server already exists!
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:135)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:112)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:228)
    at org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(BaseSourceImpl.java:75)
    at org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceImpl.<init>(MetricsRegionServerSourceImpl.java:66)
    at org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceImpl.<init>(MetricsRegionServerSourceImpl.java:58)
    at org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.createServer(MetricsRegionServerSourceFactoryImpl.java:46)
    at org.apache.hadoop.hbase.regionserver.MetricsRegionServer.<init>(MetricsRegionServer.java:38)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1301)
    ... 2 more
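For what it's worth, the root cause in the trace ("Metrics source RegionServer,sub=Server already exists!") suggests that Hadoop's metrics system keeps its registered source names in JVM-wide static state, so even a clean cluster shutdown would leave the old registration behind when I restart in the same process. The failure mode can be reproduced with a self-contained toy sketch (hypothetical class and names, not the real Hadoop code):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a JVM-wide metrics registry (hypothetical; NOT the real
// DefaultMetricsSystem). The point: source names live in static state,
// so a second in-process startup re-registers the same name and fails.
public class MetricsRegistryDemo {
    private static final Map<String, Object> SOURCES = new HashMap<String, Object>();

    static void register(String name) {
        if (SOURCES.containsKey(name)) {
            throw new IllegalStateException("Metrics source " + name + " already exists!");
        }
        SOURCES.put(name, new Object());
    }

    // Each region-server startup registers the same fixed source name.
    static void startRegionServer() {
        register("RegionServer,sub=Server");
    }

    // Shutdown stops threads but never unregisters the metrics source,
    // mirroring what seems to happen with the local cluster here.
    static void shutdownRegionServer() {
    }

    // Test hook: clears the static registry (no real-world equivalent here).
    static void reset() {
        SOURCES.clear();
    }

    public static void main(String[] args) {
        startRegionServer();
        shutdownRegionServer();
        try {
            startRegionServer(); // second in-process startup
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

If that reading is right, no shutdown ordering will fix this, because the registration survives shutdown by design. If I remember correctly, Hadoop's own in-process test clusters work around exactly this via `DefaultMetricsSystem.setMiniClusterMode(true)`, which makes registered source names unique per registration; that may be worth trying before the first startup, though I haven't verified it against 1.0.0-cdh5.5.4.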