Error: unable to fetch the table list from a remote HBase database?

csga3l58 · published 2021-05-29 · in Hadoop · answers (1) · views (432)

I have already added the IP address of quickstart.cloudera to the C:\Windows\System32\drivers\etc hosts file. I use the hostname quickstart.cloudera in the hbase-site.xml file, which I pasted into my Eclipse project. The same code works when I connect to HBase on my local system, but when I try to run it against the remote cluster, something goes wrong.
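For reference, a hosts-file entry mapping that hostname to the cluster IP (192.168.0.106, as it appears in the ZooKeeper log further down) would look like this:

```
# C:\Windows\System32\drivers\etc\hosts
192.168.0.106    quickstart.cloudera
```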

HBaseConfiguration hc = new HBaseConfiguration(new Configuration());
hc.set("hbase.master", "quickstart.cloudera:60000");
hc.set("hbase.zookeeper.quorum", "quickstart.cloudera");
hc.set("hbase.zookeeper.property.clientPort", "2181");
HBaseAdmin admin = new HBaseAdmin(hc);
HTableDescriptor[] tableDescriptor = admin.listTables();
for (int i = 0; i < tableDescriptor.length; i++) {
    System.out.println(tableDescriptor[i].getNameAsString());
}
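The same settings the code applies programmatically can equivalently live in the hbase-site.xml on the project classpath. A minimal sketch, using only the hostname and ports already shown in the code above:

```xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>quickstart.cloudera</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>quickstart.cloudera:60000</value>
  </property>
</configuration>
```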

My output:

15/10/01 15:30:55 INFO zookeeper.ClientCnxn: Opening socket connection to server quickstart.cloudera/192.168.0.106:2181. Will not attempt to authenticate using SASL (unknown error)
15/10/01 15:30:55 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.0.105:62868, server: quickstart.cloudera/192.168.0.106:2181
15/10/01 15:30:55 INFO zookeeper.ClientCnxn: Session establishment complete on server quickstart.cloudera/192.168.0.106:2181, sessionid = 0x150220e6706002c, negotiated timeout = 60000
15/10/01 15:30:55 WARN util.DynamicClassLoader: Failed to identify the fs of dir hdfs://192.168.0.106:8020/hbase/lib, ignored
java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2138)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2145)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:80)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2184)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2166)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:302)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
    at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:242)
    at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:850)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:635)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
    at java.lang.reflect.Constructor.newInstance(Unknown Source)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
    at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:414)
    at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:407)
    at org.apache.hadoop.hbase.client.ConnectionManager.getConnectionInternal(ConnectionManager.java:285)
    at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:207)
    at HbaseList.main(HbaseList.java:22)
15/10/01 15:31:53 INFO client.RpcRetryingCaller: Call exception, tries=10, retries=35, started=57299 ms ago, cancelled=false, msg=
15/10/01 15:32:14 INFO client.RpcRetryingCaller: Call exception, tries=11, retries=35, started=78658 ms ago, cancelled=false, msg=
15/10/01 15:32:35 INFO client.RpcRetryingCaller: Call exception, tries=12, retries=35, started=99756 ms ago, cancelled=false, msg=
oknrviil (answer #1):

Try setting these configuration settings:

Configuration conf = new Configuration();
// host and port are your HDFS NameNode host and port
conf.set("fs.defaultFS", "hdfs://" + host + ":" + port);
// Register the filesystem implementations explicitly so the client
// can resolve the "hdfs" and "file" URI schemes.
conf.set("fs.hdfs.impl",
        org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
conf.set("fs.file.impl",
        org.apache.hadoop.fs.LocalFileSystem.class.getName());

HBaseConfiguration hc = new HBaseConfiguration(conf);
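As background to this fix: "No FileSystem for scheme: hdfs" typically means the class that implements the hdfs scheme, org.apache.hadoop.hdfs.DistributedFileSystem, is not on the client's classpath (it ships in the hadoop-hdfs jar, not hadoop-common), so besides setting fs.hdfs.impl you may also need to add that jar to the Eclipse project. A sketch of the Maven dependency, with a placeholder version that you should match to your cluster's Hadoop version:

```xml
<!-- version is a placeholder; match it to the cluster's Hadoop release -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>2.6.0</version>
</dependency>
```

The same symptom can also appear in shaded/fat jars when the META-INF/services filesystem registrations from hadoop-common and hadoop-hdfs overwrite each other; setting fs.hdfs.impl explicitly, as the answer does, sidesteps that.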
