org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/hbaseid (Solution)


    When using a MapReduce program to import the contents of a file stored in Hadoop (HDFS) into HBase, the following error occurred:

    2020-07-04 19:46:02,665 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop/eclipse-workspace/map_e1
    2020-07-04 19:46:02,673 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$20/77049026@79dce2fc
    2020-07-04 19:46:02,695 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
    2020-07-04 19:46:02,701 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
    2020-07-04 19:46:02,812 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1731982cf8b000e, negotiated timeout = 90000
    2020-07-04 19:46:02,819 WARN client.ConnectionImplementation: Retrieve cluster id failed
    java.util.concurrent.ExecutionException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/hbaseid
        at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
        at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId(ConnectionImplementation.java:574)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:307)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$0(ConnectionFactory.java:230)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:347)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:228)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:128)
        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.checkOutputSpecs(TableOutputFormat.java:182)
        at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:277)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:143)

    Solution

    The client is connecting to ZooKeeper with default settings and cannot find the /hbase/hbaseid znode there. Add the following statements in the main function so that the job's Configuration carries the HDFS and ZooKeeper connection settings explicitly:

    conf.set("fs.defaultFS","hdfs://localhost:9000"); conf.set("hbase.zookeeper.quorum","localhost"); conf.set("hbase.zookeeper.property.clientPort", "2181"); conf.set("zookeeper.znode.parent","/hbase/master" );

    At the same time, make sure the matching settings are present in the hbase-site.xml file (in my setup it is under /usr/local/hbase/conf); a sketch of such entries follows below. The HDFS path and the values of "hbase.zookeeper.quorum" and "hbase.zookeeper.property.clientPort" depend on your actual configuration; you can find them in the logs, or check them from the ZooKeeper client (command: bin/hbase zkcli).
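    As a sketch only, a pseudo-distributed setup with HDFS and ZooKeeper on localhost would typically have entries like these in hbase-site.xml; substitute your own host names, client port, and HDFS root directory:

    <configuration>
      <!-- HBase data directory in HDFS; must point at the same namenode as fs.defaultFS -->
      <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
      </property>
      <!-- ZooKeeper quorum and client port that HBase registers itself with -->
      <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
      </property>
      <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
      </property>
    </configuration>

    To confirm the znode really exists, open the ZooKeeper shell with bin/hbase zkcli and run ls /hbase; once HBase is running, the hbaseid node should appear in the listing.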
