CentOS 7 + ZooKeeper 3.4.14 + Hadoop 2.7.7 + HBase 1.1.3 Standalone Environment Setup


     

Linux environment

One virtual machine: 192.168.100.129

Install CentOS 7 on the VM. I use NAT networking with a statically configured local IP.

     

Set the hostname

hostnamectl set-hostname centos7_master — the name is up to you; just keep it consistent with /etc/hosts and the HBase config files later on.
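For example, an /etc/hosts entry matching the IP and hostname above (a one-line sketch, assuming the NAT address from earlier):

192.168.100.129 centos7_master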

Configure passwordless SSH login

ssh-keygen -t rsa -P '' — hit Enter through the prompts, then append the generated public key to authorized_keys:

cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
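As a quick sanity check (the chmod is an extra precaution that sshd commonly insists on; it is not in the original):

chmod 600 /root/.ssh/authorized_keys
ssh localhost    # should now log in without a password prompt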

Extract zookeeper-3.4.14.tar.gz, hadoop-2.7.7.tar.gz, and hbase-1.1.3.tar.gz into /usr/local/, then rename the directories with mv, as sketched below.
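A minimal sketch of the extract-and-rename step, assuming the three tarballs sit in the current directory and are named as above:

tar -zxvf zookeeper-3.4.14.tar.gz -C /usr/local/
tar -zxvf hadoop-2.7.7.tar.gz -C /usr/local/
tar -zxvf hbase-1.1.3.tar.gz -C /usr/local/
mv /usr/local/zookeeper-3.4.14 /usr/local/zookeeper
mv /usr/local/hadoop-2.7.7 /usr/local/hadoop
mv /usr/local/hbase-1.1.3 /usr/local/hbase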

     

Configure the environment variables (the JDK itself you will need to install on your own):

# JDK environment
export JAVA_HOME=/usr/local/environment/java/jdk1.8
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

# Hadoop environment
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
export HADOOP_CLASSPATH=$HADOOP_HOME/lib    # the original pointed at /software/hadoop/lib, which contradicts the install path above

# HBase environment
export HBASE_HOME=/usr/local/hbase
export HBASE_CONF_DIR=/usr/local/hbase/conf
export PATH=$PATH:$HBASE_HOME/bin
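The original doesn't say which file these exports go in; assuming /etc/profile, reload it so they take effect in the current shell:

source /etc/profile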

     

     

In /usr/local/zookeeper/conf, copy the sample config (cp zoo_sample.cfg zoo.cfg), adjust zoo.cfg as needed, and ZooKeeper is ready; a minimal sketch follows.
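For reference, a minimal standalone zoo.cfg plus the start/verify commands (the dataDir value is my assumption, not from the original):

tickTime=2000
dataDir=/usr/local/zookeeper/data
clientPort=2181

/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/zookeeper/bin/zkServer.sh status    # should report Mode: standalone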

Go into /usr/local/hadoop/etc/hadoop; the following files need changes.

    core-site.xml

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://localhost:9000</value>
        </property>
        <!-- pin the tmp dir so the default under /tmp doesn't get wiped -->
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/hadoop/hadoopTmpDir</value>
        </property>
</configuration>

hdfs-site.xml

    <configuration>

<property>
        <name>dfs.name.dir</name>
        <value>/usr/local/hadoop/dfs/name</value>
        <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
</property>
<property>
        <name>dfs.data.dir</name>
        <value>/usr/local/hadoop/dfs/data</value>
        <description>Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
</property>
<property>
        <name>dfs.replication</name>
        <value>1</value>
</property>
<property>
        <name>dfs.permissions</name>
        <value>false</value>
        <description>Disable HDFS permission checking.</description>
</property>

    </configuration>

Add to hadoop-env.sh:

export JAVA_HOME=/usr/local/environment/java/jdk1.8
export HADOOP_HOME=/usr/local/hadoop

     

mapred-site.xml (in 2.7.7 this file ships as a template; copy it first: cp mapred-site.xml.template mapred-site.xml)

<configuration>
<property>
        <name>mapred.job.tracker</name>
        <value>localhost:54311</value>
</property>
<property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>

    </configuration>

    yarn-site.xml 

    <configuration>

<!-- Site specific YARN configuration properties -->
<property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>localhost:8025</value>
</property>
<property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>localhost:8030</value>
</property>
<property>
        <name>yarn.resourcemanager.address</name>
        <value>localhost:8050</value>
</property>

    </configuration>

Add at the top of hadoop/sbin/start-dfs.sh (note: these *_USER variables are only consulted by the Hadoop 3.x launch scripts; on 2.7.7 they are harmless no-ops, kept here for anyone reusing this guide with Hadoop 3):

    HDFS_DATANODE_USER=root

    HDFS_DATANODE_SECURE_USER=hdfs

    HDFS_NAMENODE_USER=root

    HDFS_SECONDARYNAMENODE_USER=root

Add at the top of hadoop/sbin/stop-dfs.sh:

    HDFS_DATANODE_USER=root

    HDFS_DATANODE_SECURE_USER=hdfs

    HDFS_NAMENODE_USER=root

    HDFS_SECONDARYNAMENODE_USER=root

Add at the top of hadoop/sbin/start-yarn.sh:

     

    YARN_RESOURCEMANAGER_USER=root

    HADOOP_SECURE_DN_USER=yarn

    YARN_NODEMANAGER_USER=root

Add at the top of hadoop/sbin/stop-yarn.sh:

    YARN_RESOURCEMANAGER_USER=root

    HADOOP_SECURE_DN_USER=yarn

    YARN_NODEMANAGER_USER=root

     

     

With that, Hadoop is essentially done.
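One step the original skips: format the NameNode once before the first start, then bring HDFS and YARN up and check the daemons with jps. A minimal sketch:

hdfs namenode -format    # one-time only; re-running it wipes the NameNode metadata
start-dfs.sh
start-yarn.sh
jps    # expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager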

Go into the /usr/local/hbase/conf directory.

cp /usr/local/zookeeper/conf/zoo.cfg .  — copy ZooKeeper's zoo.cfg into the current directory (the original abbreviated the path; /usr/local/zookeeper is the install location used throughout this guide)

Add to hbase-env.sh:

export JAVA_HOME=/usr/local/environment/java/jdk1.8
export HBASE_CLASSPATH=/usr/local/hbase/conf
export HBASE_PID_DIR=/var/hadoop/pids
export HBASE_MANAGES_ZK=false    # do not use HBase's bundled ZooKeeper

    hbase-site.xml

<configuration>
        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://localhost:9000/hbase</value>
        </property>
        <property>
                <name>hbase.zookeeper.quorum</name>
                <value>localhost</value>
        </property>
        <property>
                <name>hbase.zookeeper.property.clientPort</name>
                <value>2181</value>
        </property>
        <property>
                <name>hbase.tmp.dir</name>
                <value>/var/hbase/</value>
        </property>
        <property>
                <name>hbase.master</name>
                <value>localhost:60000</value>
        </property>
        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>
        <property>
                <name>hbase.zookeeper.property.dataDir</name>
                <value>/usr/local/hbase/zookeeper</value>
        </property>
        <property>
                <name>hbase.unsafe.stream.capability.enforce</name>
                <value>false</value>
        </property>
</configuration>
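With the config in place, start HBase and verify (standard commands; the expected jps output assumes HBASE_MANAGES_ZK=false with the external ZooKeeper already running):

start-hbase.sh
jps            # HMaster and HRegionServer should appear alongside the Hadoop daemons
hbase shell    # run `status` inside the shell to confirm the master is up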

Let's test it locally with a simple Java client (User below is a plain POJO with String fields; it isn't shown in the original).
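The test class needs the HBase client library on the classpath; if you build with Maven (my assumption — the original doesn't say), the matching dependency would be:

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.1.3</version>
</dependency>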

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static Connection initHbase() throws Exception {
        Configuration configuration = HBaseConfiguration.create();
        configuration.set("hbase.zookeeper.property.clientPort", "2181");
        // use the VM address from the start of this guide (the original had 192.168.110.129, a typo)
        configuration.set("hbase.zookeeper.quorum", "192.168.100.129");
        configuration.set("hbase.master", "192.168.100.129:60000");
        return ConnectionFactory.createConnection(configuration);
    }

    public static void createTable(String tableName, String[] cols) throws Exception {
        TableName tableName1 = TableName.valueOf(tableName);
        Admin admin = initHbase().getAdmin();
        if (!admin.tableExists(tableName1)) {
            HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName1);
            for (String col : cols) {
                hTableDescriptor.addFamily(new HColumnDescriptor(col));
            }
            // the original built the descriptor but never actually created the table
            admin.createTable(hTableDescriptor);
        }
    }

    public static void insertData(String tableName, User user) throws Exception {
        Put put = new Put(("user" + user.getId()).getBytes());
        // args: column family, qualifier, value
        put.addColumn("information".getBytes(), "username".getBytes(), user.getUsername().getBytes());
        put.addColumn("information".getBytes(), "age".getBytes(), user.getAge().getBytes());
        put.addColumn("information".getBytes(), "gender".getBytes(), user.getGender().getBytes());
        put.addColumn("information".getBytes(), "phone".getBytes(), user.getPhone().getBytes());
        put.addColumn("information".getBytes(), "email".getBytes(), user.getEmail().getBytes());
        Table table = initHbase().getTable(TableName.valueOf(tableName));
        table.put(put);
    }

    public static List<User> getAllData(String tableName) throws Exception {
        List<User> list = new ArrayList<User>();
        try {
            Table table = initHbase().getTable(TableName.valueOf(tableName));
            ResultScanner results = table.getScanner(new Scan());
            for (Result result : results) {
                System.out.println("row key: " + new String(result.getRow()));
                User user = new User();
                for (Cell cell : result.rawCells()) {
                    String row = Bytes.toString(cell.getRowArray(), cell.getRowOffset(), cell.getRowLength());
                    String colName = Bytes.toString(cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength());
                    String value = Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength());
                    user.setId(row);
                    if (colName.equals("username")) { user.setUsername(value); }
                    if (colName.equals("age")) { user.setAge(value); }
                    if (colName.equals("gender")) { user.setGender(value); }
                    if (colName.equals("phone")) { user.setPhone(value); }
                    if (colName.equals("email")) { user.setEmail(value); }
                }
                list.add(user);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return list;
    }

    public static void main(String[] args) {
        try {
            createTable("user_table", new String[] { "information", "contact" });
            User user = new User("001", "xiaoming", "123456", "man", "20", "13355550021", "1232821@csdn.com");
            insertData("user_table", user);
            User user2 = new User("002", "xiaohong", "654321", "female", "18", "18757912212", "214214@csdn.com");
            insertData("user_table", user2);
            List<User> list = getAllData("user_table");
            System.out.println("-------------------- after inserting two rows --------------------");
            for (User user3 : list) {
                System.out.println(user3.toString());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

     

Reference: https://blog.csdn.net/qq_39680564/article/details/90673700

If you hit org.apache.hadoop.hbase.client.RetriesExhaustedException, see: https://blog.csdn.net/chen798213337/article/details/51957693

     
