Hadoop / Hive / Sqoop Installation Tutorial

Tech 2022-08-01

Contents

Prerequisites
1 Disable the Firewall & Clear Rules
2 Installation Packages
3 JDK Installation
    3.1 Use /usr/java as the install directory
    3.2 Configure environment variables
4 Installation
Configure personal environment variables
5 Deployment
    5.1 Set up local SSH trust
    5.2 Configuration files
        5.2.1 core-site
        5.2.2 SNN
        5.2.3 DN
    5.3 Deploying the configuration files
        5.3.1 slaves
        5.3.2 mapred
        5.3.3 yarn
    5.4 Startup
        hdfs
        yarn
Testing with the pi example
HDFS startup bug

Prerequisites

1 Disable the Firewall & Clear Rules

# stop the firewall
systemctl stop firewalld

# disable it at boot
systemctl disable firewalld

# flush any remaining iptables rules
iptables -F

2 Installation Packages

Download from:

    http://archive.cloudera.com/cdh5/cdh/5/
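All the packages this tutorial uses come from that archive. A sketch of fetching them follows; the hadoop tarball name matches the examples jar used later in this tutorial, while the hive and sqoop file names are assumptions, so check the archive listing before downloading:

```shell
# Build the download URLs for the CDH tarballs; swap `echo` for
# `wget` on each URL to actually download into the current directory.
BASE=http://archive.cloudera.com/cdh5/cdh/5
for pkg in hadoop-2.6.0-cdh5.16.2 hive-1.1.0-cdh5.16.2 sqoop-1.4.6-cdh5.16.2
do
    echo "would fetch: ${BASE}/${pkg}.tar.gz"
done
```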

3 JDK Installation

3.1 Use /usr/java as the install directory

    [root@localhost SOFT]# mkdir /usr/java/

Extract the tarball:

    [root@localhost SOFT]# tar -xzvf jdk-8u181-linux-x64.tar.gz -C /usr/java/

3.2 Configure environment variables

# JAVA_HOME
echo 'export JAVA_HOME=/usr/java/jdk1.8.0_181' >> /etc/profile

# PATH (single quotes keep ${PATH} unexpanded until the profile is sourced)
echo 'export PATH=/usr/java/jdk1.8.0_181/bin:${PATH}' >> /etc/profile

# reload
source /etc/profile

Verify:

    java -version
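One subtlety when appending exports with echo: double quotes expand ${PATH} immediately, baking the current value into /etc/profile, while single quotes write the literal text so expansion happens each time the profile is sourced. A quick demonstration:

```shell
# Single quotes: the literal text ${PATH} is written out and only
# expands when /etc/profile is sourced later -- usually what you want.
line=$(echo 'export PATH=/usr/java/jdk1.8.0_181/bin:${PATH}')
echo "$line"
# -> export PATH=/usr/java/jdk1.8.0_181/bin:${PATH}
```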

4 Installation

Create a user (useradd ifengs) and check its home directory:

[root@localhost java]# ll /home/ifengs
total 0
[root@localhost java]#

Switch to the ifengs user and create the working directories:

[ifengs@localhost ~]$ mkdir app software sourcecode log data tmp lib
[ifengs@localhost ~]$ ll
total 0
drwxrwxr-x. 2 ifengs ifengs 6 Jul  2 18:26 app
drwxrwxr-x. 2 ifengs ifengs 6 Jul  2 18:26 data
drwxrwxr-x. 2 ifengs ifengs 6 Jul  2 18:26 lib
drwxrwxr-x. 2 ifengs ifengs 6 Jul  2 18:26 log
drwxrwxr-x. 2 ifengs ifengs 6 Jul  2 18:26 software
drwxrwxr-x. 2 ifengs ifengs 6 Jul  2 18:26 sourcecode
drwxrwxr-x. 2 ifengs ifengs 6 Jul  2 18:26 tmp
[ifengs@localhost ~]$

Directory purposes:

app         installed software
data        extra data
lib         extra third-party jars (JDBC drivers, etc.)
log         log files
sourcecode  source code
tmp         temporary files

Move the installation packages into software:

    mv hadoop... /home/ifengs/software/

Change the owner of the packages:

chown -R ifengs:ifengs /home/ifengs/software/*

Extract the packages:

tar -xzvf hadoop-2.6... -C ~/app/
# ...and likewise for the other packages

Create symlinks:

ln -s hadoop-2.6.0... hadoop
# ...and likewise for the others

Configure personal environment variables

Edit .bashrc:

export HADOOP_HOME=/home/zuoll1/app/hadoop
export HIVE_HOME=/home/zuoll1/app/hive
export PATH=${HADOOP_HOME}/bin:${HIVE_HOME}/bin:$PATH

Then reload it:

source .bashrc
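A quick way to confirm the new PATH entries took effect (paths follow this tutorial's layout; adjust to yours):

```shell
# After sourcing .bashrc, the hadoop bin dir should be the first
# PATH entry; ${PATH%%:*} strips everything from the first colon on.
export HADOOP_HOME=/home/zuoll1/app/hadoop
export PATH=${HADOOP_HOME}/bin:$PATH
echo "${PATH%%:*}"
# -> /home/zuoll1/app/hadoop/bin
```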

5 Deployment

5.1 Set up local SSH trust

$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys

Test it. First map the machine's internal IP to the hostname in /etc/hosts, then:

    ssh zuoll1

It should log you in without prompting for a password.
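The /etc/hosts entry that `ssh zuoll1` relies on looks like the line below; the IP is hypothetical, so substitute your machine's internal address. A tiny check that the line actually maps the hostname:

```shell
# Hypothetical /etc/hosts entry for this tutorial's hostname.
HOSTS_LINE='192.168.56.101 zuoll1'
# Confirm the line ends with the hostname before relying on 'ssh zuoll1'.
echo "$HOSTS_LINE" | grep -q '[[:space:]]zuoll1$' && echo mapping-ok
```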

5.2 Configuration files

    5.2.1 core-site

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://zuoll1:9000</value>
  </property>
</configuration>

    5.2.2 SNN

A Hadoop config file has exactly one <configuration> root, so the three properties belong in a single block:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>zuoll1:9868</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.https-address</name>
    <value>zuoll1:9869</value>
  </property>
</configuration>

    5.2.3 DN

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://zuoll1:9000</value>
  </property>
</configuration>

5.3 Deploying the configuration files

    cd ~/app/hadoop/etc/hadoop

5.3.1 slaves

Configure the slaves file:

vi slaves
# clear the file, then enter:
zuoll1

    core-site

vi core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://ifengs:9000</value>
</property>

Also set hadoop.tmp.dir:

vi core-site.xml

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/zuoll1/tmp</value>
</property>

By default hadoop.tmp.dir points at the system /tmp, but the system /tmp is cleaned out periodically (roughly every two weeks), which would wipe the HDFS data stored there.

hdfs-site (replication and the SNN addresses):

vi hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>ifengs:9868</value>
</property>
<property>
  <name>dfs.namenode.secondary.https-address</name>
  <value>ifengs:9869</value>
</property>

    5.3.2 mapred

~/app/hadoop/etc/hadoop/mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

5.3.3 yarn

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

Port 8088, the default ResourceManager web UI port, is a frequent target of cryptomining attacks, so move it elsewhere:

  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>zuoll1:18088</value>
  </property>
</configuration>

5.4 Startup

    hdfs

Format HDFS (only on first setup; re-formatting generates a new namenode clusterID and strands any existing DataNode data):

    hdfs namenode -format

Start HDFS:

    start-dfs.sh

Even with JAVA_HOME set in /etc/profile, startup still fails.

This is a notorious Apache Hadoop bug: the startup scripts launch daemons over ssh and do not pick up JAVA_HOME from the login environment.

Fix it by setting JAVA_HOME explicitly in the shipped config file:

vi ~/app/hadoop/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_181

Start DFS again:

    start-dfs.sh

List files in HDFS:

    hdfs dfs -ls /

Create a directory in HDFS:

    hdfs dfs -mkdir /zuoll1

    yarn

    start-yarn.sh

Testing with the pi example

    [ifeng@localhost hadoop]$ hadoop jar ./share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.16.2.jar pi 10 100
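The pi example estimates π by sampling points in the unit square: each of the 10 map tasks draws 100 points and counts how many land inside the quarter circle (Hadoop's PiEstimator uses a quasi-Monte Carlo Halton sequence; the sketch below uses plain pseudo-random points, so it is the idea rather than the exact algorithm):

```shell
# Monte Carlo pi: for random points (x, y) in [0,1)^2,
# inside/total approximates pi/4.
awk 'BEGIN {
    srand(1); n = 100000; inside = 0
    for (i = 0; i < n; i++) {
        x = rand(); y = rand()
        if (x * x + y * y <= 1) inside++
    }
    printf "pi is roughly %.2f\n", 4 * inside / n
}'
```

With 100,000 samples the estimate typically lands within a few hundredths of 3.14, which is why the Hadoop job's answer with only 1,000 samples is visibly rough.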

HDFS startup bug

    1>&2
