A typical Kubernetes cluster consists of multiple worker nodes, a control plane (the Master), and a cluster state store (etcd). The Master is responsible for managing the whole cluster: it exposes the cluster's management interface and monitors and orchestrates the worker nodes. The nodes run containers in the form of Pods, so every node must be provisioned in advance with all of the services and resources that running containers depends on; each Node provides the runtime environment for containers and is managed by the Master.
Master node: 172.16.2.200
Node1 node:  172.16.2.101
Node2 node:  172.16.2.202
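The three addresses above will be written into /etc/hosts in the initialization steps below. As an illustrative sketch (not part of the original steps), a small helper makes that addition idempotent, so re-running the setup does not create duplicate entries. `HOSTS_FILE` defaults to a local file here to keep the sketch side-effect free; point it at /etc/hosts on a real node.

```shell
# Illustrative helper: append a host record only if the hostname is not
# already present, so the setup can be re-run safely.
HOSTS_FILE="${HOSTS_FILE:-./hosts.example}"

add_host() {
    ip="$1"; name="$2"
    # grep -w matches the whole word, so "node1" does not match "node10"
    grep -qw "$name" "$HOSTS_FILE" 2>/dev/null || echo "$ip $name" >> "$HOSTS_FILE"
}

add_host 172.16.2.200 master
add_host 172.16.2.101 node1
add_host 172.16.2.202 node2
```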
All of the following system initialization steps must be performed on all three nodes.
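Since every step in this section has to run on all three nodes, a small fan-out helper can save typing. This is an illustrative sketch only: it assumes the hostnames from the table above and that password-less root SSH between the nodes is already configured; `DRY_RUN=1` just prints what would be executed.

```shell
# Illustrative sketch: run the same command on every node from one shell.
# Assumes root SSH access to master, node1 and node2 is already set up.
run_on_all() {
    for node in master node1 node2; do
        if [ "${DRY_RUN:-0}" = "1" ]; then
            # dry-run mode: only print the command that would be executed
            echo "ssh $node -- $*"
        else
            ssh "$node" "$@"
        fi
    done
}

# Example: preview what would run, without touching any node
# DRY_RUN=1 run_on_all swapoff -a
```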
1) Synchronize time
yum install -y ntpdate
ntpdate ntp1.aliyun.com

2) Add host records for all nodes
cat >> /etc/hosts << EOF
172.16.2.200 master
172.16.2.101 node1
172.16.2.202 node2
EOF

3) Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

4) Disable all swap devices
swapoff -a
sed -i "/swap/d" /etc/fstab

5) Enable the kernel modules for kube-proxy's IPVS mode
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

6) Set bridge-nf-call-iptables to 1 so bridged traffic passes through iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

Note: values written under /proc do not survive a reboot; to make this permanent, add net.bridge.bridge-nf-call-iptables = 1 to a file under /etc/sysctl.d/.

Next, install Docker on all three nodes.

1) Install the required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2

2) Add the Docker repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3) Install Docker
yum makecache fast
yum -y install docker-ce

4) Check the Docker version
docker version

5) Enable and start Docker
systemctl enable docker && systemctl start docker

Next, install the Kubernetes components on all three nodes.

1) Add the Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2) Install the Kubernetes tools
yum install -y kubelet kubeadm kubectl

Note: this installs the latest version available in the repository. To match the v1.17.3 images used below, pin the version explicitly, e.g. yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3.

3) Enable and start the kubelet service
systemctl enable kubelet && systemctl start kubelet

Note: kubelet will restart in a crash loop until kubeadm init (or kubeadm join) has run; this is expected.

The following steps are performed on the Master node only.

1) Pull the control-plane images
kubeadm config images pull
kubeadm config images list

Note: kubeadm pulls its images from k8s.gcr.io by default. If your local network cannot reach it, pull the images from a mirror as shown below.
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.17.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.17.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.17.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.17.3
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.4.3-0
docker pull coredns/coredns:1.6.5

Then retag the mirrored images with the names kubeadm expects (kubeadm v1.17 uses image names without the -amd64 suffix, as shown by kubeadm config images list):

docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.17.3 k8s.gcr.io/kube-proxy:v1.17.3
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.17.3 k8s.gcr.io/kube-scheduler:v1.17.3
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.17.3 k8s.gcr.io/kube-apiserver:v1.17.3
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.17.3 k8s.gcr.io/kube-controller-manager:v1.17.3
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.6.5 k8s.gcr.io/coredns:1.6.5

2) Initialize the cluster
kubeadm init --kubernetes-version=v1.17.3 --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 --apiserver-advertise-address=0.0.0.0 \
  --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU

Note: record the kubeadm join command printed at the end of the init output; it contains the token the other nodes need to join the cluster.
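The same options can also be expressed as a kubeadm configuration file, which is easier to keep under version control. A minimal sketch, assuming the values used above (the file name kubeadm-config.yaml is arbitrary; the kubeadm.k8s.io/v1beta2 API matches kubeadm v1.17):

```shell
# Sketch: write the init options above into a kubeadm config file.
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.3
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF

# Then initialize with (preflight flags may still be passed alongside --config):
# kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=Swap,NumCPU
```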
3) Create the kubectl configuration file
mkdir /root/.kube
cp /etc/kubernetes/admin.conf /root/.kube/config

4) Deploy the network plugin
curl -O https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml

Note: flannel's default Network in kube-flannel.yml is 10.244.0.0/16, which is why that value was passed to --pod-network-cidr during init.

The following steps are performed on the worker nodes.

1) Join the cluster with the token
kubeadm join 172.16.2.200:6443 --token kwphmp.1gioryavnirpv5cv \
    --discovery-token-ca-cert-hash sha256:8ea9606300de967ff51cc2b0b7798d5c7b94a4e9d7dc1daae8bbabf487768f8c

If the token has expired or been lost, a new one can be generated with the following command
kubeadm token generate

Note: kubeadm token generate only prints a new token value; it does not register it with the cluster. To create a usable token and print the complete join command in one step, run kubeadm token create --print-join-command on the Master.

2) Copy the admin kubeconfig to the nodes
scp /etc/kubernetes/admin.conf node1:/root/.kube/config

(Run this from the Master; repeat for node2 if kubectl access is needed there as well.)

3) Check the cluster status
kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   27h   v1.18.5
node1    Ready    <none>   26h   v1.18.5
node2    Ready    <none>   26h   v1.18.5

kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-h7rjz         1/1     Running   0          27h
coredns-66bff467f8-jvqf8         1/1     Running   0          27h
etcd-master                      1/1     Running   1          27h
kube-apiserver-master            1/1     Running   1          27h
kube-controller-manager-master   1/1     Running   1          27h
kube-flannel-ds-amd64-26qv9      1/1     Running   0          26h
kube-flannel-ds-amd64-fzxxc      1/1     Running   0          26h
kube-flannel-ds-amd64-twv6z      1/1     Running   0          26h
kube-proxy-255z5                 1/1     Running   1          27h
kube-proxy-8mh7c                 1/1     Running   0          26h
kube-proxy-gsbn6                 1/1     Running   0          26h
kube-scheduler-master            1/1     Running   1          27h
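The `kubectl get nodes` output above can also be checked automatically. A small sketch (pure text parsing, so it works on live or saved output; the function name is illustrative):

```shell
# Illustrative check: read `kubectl get nodes` output on stdin and print
# the name of every node whose STATUS column is not exactly "Ready".
not_ready_nodes() {
    # NR > 1 skips the header line; field 2 is the STATUS column
    awk 'NR > 1 && $2 != "Ready" { print $1 }'
}

# Usage on a live cluster:
# kubectl get nodes | not_ready_nodes
```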