Kubeadm is the officially recommended tool for deploying Kubernetes; it lowers the barrier to deployment and speeds the process up. It provides two commands, kubeadm init and kubeadm join, as the best practice for quickly creating a Kubernetes cluster. kubeadm performs the actions necessary to bring up a minimal viable cluster. It is deliberately designed to care only about bootstrapping the cluster, not about preparing the node environment. Likewise, installing the various optional add-ons is outside its scope.
OS: CentOS 7.6 x86_64
Container runtime: Docker-ce 19.03
Kubernetes: 1.17.0
IP address         Hostname   Role     CPU    Memory
192.168.100.150    master     master   >=2c   >=2G
192.168.100.156    node01     node     >=2c   >=2G
192.168.100.157    node02     node     >=2c   >=2G
1. Edit /etc/hosts on the master and every node so that hostnames can be resolved
192.168.100.150 master master
192.168.100.156 node01 node01
192.168.100.157 node02 node02
2. Synchronize the system time on all hosts
$ systemctl enable chronyd.service
$ systemctl status chronyd.service
3. Stop the firewall and disable SELinux
$ systemctl stop firewalld && systemctl disable firewalld
$ setenforce 0
$ vim /etc/selinux/config
SELINUX=disabled
4. Disable swap
$ swapoff -a
$ sed -i 's/.*swap.*/#&/' /etc/fstab
Install Docker (following the official installation guide)
$ wget https://download.docker.com/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum install -y docker-ce
$ systemctl start docker && systemctl enable docker
Configure a registry mirror to speed up Docker image downloads
$ vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
$ systemctl daemon-reload && systemctl restart docker
Pass bridged IPv4 traffic to the iptables chains
$ cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
Because Google's repositories cannot be reached directly from mainland China, configure the Alibaba Cloud yum mirror instead
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
The kubelet communicates with the rest of the cluster and manages the lifecycle of the Pods and containers on its node.
Tip: if yum reports that packages cannot be found during installation, run yum makecache to refresh the yum metadata.
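The install command itself did not survive in this copy of the text. A typical invocation for the component versions used here (package versions pinned to 1.17.0; adjust as needed) would be:

$ yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0   # run on the master and on every node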
$ systemctl daemon-reload
$ systemctl enable kubelet   # enable kubelet at boot on the master and on every node
The following kubeadm init options are used when bootstrapping the control plane (a sample invocation follows the list):
--kubernetes-version: the version of the Kubernetes components to run; it must match the kubelet version
--pod-network-cidr: the Pod network address range, in CIDR notation; with the flannel network plugin the conventional value is 10.244.0.0/16
--service-cidr: the Service network address range, in CIDR notation; the default is 10.96.0.0/12
--apiserver-advertise-address: the IP address the API server advertises to the other components, normally the master node's IP; 0.0.0.0 means one of the node's available addresses is chosen automatically
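The exact kubeadm init command is not preserved in this copy; a sketch that combines the flags above with the master IP from the host table (adjust to your environment; in mainland China an extra --image-repository flag pointing at a reachable registry is often needed as well) would be:

$ kubeadm init \
    --kubernetes-version=v1.17.0 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.96.0.0/12 \
    --apiserver-advertise-address=192.168.100.150

On success, kubeadm init prints the kubectl configuration steps for the current user and a kubeadm join command to run later on the worker nodes.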
Using systemd as Docker's cgroup driver keeps nodes more stable when resources are tight, so change the Docker cgroup driver to systemd on every node.
# Create or edit /etc/docker/daemon.json:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# Restart Docker:
$ systemctl restart docker
# Verify:
$ docker info | grep Cgroup
Cgroup Driver: systemd
If the STATUS above shows "Healthy", the component is healthy; otherwise investigate the error. If the problem cannot be resolved, the cluster can be reset with "kubeadm reset" and initialized again.
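The component-status output that the "Healthy" remark refers to did not survive in this copy of the text. The check it describes is typically:

[root@master ~]# kubectl get cs   # componentstatuses: scheduler, controller-manager and etcd should all report Healthy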
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   10m   v1.17.0
At this point the master is "NotReady" because no network plugin has been installed in the cluster yet; it will become Ready once the network is deployed. Next, deploy flannel.
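The flannel deployment command itself is missing from this copy; the usual way to install it at the time was to apply the upstream manifest (URL as published in flannel's repository back then):

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml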
Now take another look at the cluster state.
$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   17m   v1.17.0
The cluster is now in the Ready state, and the worker nodes can be joined to it.
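The kubeadm join command that was run on node01 and node02 is not reproduced here; it is the one printed at the end of kubeadm init and has the general form below (the token and hash are placeholders, use the values from your own init output):

$ kubeadm join 192.168.100.150:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>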
After the join commands finish, wait a moment and check the cluster state from the master node. At this point a minimal cluster containing the core components is up and running!
$ kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   34m     v1.17.0
node01   Ready    <none>   6m14s   v1.17.0
node02   Ready    <none>   6m8s    v1.17.0
Deploy the Kubernetes Dashboard
Download the dashboard YAML manifest
$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Modify a few parts of the manifest
# Replace the k8s.gcr.io registry prefix with the loveone mirror so the image can be pulled:
$ sed -i 's/k8s.gcr.io/loveone/g' kubernetes-dashboard.yaml
# Expose the dashboard Service as a NodePort on port 30001:
$ sed -i '/targetPort:/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' kubernetes-dashboard.yaml
Deploy the dashboard
[root@master ~]# kubectl create -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
After creation, check that the services are running.
[root@master ~]# kubectl get deployment kubernetes-dashboard -n kube-system
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1/1     1            1           89s
[root@master ~]# kubectl get services -n kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   61m
kubernetes-dashboard   NodePort    10.102.234.209   <none>        443:30001/TCP            16m
[root@master ~]# netstat -ntlp | grep 30001
tcp6       0      0 :::30001                :::*                    LISTEN      17306/kube-proxy
Open the Dashboard in Firefox at https://192.168.100.156:30001
Other browsers such as Chrome will raise a security warning and refuse to connect!
Get the token used to log in to the Dashboard
[root@master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-9hglw
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 30efdd50-92bd-11e9-91e3-000c296bd9bc
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tOWhnbHciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzBlZmRkNTAtOTJiZC0xMWU5LTkxZTMtMDAwYzI5NmJkOWJjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.Bg9FOIr6RkepjCFav8tbkbTALGEX7bZJMNOYMOrYhFPhnhCs1RSxop7pCGBtdjug_Zpsb9UJ1WNWTsCInUlMYtSHkbaqVLZQEdIgD6jGb177CxIZBcCuxmxxQm0JMJdYjc6Y_1wYSTJGHtmWOHa70pUEcKo9I0LonTUfHCZh5PgS3JrwiTrsqe1RGyz3Jz4p9EIVPfcxmKCowSuapinOTezAWK2XAUhk2h5utXgag6RRnrPcHtlncZzW5fMTSfdAZv5xlaI64AM__qiwOTqyK-14xkda5nbk9DGhN5UwhkHzyvU6ApGT7A9Tr3j3QkMov9gEyVIDbSbBaSj8xBt36Q
dns-test-busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox:1.28   # note: the busybox image version is a known gotcha, stick with 1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
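The original does not show how this pod is used; a typical way to exercise it for a cluster DNS check (hypothetical follow-up commands, matching the file name dns-test-busybox.yaml) is:

$ kubectl apply -f dns-test-busybox.yaml
$ kubectl exec -it busybox -- nslookup kubernetes.default   # should resolve through the cluster DNS service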
nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  ports:
  - port: 88
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
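Not shown in the original: the manifest above would typically be applied and verified like this (hypothetical follow-up commands):

$ kubectl apply -f nginx-deployment.yaml
$ kubectl get pods -l app=nginx        # expect 3 nginx replicas
$ kubectl get service nginx-service    # note the NodePort mapped to service port 88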