Deploying a Kubernetes 1.21.0 cluster with kubeadm (a beginner's guide)

1. Basic environmental information

  • CentOS 7.9

  • kubernetes 1.21

  • network: Calico

  • Container runtime: Docker

1.1 Host hardware configuration instructions

CPU  Memory  IP              Role           Hostname
2C   4G      192.168.192.11  master         master01
2C   4G      192.168.192.12  worker (node)  worker01
2C   4G      192.168.192.13  worker (node)  worker02

1.2 Host configuration (if using local VMs, be sure to set static IP addresses)

Master node, named master01

# hostnamectl set-hostname master01

worker01 node, named worker01

# hostnamectl set-hostname worker01

worker02 node, named worker02

# hostnamectl set-hostname worker02

1.3 Host name and IP address resolution

# cat >> /etc/hosts <<EOF
192.168.192.11 master01
192.168.192.12 worker01
192.168.192.13 worker02
EOF
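The three entries can also be generated from a base IP, which helps when more workers are added later (a sketch; BASE, the starting octet 11, and the hostnames are the values from the table in 1.1):

```shell
# Generate the /etc/hosts entries for the nodes in section 1.1.
BASE="192.168.192"
i=11
for name in master01 worker01 worker02; do
  echo "${BASE}.${i} ${name}"
  i=$((i + 1))
done
# Append the printed lines to /etc/hosts on every node.
```

Redirect the output with `>> /etc/hosts` to apply it, exactly as the heredoc above does.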

1.4 Firewall Configuration

Disable the existing firewalld firewall

# systemctl disable firewalld --now
# firewall-cmd --state

1.5 SELinux configuration

//Temporarily disable
# setenforce 0
//Permanently disable
# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

1.6 Time synchronization configuration

# yum install -y chrony
# systemctl enable chronyd --now
# chronyc sources
//Force an immediate synchronization
# chronyc -a makestep

1.7 Upgrade the operating system kernel

# uname -r
3.10.0-957.el7.x86_64
//Import the elrepo GPG key
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
//Install the elrepo YUM repository
# yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
//Install the kernel-ml version; ml is the mainline (latest) kernel, lt is the long-term maintenance kernel
# yum --enablerepo="elrepo-kernel" -y install kernel-ml.x86_64
//Set the grub2 default boot entry to 0 (the newly installed kernel)
# grub2-set-default 0
//Regenerate the grub2 boot configuration
# grub2-mkconfig -o /boot/grub2/grub.cfg
//Reboot so that the upgraded kernel takes effect
# reboot
//After restarting, verify that the running kernel is the updated version
# uname -r
6.3.3-1.el7.elrepo.x86_64

1.8 Configure kernel forwarding and bridge filtering

//Load the br_netfilter module
# modprobe br_netfilter
//Add bridge filtering and kernel forwarding configuration files
# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
//Apply the settings
# sysctl --system
//Persist IP forwarding (net.ipv4.ip_forward is already set in k8s.conf above; this also records it in /etc/sysctl.conf)
# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
//Apply
# sysctl -p
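To confirm forwarding actually took effect, the live value can be read straight from /proc (a quick check, equivalent to `sysctl net.ipv4.ip_forward`):

```shell
# Prints 1 when IP forwarding is enabled; 0 means the sysctl did not take effect.
cat /proc/sys/net/ipv4/ip_forward
```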

1.9 Install ipset and ipvsadm

Install ipset and ipvsadm

//If you use a local ISO repository, remount it after the reboot in section 1.7, otherwise yum will report an error
# yum -y install ipset ipvsadm

Configure the ipvs module loading script and add the modules that need to be loaded

# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

Make the script executable, run it, and check that the modules are loaded

# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
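The module list in the script can equivalently be driven by a loop; the sketch below only prints the modprobe commands (running them requires root), using the same module names as the heredoc above:

```shell
# Print the modprobe command for each ipvs-related module; `--` ends option
# parsing so a module name can never be mistaken for an option.
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
  echo "modprobe -- ${mod}"
done
```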

1.10 Close SWAP partition

//Temporary
# swapoff -a
//Permanent (comments out the swap line in /etc/fstab)
# sed -ri 's/.*swap.*/#&/' /etc/fstab
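Whether swap is really gone can be confirmed from /proc/meminfo (a verification sketch; after `swapoff -a`, SwapTotal should read 0 kB):

```shell
# "SwapTotal: 0 kB" indicates swap is fully disabled.
grep SwapTotal /proc/meminfo
```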

1.11 Docker preparation

Use the Alibaba Cloud open source software mirror site.

# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

View installable versions

# yum list docker-ce.x86_64 --showduplicates | sort -r

Install container-selinux

# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# yum install epel-release -y
# yum makecache
# yum install container-selinux -y

Install the specified version and enable it to start on boot

# yum -y install --setopt=obsoletes=0 docker-ce-20.10.9-3.el7
# systemctl enable docker; systemctl start docker

Modify the cgroup driver

Add the following content to /etc/docker/daemon.json:
# cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
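A malformed daemon.json prevents the Docker daemon from starting, so it is worth validating the JSON before restarting. This sketch checks the literal content written above (python3 is assumed to be available; on CentOS 7, `yum install python3`):

```shell
# Validate the daemon.json content; json.tool exits non-zero on a syntax error.
echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' | python3 -m json.tool
```

Against the real file, run `python3 -m json.tool /etc/docker/daemon.json` instead.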

Restart docker

# systemctl restart docker

2. Kubernetes 1.21.0 cluster deployment

2.1 Cluster software and version description

Description            kubeadm                              kubelet                                                          kubectl
Version                1.21.0                               1.21.0                                                           1.21.0
Installation location  All hosts in the cluster             All hosts in the cluster                                         All hosts in the cluster
Function               Initializes and manages the cluster  Receives api-server instructions and manages the pod life cycle  Command-line tool for managing cluster applications

2.2 kubernetes YUM source preparation

Configure the kubernetes repository (Aliyun YUM source)

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * elrepo: mirrors.tuna.tsinghua.edu.cn
 * epel: mirrors.bfsu.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
repo id repo name status
base/7/x86_64 CentOS-7 - Base - mirrors.aliyun.com 10,072
docker-ce-stable/7/x86_64 Docker CE Stable - x86_64 241
elrepo ELRepo.org Community Enterprise Linux Repository - el7 147
epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 13,785
extras/7/x86_64 CentOS-7 - Extras - mirrors.aliyun.com 515
kubernetes kubernetes 968
!media system 4,070
updates/7/x86_64 CentOS-7 - Updates - mirrors.aliyun.com 4,954
repolist: 34,752

Note: the kubernetes repository uses the EL7 packages, which also work on EL8.

2.3 Cluster software installation

//List the available versions
# yum list kubeadm.x86_64 --showduplicates | sort -r
# yum list kubelet.x86_64 --showduplicates | sort -r
# yum list kubectl.x86_64 --showduplicates | sort -r
//Install the specified version
# yum -y install --setopt=obsoletes=0 kubeadm-1.21.0-0 kubelet-1.21.0-0 kubectl-1.21.0-0

2.4 Configure kubelet

To keep the cgroup driver used by kubelet consistent with the one configured for docker, it is recommended to modify the following file

# cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF
//Enable kubelet to start on boot. Since no configuration file has been generated yet, it will only actually start after cluster initialization
# systemctl status kubelet
# systemctl enable kubelet --now

2.5 Cluster mirror preparation

# kubeadm config images list --kubernetes-version=v1.21.0
k8s.gcr.io/kube-apiserver:v1.21.0
k8s.gcr.io/kube-controller-manager:v1.21.0
k8s.gcr.io/kube-scheduler:v1.21.0
k8s.gcr.io/kube-proxy:v1.21.0
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0

Write a script to download the images (k8s.gcr.io is not directly reachable, so pull the equivalents from the Alibaba Cloud image registry instead)

# cat > image_download.sh <<EOF
#!/bin/bash
images_list='
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.0
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0'
for i in \$images_list
do
docker pull \$i
done
docker save -o k8s-1-21-0.tar \$images_list
EOF

Execute:

# chmod +x image_download.sh && sh image_download.sh

If it fails:

# chmod +x image_download.sh && sh image_download.sh
Error response from daemon: Get "https://k8s.gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Reference: https://www.cnblogs.com/yaok430/p/13806650.html (re-tag the images)
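The re-tagging referenced above can be sketched as a loop: strip the Alibaba Cloud prefix and re-tag under k8s.gcr.io. This is only needed if kubeadm is run without --image-repository; only two images are shown here (extend the list to cover all of them), and the loop just prints the docker tag commands rather than running them:

```shell
# Print the docker tag commands that map the mirror names back to k8s.gcr.io.
for src in \
  registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.0 \
  registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1; do
  dst="k8s.gcr.io/${src##*/}"   # keep only the final name:tag component
  echo "docker tag ${src} ${dst}"
done
```

Pipe the output through `sh` (or drop the echo) to apply the tags.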

The above operations need to be run on all nodes (master + node).

2.6 Cluster initialization (master execution)

# kubeadm init --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.18.0.0/16 --service-cidr=10.28.0.0/16 --apiserver-advertise-address=192.168.192.11
//192.168.192.11 is the master host's IP address
//Error reported:
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING Hostname]: hostname "master01" could not be reached
        [WARNING Hostname]: hostname "master01": lookup master01 on 192.168.192.254:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0: output: Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

//Solution:
//Re-tag the image so that the name kubeadm expects exists locally
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0

Execute the initialization command again; output like the following indicates success. Save it, as it is needed for subsequent operations.

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.192.11:6443 --token 7s3jkc.6jx5gq6jfkz5kyuz \
        --discovery-token-ca-cert-hash sha256:4e28c36ea2ed05ec8c83e4aca05fb370c188cf49ed533c30f277d3f999fa623e

2.7 Preparing the kubectl client configuration file

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 NotReady control-plane,master 2m7s v1.21.0
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-57d4cbf879-hqs5v 0/1 Pending 0 2m10s
coredns-57d4cbf879-n8l5b 0/1 Pending 0 2m10s
etcd-master01 1/1 Running 0 2m24s
kube-apiserver-master01 1/1 Running 0 2m24s
kube-controller-manager-master01 1/1 Running 0 2m24s
kube-proxy-qft5v 1/1 Running 0 2m10s
kube-scheduler-master01 1/1 Running 0 2m24s

2.8 Cluster network preparation

Use calico to deploy cluster network
Installation reference URL: https://projectcalico.docs.tigera.io/about/about-calico

Install calico network

# curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O

After downloading, modify the Pod network definition (CALICO_IPV4POOL_CIDR) so that it matches the --pod-network-cidr specified in the kubeadm init above

# vi calico.yaml
...
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"
# Change to:
- name: CALICO_IPV4POOL_CIDR
  value: "10.18.0.0/16"
...
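The same edit can be made non-interactively with sed. The demonstration below runs the substitution on the two relevant lines inline (an assumption: the stock v3.25.0 manifest comments them exactly as shown, so verify against your downloaded calico.yaml before running `sed -i` on it):

```shell
# Uncomment CALICO_IPV4POOL_CIDR and set it to the pod network CIDR.
printf '%s\n' '# - name: CALICO_IPV4POOL_CIDR' '#   value: "192.168.0.0/16"' |
  sed -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
      -e 's|#   value: "192.168.0.0/16"|  value: "10.18.0.0/16"|'
```

To apply it to the real file, replace the printf pipeline with `sed -i -e … -e … calico.yaml`.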

Deploy

# kubectl apply -f calico.yaml

Check the status of coredns in the kube-system namespace. If it is in the Running state, it indicates that the networking is successful.

# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-68d86f8988-czrkt 1/1 Running 0 71s
calico-node-bpfd6 1/1 Running 0 71s
coredns-57d4cbf879-hqs5v 1/1 Running 0 11m
coredns-57d4cbf879-n8l5b 1/1 Running 0 11m
etcd-master01 1/1 Running 0 11m
kube-apiserver-master01 1/1 Running 0 11m
kube-controller-manager-master01 1/1 Running 0 11m
kube-proxy-qft5v 1/1 Running 0 11m
kube-scheduler-master01 1/1 Running 0 11m

2.9 Cluster worker node addition (work node execution)

Because container image downloads can be slow, errors may be reported; the main one is that the CNI (cluster network plug-in) is not ready. If the network is reachable, just wait patiently.

# kubeadm join 192.168.192.11:6443 --token 7s3jkc.6jx5gq6jfkz5kyuz --discovery-token-ca-cert-hash sha256:4e28c36ea2ed05ec8c83e4aca05fb370c188cf49ed533c30f277d3f999fa623e
//192.168.192.11 is the master host's IP address
//If you forget the join command, it can be regenerated on the master node
# kubeadm token create --print-join-command

2.10 Verify cluster availability

//Check the status of the kubernetes cluster nodes
# kubectl get node
NAME STATUS ROLES AGE VERSION
master01 Ready control-plane,master 24m v1.21.0
worker01 Ready <none> 2m38s v1.21.0
worker02 Ready <none> 19s v1.21.0

//Check the running status of kubernetes cluster pod
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-68d86f8988-czrkt 1/1 Running 0 14m
calico-node-8cfsd 1/1 Running 0 3m17s
calico-node-bpfd6 1/1 Running 0 14m
calico-node-gfzbg 1/1 Running 0 58s
coredns-57d4cbf879-hqs5v 1/1 Running 0 25m
coredns-57d4cbf879-n8l5b 1/1 Running 0 25m
etcd-master01 1/1 Running 0 25m
kube-apiserver-master01 1/1 Running 0 25m
kube-controller-manager-master01 1/1 Running 0 25m
kube-proxy-2chv2 1/1 Running 0 3m17s
kube-proxy-lsshg 1/1 Running 0 58s
kube-proxy-qft5v 1/1 Running 0 25m
kube-scheduler-master01 1/1 Running 0 25m

//View cluster status:
# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
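Beyond `kubectl get`, a small end-to-end smoke test gives more confidence: schedule a pod and confirm it reaches Running on a worker (a sketch; the deployment name nginx-test is arbitrary, and the commands assume they run on master01 with the kubeconfig from section 2.7):

```shell
# Deploy a test workload, wait for it to become available, then clean up.
if command -v kubectl >/dev/null 2>&1; then
  kubectl create deployment nginx-test --image=nginx
  kubectl wait --for=condition=available deployment/nginx-test --timeout=120s
  kubectl get pods -l app=nginx-test -o wide   # should show Running on a worker node
  kubectl delete deployment nginx-test          # clean up the test workload
else
  echo "kubectl not found; run this on master01"
fi
```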