kubeadm one-click deployment of a k8s 1.25.4 high-availability cluster – Update (2023-09-15)

Original address: kubeadm one-click deployment of k8s1.25.4 high-availability cluster


Configuration list

Host name                  IP address         Components
node1                      192.168.111.130    etcd, apiserver, controller-manager, scheduler
node2                      192.168.111.131    etcd, apiserver, controller-manager, scheduler
node3                      192.168.111.133    etcd, apiserver, controller-manager, scheduler
apiserver.cluster.local    192.168.111.130    VIP

Local resources are limited, so only three master hosts were tested. Adding worker nodes is simpler than adding masters.

Using local DNS (/etc/hosts) to resolve apiserver.cluster.local takes the place of a more complex and potentially unreliable load balancer. If the current master fails, switch by pointing apiserver.cluster.local at another master's IP, or automate the switchover with nginx.
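
A minimal sketch of what this resolution looks like, assuming the addresses from the configuration list above. The init scripts appear to write these entries themselves ("hosts write: [ok]" in the logs); it is shown here only to make the failover idea concrete:

# /etc/hosts on every node (sketch, using the VIP row from the table above)
192.168.111.130 apiserver.cluster.local

# If node1 fails, repoint the name at a surviving master and existing
# kubeconfigs keep working, e.g.:
# 192.168.111.131 apiserver.cluster.local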

Operating system and software version information

Two operating systems were tested:

  • centos:7

  • openEuler:22.03

Software version

  • kernel:4.19.12

  • kubelet:1.25.4

  • kubeadm:1.25.4

  • kubectl:1.25.4

  • cri-tools:1.26.0

  • socat:1.7.3.2

  • containerd:1.6.10

  • nerdctl:1.5.0

  • etcd:3.5.6

  • cni-plugins:1.1.1

  • crictl:1.25.0

The scripts used (compatible with CentOS 7 and openEuler)

  • 01-rhel_init.sh initializes the server and checks whether the basic prerequisites for deploying K8s are met (the overall run order is sketched after this list).

  • 02-containerd-install.sh is used to install the containerd container runtime.

  • 03-kubeadm-mater1-init.sh is used to install kubeadm and other services, initialize the master1 node, and create a token for registration of other nodes.

  • 04-kubeadm-mater-install.sh is used to install kubeadm and other services on other nodes and register with master1.

  • copy-certs.sh is used to distribute CA certificates, etc. from the master1 node to other master nodes.
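
Taken together, the intended run order (as used in the deployment walkthrough below) looks roughly like this; treat it as a sketch of the flow rather than a wrapper script:

# On every node: initialize the OS and install the container runtime
sh 01-rhel_init.sh all            # 01-openeuler-init.sh on openEuler hosts
reboot                            # only if the script upgraded the kernel
sh 02-containerd-install.sh

# On node1 only: install kubeadm and initialize the first control plane
sh 03-kubeadm-mater1-init.sh all

# On node1: distribute the CA certificates to the other masters
sh copy-certs.sh                  # arguments, if any, per the script itself

# On node2/node3: install kubeadm, import images, then run the printed
# "kubeadm join ... --control-plane" command
sh 04-kubeadm-mater-install.sh all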

Deployment Support

Support online installation

Support offline installation

Currently, only x86_64 packages have been produced.

For the time being, only CentOS 7 and openEuler 22.03 have been tested.

Follow-up plan

  • Support arm64

  • Test domestic (Chinese) operating systems

  • Release an ansible version to make deployment more elegant

Deployment Process

  • node1 initializes the cluster master node

  • Copy node1's certificates to node2 and node3

  • node2 and node3 join the cluster

Preparation

  • 1. Upload the software package

  • 2. Configure passwordless SSH from node1 to the other nodes yourself (see the sketch after this list).

  • 3. Configure hosts

  • 4. Modify the IP address of the kubeadm-config.yaml file to your current host IP address.

  • 5. Leave the rest to me
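
A minimal sketch of steps 2–4, assuming the host names and IPs from the configuration list above; adjust addresses and paths to your environment:

# 2. Passwordless SSH from node1 to the other nodes
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@192.168.111.131
ssh-copy-id root@192.168.111.133

# 3. Hosts entries on every node
cat >> /etc/hosts <<'EOF'
192.168.111.130 node1
192.168.111.131 node2
192.168.111.133 node3
192.168.111.130 apiserver.cluster.local
EOF

# 4. Point kubeadm-config.yaml at your current host's IP
#    (replace the sample address with your own node1 IP)
sed -i 's/192\.168\.111\.130/<your-node1-ip>/g' kubeadm-config.yaml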

Start deployment

The static resource directory on node1 (CentOS 7) is as follows:

[root@node1 ~]# tree .
.
├── 01-rhel_init.sh
├── 02-containerd-install.sh
├── 03-kubeadm-mater1-init.sh
├── 04-kubeadm-mater-install.sh
├── bin
│ ├── etcdctl
│ ├── nerdctl
│ └── runc
├── conf
│ ├── containerd.service
│ ├── docker.service
│ ├── k8s.conf
│ └── sysctl.conf
├── copy-certs.sh
├── images_v1.25.4.tar
├── k8s_init.log
├── kernel
│ └── kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
├── kubeadm-config.yaml
├── kube-flannel.yml
├── packages
│ ├── cni-plugins-linux-amd64-v1.1.1.tgz
│ ├── containerd-1.6.10-linux-amd64.tar.gz
│ ├── cri-containerd-1.6.10-linux-amd64.tar.gz
│ ├── crictl-v1.25.0-linux-amd64.tar.gz
│ ├── docker-20.10.21.tgz
│ ├── etcd-v3.5.6-linux-amd64.tar.gz
│ └── nerdctl-1.5.0-linux-amd64.tar.gz
├── py_join.py
├── rely
│ ├── centos7
│ │ ├── bash-completion-2.1-8.el7.noarch.rpm
│ │ ├── cpp-4.8.5-44.el7.x86_64.rpm
│ │ ├── device-mapper-1.02.170-6.el7_9.5.x86_64.rpm
│ │ ├── device-mapper-event-1.02.170-6.el7_9.5.x86_64.rpm
│ │ ├── device-mapper-event-libs-1.02.170-6.el7_9.5.x86_64.rpm
│ │ ├── device-mapper-libs-1.02.170-6.el7_9.5.x86_64.rpm
│ │ ├── device-mapper-persistent-data-0.8.5-3.el7_9.2.x86_64.rpm
│ │ ├── dstat-0.7.2-12.el7.noarch.rpm
│ │ ├── epel-release-7-11.noarch.rpm
│ │ ├── gcc-4.8.5-44.el7.x86_64.rpm
│ │ ├── gdisk-0.8.10-3.el7.x86_64.rpm
│ │ ├── glibc-2.17-326.el7_9.x86_64.rpm
│ │ ├── glibc-common-2.17-326.el7_9.x86_64.rpm
│ │ ├── glibc-devel-2.17-326.el7_9.x86_64.rpm
│ │ ├── glibc-headers-2.17-326.el7_9.x86_64.rpm
│ │ ├── gpm-libs-1.20.7-6.el7.x86_64.rpm
│ │ ├── iotop-0.6-4.el7.noarch.rpm
│ │ ├── libgcc-4.8.5-44.el7.x86_64.rpm
│ │ ├── libgomp-4.8.5-44.el7.x86_64.rpm
│ │ ├── libmpc-1.0.1-3.el7.x86_64.rpm
│ │ ├── lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64.rpm
│ │ ├── lrzsz-0.12.20-36.el7.x86_64.rpm
│ │ ├── lsof-4.87-6.el7.x86_64.rpm
│ │ ├── lvm2-2.02.187-6.el7_9.5.x86_64.rpm
│ │ ├── lvm2-libs-2.02.187-6.el7_9.5.x86_64.rpm
│ │ ├── mpfr-3.1.1-4.el7.x86_64.rpm
│ │ ├── net-tools-2.0-0.25.20131004git.el7.x86_64.rpm
│ │ ├── ntpdate-4.2.6p5-29.el7.centos.2.x86_64.rpm
│ │ ├── psmisc-22.20-17.el7.x86_64.rpm
│ │ ├── python-chardet-2.2.1-3.el7.noarch.rpm
│ │ ├── python-kitchen-1.1.1-5.el7.noarch.rpm
│ │ ├── screen-4.1.0-0.27.20120314git3c2946.el7_9.x86_64.rpm
│ │ ├── sysstat-10.1.5-20.el7_9.x86_64.rpm
│ │ ├── telnet-0.17-66.el7.x86_64.rpm
│ │ ├── tree-1.6.0-10.el7.x86_64.rpm
│ │ ├── unzip-6.0-24.el7_9.x86_64.rpm
│ │ ├── vim-common-7.4.629-8.el7_9.x86_64.rpm
│ │ ├── vim-enhanced-7.4.629-8.el7_9.x86_64.rpm
│ │ ├── vim-filesystem-7.4.629-8.el7_9.x86_64.rpm
│ │ ├── yum-utils-1.1.31-54.el7_8.noarch.rpm
│ │ └── zip-3.0-11.el7.x86_64.rpm
│ └── openeuler
│ ├── binutils-2.37-19.oe2203sp2.x86_64.rpm
│ ├── bpftool-5.10.0-153.25.0.101.oe2203sp2.x86_64.rpm
│ ├── curl-7.79.1-23.oe2203sp2.x86_64.rpm
│ ├── dnf-4.14.0-15.oe2203sp2.noarch.rpm
│ ├── dnf-data-4.14.0-15.oe2203sp2.noarch.rpm
│ ├── file-5.41-3.oe2203sp2.x86_64.rpm
│ ├── file-libs-5.41-3.oe2203sp2.x86_64.rpm
│ ├── gawk-5.1.1-5.oe2203sp2.x86_64.rpm
│ ├── gnutls-3.7.2-9.oe2203sp2.x86_64.rpm
│ ├── gnutls-utils-3.7.2-9.oe2203sp2.x86_64.rpm
│ ├── grub2-common-2.06-33.oe2203sp2.noarch.rpm
│ ├── grub2-pc-2.06-33.oe2203sp2.x86_64.rpm
│ ├── grub2-pc-modules-2.06-33.oe2203sp2.noarch.rpm
│ ├── grub2-tools-2.06-33.oe2203sp2.x86_64.rpm
│ ├── grub2-tools-efi-2.06-33.oe2203sp2.x86_64.rpm
│ ├── grub2-tools-extra-2.06-33.oe2203sp2.x86_64.rpm
│ ├── grub2-tools-minimal-2.06-33.oe2203sp2.x86_64.rpm
│ ├── kernel-5.10.0-153.25.0.101.oe2203sp2.x86_64.rpm
│ ├── kernel-devel-5.10.0-153.25.0.101.oe2203sp2.x86_64.rpm
│ ├── kernel-headers-5.10.0-153.25.0.101.oe2203sp2.x86_64.rpm
│ ├── kernel-tools-5.10.0-153.25.0.101.oe2203sp2.x86_64.rpm
│ ├── krb5-devel-1.19.2-9.oe2203sp2.x86_64.rpm
│ ├── krb5-libs-1.19.2-9.oe2203sp2.x86_64.rpm
│ ├── libcurl-7.79.1-23.oe2203sp2.x86_64.rpm
│ ├── libnghttp2-1.46.0-4.oe2203sp2.x86_64.rpm
│ ├── libsmbclient-4.17.5-7.oe2203sp2.x86_64.rpm
│ ├── libtiff-4.3.0-31.oe2203sp2.x86_64.rpm
│ ├── libtiff-devel-4.3.0-31.oe2203sp2.x86_64.rpm
│ ├── libwbclient-4.17.5-7.oe2203sp2.x86_64.rpm
│ ├── ncurses-6.3-12.oe2203sp2.x86_64.rpm
│ ├── ncurses-base-6.3-12.oe2203sp2.noarch.rpm
│ ├── ncurses-libs-6.3-12.oe2203sp2.x86_64.rpm
│ ├── ntp-4.2.8p15-11.oe2203sp2.x86_64.rpm
│ ├── ntp-help-4.2.8p15-11.oe2203sp2.noarch.rpm
│ ├── ntpstat-0.6-4.oe2203sp2.noarch.rpm
│ ├── openssh-8.8p1-21.oe2203sp2.x86_64.rpm
│ ├── openssh-clients-8.8p1-21.oe2203sp2.x86_64.rpm
│ ├── openssh-server-8.8p1-21.oe2203sp2.x86_64.rpm
│ ├── openssl-1.1.1m-22.oe2203sp2.x86_64.rpm
│ ├── openssl-devel-1.1.1m-22.oe2203sp2.x86_64.rpm
│ ├── openssl-libs-1.1.1m-22.oe2203sp2.x86_64.rpm
│ ├── pcre2-10.39-9.oe2203sp2.x86_64.rpm
│ ├── pcre2-devel-10.39-9.oe2203sp2.x86_64.rpm
│ ├── perl-5.34.0-9.oe2203sp2.x86_64.rpm
│ ├── perl-devel-5.34.0-9.oe2203sp2.x86_64.rpm
│ ├── perl-libs-5.34.0-9.oe2203sp2.x86_64.rpm
│ ├── procps-ng-4.0.2-10.oe2203sp2.x86_64.rpm
│ ├── python3-3.9.9-25.oe2203sp2.x86_64.rpm
│ ├── python3-dnf-4.14.0-15.oe2203sp2.noarch.rpm
│ ├── python3-perf-5.10.0-153.25.0.101.oe2203sp2.x86_64.rpm
│ ├── samba-client-libs-4.17.5-7.oe2203sp2.x86_64.rpm
│ ├── samba-common-4.17.5-7.oe2203sp2.x86_64.rpm
│ ├── sqlite-3.37.2-6.oe2203sp2.x86_64.rpm
│ └── yum-4.14.0-15.oe2203sp2.noarch.rpm
└── repo
    ├── centos7
    │ ├── config.toml
    │ ├── conntrack-tools-1.4.4-7.el7.x86_64.rpm
    │ ├── cri-tools-1.26.0-0.x86_64.rpm
    │ ├── kubeadm-1.25.4-0.x86_64.rpm
    │ ├── kubectl-1.25.4-0.x86_64.rpm
    │ ├── kubelet-1.25.4-0.x86_64.rpm
    │ ├── kubernetes-cni-1.2.0-0.x86_64.rpm
    │ ├── libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
    │ ├── libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
    │ ├── libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
    │ └── socat-1.7.3.2-2.el7.x86_64.rpm
    └── openeuler
        ├── conntrack-tools-1.4.6-6.oe2203sp2.x86_64.rpm
        ├── containernetworking-plugins-1.1.1-2.oe2203sp2.x86_64.rpm
        ├── cri-tools-1.26.0-0.x86_64.rpm
        ├── ebtables-2.0.11-10.oe2203sp2.x86_64.rpm
        ├── kubeadm-1.25.4-0.x86_64.rpm
        ├── kubectl-1.25.4-0.x86_64.rpm
        ├── kubelet-1.25.4-0.x86_64.rpm
        ├── libnetfilter_cthelper-1.0.0-16.oe2203sp2.x86_64.rpm
        ├── libnetfilter_cttimeout-1.0.0-15.oe2203sp2.x86_64.rpm
        ├── libnetfilter_queue-1.0.5-2.oe2203sp2.x86_64.rpm
        └── socat-1.7.3.2-8.oe2203sp2.x86_64.rpm

10 directories, 142 files

In addition to the core K8s dependencies and offline images, the directory also contains packages for day-to-day operations and maintenance tools.

Execute initialization script (all nodes)

[root@node1 ~]# chmod +x 01-rhel_init.sh
[root@node1 ~]# sh 01-rhel_init.sh all
Perform user detection: [ok]
Operating system detection: [ok]
External network permission check: [ok]
CPU configuration check: [ok]
Memory configuration check: [ok]
Turn off firewall: [ok]
Close swap partition: [ok]
History command format: [ok]
node1 2023-09-14 10:15:06: Installation failed, please ignore!!!
# ntpdate is not installed, start the installation....
ntpdate installed successfully: [failed]
Time synchronization detection: [failed]
Add kernel parameters: [ok]
modprobe: FATAL: Module ip_vs_fo not found.
Enable ipvs module: [ok]
node1 2023-09-14 10:20:12: Current kernel (3.10.0) is lower than 4.19. Starting update...
--------------------------------------------------------------------------------
[node1 2023-09-14 10:20:49] The kernel has been updated. Please restart the server after confirming that it is correct!!!
--------------------------------------------------------------------------------
[root@node1 ~]# reboot
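
After the reboot it is worth confirming that the kernel and prerequisites look right; a quick sanity check (not part of the scripts):

uname -r                                    # expect 4.19.x on CentOS 7
swapon --show                               # expect no output (swap disabled)
lsmod | grep -E 'ip_vs|br_netfilter'        # ipvs and bridge modules loaded
sysctl net.bridge.bridge-nf-call-iptables   # expect 1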

Install runtime (all nodes)

[root@node1 ~]# chmod +x 02-containerd-install.sh
[root@node1 ~]# sh 02-containerd-install.sh
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
Containerd containerd-1.6.10 is installed and configured as a systemd service!
Use the following command to test whether the installation is successful: nerdctl run -d -p 8080:80 --name nginx nginx:alpine
[root@node1 ~]# nerdctl ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
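
As suggested by the script output, a quick runtime check can be run at this point (a sketch; the nginx:alpine image must be reachable, or pre-loaded in an offline environment):

systemctl is-active containerd              # expect "active"
nerdctl run -d -p 8080:80 --name nginx nginx:alpine
curl -sI http://127.0.0.1:8080 | head -n 1  # expect an HTTP 200 response
nerdctl rm -f nginx                         # clean up the test container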

Initialize kubernetes

[root@node1 ~]# chmod +x 03-kubeadm-mater1-init.sh
[root@node1 ~]# sh 03-kubeadm-mater1-init.sh all
hosts write: [ok]
ipvs detection: [ok]
Kernel detection: [ok]
containerd detection: [ok]
System check: [ok]
The current system is centos and the version is 7
[node1 2023-09-14 10:22:44] kubeadm is not installed, starting offline installation
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
kubeadm installation: [ok]
[node1 2023-09-14 10:23:18] Start importing offline images
[node1 2023-09-14 10:24:01] kubeadm begins to initialize the master node
K8s initialization: [ok]
[node1 2023-09-14 10:24:12] The joining connections of master and worker nodes are as follows
Control plane information:
kubeadm join apiserver.cluster.local:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:5138ec691afbbb3c52e1d6aae6f31374d756f82a73de0a3000d9c02af483e633 \
        --control-plane

Worker node information:
kubeadm join apiserver.cluster.local:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:5138ec691afbbb3c52e1d6aae6f31374d756f82a73de0a3000d9c02af483e633 \

To join as a control-plane (master) node, the node must run the join command that carries the token together with the --control-plane flag.
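
Before node2 and node3 can join as control-plane nodes, the shared certificates from node1 must already be on them; copy-certs.sh handles this, roughly along these lines (a sketch of the idea, not the script's exact contents). If the token has expired, a fresh join command can be printed on node1 with kubeadm token create --print-join-command.

# Rough equivalent of copy-certs.sh (assumed; the real script may differ)
for host in node2 node3; do
  ssh root@$host "mkdir -p /etc/kubernetes/pki/etcd"
  scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} \
      root@$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} root@$host:/etc/kubernetes/pki/etcd/
done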

node2 execution

[root@node2 ~]# kubeadm join apiserver.cluster.local:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:5138ec691afbbb3c52e1d6aae6f31374d756f82a73de0a3000d9c02af483e633 \
> --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [apiserver.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node2] and IPs [10.96.0.1 192.168.111.131 192.168.111.130 192.168.111.133 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node2] and IPs [192.168.111.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node2] and IPs [192.168.111.131 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node node2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node2 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

node3 execution

Since node3 is an openEuler system, the procedure is the same as the scripts executed earlier:

Execute the 01-openeuler-init.sh script to perform initialization operations.

Execute 02-containerd-install.sh to install the runtime

Execute 04-kubeadm-mater-install.sh to install kubeadm and other components and import offline images

[root@node3 ~]# chmod +x 02-containerd-install.sh
[root@node3 ~]# sh 02-containerd-install.sh
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
Containerd containerd-1.6.10 is installed and configured as a systemd service!
Use the following command to test whether the installation is successful: nerdctl run -d -p 8080:80 --name nginx nginx:alpine
[root@node3 ~]# chmod +x 04-kubeadm-mater-install.sh
[root@node3 ~]# sh 04-kubeadm-mater-install.sh all
hosts write: [ok]
ipvs detection: [ok]
Kernel detection: [ok]
containerd detection: [ok]
System check: [ok]
The current system is openEuler, version 22.03
[node3 2023-09-14 10:51:57] kubeadm is not installed, starting offline installation
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
kubeadm installation: [ok]
[node3 2023-09-14 10:52:22] Start importing offline images
K8s initialization: [ok]
[root@node3 ~]# kubeadm join apiserver.cluster.local:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:5138ec691afbbb3c52e1d6aae6f31374d756f82a73de0a3000d9c02af483e633 \
> --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [apiserver.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node3] and IPs [10.96.0.1 192.168.111.133 192.168.111.130 192.168.111.131 127.0.0.1]
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node3] and IPs [192.168.111.133 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node3] and IPs [192.168.111.133 127.0.0.1 ::1]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node node3 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node3 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Confirm deployment results

[root@node1 ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 NotReady control-plane 44m v1.25.4 192.168.111.130 <none> CentOS Linux 7 (Core) 4.19.12-1.el7.elrepo.x86_64 containerd://1.6.10
node2 NotReady control-plane 2m5s v1.25.4 192.168.111.131 <none> CentOS Linux 7 (Core) 4.19.12-1.el7.elrepo.x86_64 containerd://1.6.10
node3 NotReady control-plane 37s v1.25.4 192.168.111.133 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.25.0.101.oe2203sp2.x86_64 containerd://1.6.10
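
All three nodes report NotReady because no CNI network plug-in has been installed yet; kube-system pods such as CoreDNS stay Pending until one is. A quick way to see this (standard kubectl checks, not part of the scripts):

kubectl get pods -n kube-system -o wide       # coredns pods remain Pending without a CNI
kubectl describe node node1 | grep -i ready   # the Ready condition reports the missing network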

Install network plug-in flannel

[root@node1 ~]# wget -c https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@node1 ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@node1 ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 Ready control-plane 50m v1.25.4 192.168.111.130 <none> CentOS Linux 7 (Core) 4.19.12-1.el7.elrepo.x86_64 containerd://1.6.10
node2 Ready control-plane 7m46s v1.25.4 192.168.111.131 <none> CentOS Linux 7 (Core) 4.19.12-1.el7.elrepo.x86_64 containerd://1.6.10
node3 Ready control-plane 6m18s v1.25.4 192.168.111.133 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.25.0.101.oe2203sp2.x86_64 containerd://1.6.10
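
Finally, it is worth checking that the stacked etcd cluster is healthy and that the hosts-based apiserver failover actually works; a sketch using etcdctl (shipped in bin/) with the default kubeadm certificate paths:

# etcd cluster health (run on any master)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.111.130:2379,https://192.168.111.131:2379,https://192.168.111.133:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  endpoint health

# Failover test: repoint apiserver.cluster.local in /etc/hosts at another
# master (e.g. 192.168.111.131), then confirm the API still answers
kubectl get nodes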

With that, the deployment of the high-availability cluster is complete. The scripts and offline software packages will be organized and posted at the end of this article, and the scripts will also be uploaded to GitHub so that everyone can improve them together.

There are still many points to improve. For example, the shell workflow is split into three separate scripts, which is not elegant; the plan is to chain these steps together with ansible for a cleaner, more process-driven deployment.
