Build a Kubernetes cluster

Prerequisites for deployment

Prerequisites for deploying a Kubernetes cluster using kubeadm:

  • Linux hosts capable of running Kubernetes, such as Debian, Red Hat, and their variants
  • At least 2 GB of memory and 2 CPUs per host
  • Full, unrestricted network connectivity between all hosts
  • A unique hostname, MAC address, and product_uuid for each host; hostnames must be resolvable
  • The ports used by Kubernetes are open, or iptables is disabled outright
  • The Swap device is disabled on each host
  • The clocks of all hosts are synchronized

Deployment environment

  • OS: Ubuntu 20.04.2 LTS
  • Docker: 20.10.10, CGroup Driver: systemd
  • Kubernetes: v1.26.3, CRI: containerd, CNI: Flannel
  • Hosts: see the table below

Host IP          Host Name          Role
192.168.32.200   k8s-master01.org   master
192.168.32.203   k8s-node01.org     node01
192.168.32.204   k8s-node02.org     node02
192.168.32.205   k8s-node03.org     node03

Modify host name

# Modify the host name of 192.168.32.200 to k8s-master01.org
# Modify the host name of 192.168.32.203 to k8s-node01.org
# Modify the host name of 192.168.32.204 to k8s-node02.org
# Modify the host name of 192.168.32.205 to k8s-node03.org
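
For example, a minimal way to make the change persistent is hostnamectl; run the matching command on each host (this command is an illustration, not copied from the original):

# run on 192.168.32.200; repeat with the corresponding name on each node
hostnamectl set-hostname k8s-master01.org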

Host time synchronization

Install chrony on all hosts

# execute on all hosts
root@k8s-master01:~# apt install -y chrony

It is recommended to configure and use a local time server, especially when there are a large number of nodes. When a local time server is available, modify the node's /etc/chrony/chrony.conf configuration file and point the time server at the corresponding host. The configuration format is as follows:

server CHRONY-SERVER-NAME-OR-IP iburst
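
After adjusting the configuration, the service can be restarted and the selected time sources checked; this verification step is an addition, and the exact source list depends on your configuration:

systemctl restart chrony
chronyc sources -v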

Host name resolution

To simplify the configuration steps, this test environment uses the hosts file to resolve each node's name. The content of the file is shown below. The kubeapi host name is used as the dedicated access name of the API Server in a high-availability environment, which leaves room for the later high-availability configuration of the control plane.

# Edit the /etc/hosts file to add the following content
root@k8s-master01:~# vim /etc/hosts
192.168.32.200 k8s-master01.org
192.168.32.203 k8s-node01.org
192.168.32.204 k8s-node02.org
192.168.32.205 k8s-node03.org
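
Note that the listing above does not include the kubeapi alias mentioned earlier; if you plan to use it as the API Server access name, one possible approach (an assumption, not part of the original listing) is to append the alias to the master's entry:

192.168.32.200 k8s-master01.org kubeapi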

Disable Swap device

When deploying a cluster, kubeadm pre-checks by default whether the Swap device has been disabled on the current host and forcibly terminates the deployment if it has not. Therefore, as long as host memory is sufficient, all Swap devices should be disabled; otherwise, additional options must be used to ignore the check error when the kubeadm init and kubeadm join commands are run later.

Disabling the Swap device takes two steps. First, turn off all currently enabled Swap devices:

# temporarily closed, all machines execute
root@k8s-master01:~# swapoff -a

Then edit the /etc/fstab configuration file and comment out all lines used to mount Swap devices:

# execute on all machines
root@k8s-master01:~# vim /etc/fstab
# Comment out the following line
#/swap.img none swap sw 0 0
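
If you prefer to comment out the Swap entry non-interactively, a hedged one-liner (it prefixes any uncommented line containing a swap field with "#"; inspect /etc/fstab afterwards) is:

# comment out swap mount entries in /etc/fstab
sed -ri '/\sswap\s/s/^([^#])/#\1/' /etc/fstab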

Disable the default firewall service

Linux distributions such as Ubuntu and Debian use ufw (Uncomplicated Firewall) by default as a front end that simplifies the use of iptables. When enabled, it generates some default rules to harden the system. To reduce configuration complexity, this article disables it directly.

root@k8s-master01:~# ufw disable
Firewall stopped and disabled on system startup
root@k8s-master01:~# ufw status
Status: inactive
root@k8s-master01:~#

Package installation

Tip: The following operations need to be done separately on all four hosts in this example.

Install and start docker

First, configure the package repository for the docker-ce related packages. Here, Alibaba Cloud's mirror server is used as an example:

Reference: docker-ce mirror on the Alibaba Open Source Mirror Station (aliyun.com).

# step 1: Install some necessary system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# step 2: Install the GPG certificate
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: Write software source information
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: Update and install Docker-CE
sudo apt-get -y update
sudo apt-get -y install docker-ce

# Install the specified version of Docker-CE:
# Step 1: Find the version of Docker-CE:
# apt-cache madison docker-ce
# docker-ce | 17.03.1~ce-0~ubuntu-xenial | https://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
# docker-ce | 17.03.0~ce-0~ubuntu-xenial | https://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
# Step 2: Install the specified version of Docker-CE: (VERSION such as 17.03.1~ce-0~ubuntu-xenial above)
# sudo apt-get -y install docker-ce=[VERSION]

This article takes version 20.10.10 as an example:

root@k8s-node3:~# apt install docker-ce=5:20.10.10~3-0~ubuntu-focal docker-ce-cli=5:20.10.10~3-0~ubuntu-focal
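
On Ubuntu the docker service is normally enabled and started automatically after installation; if not, it can be enabled and started explicitly (an optional extra step):

systemctl enable --now docker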

The kubelet needs the Docker container engine to use systemd as its CGroup driver, whereas Docker's default is cgroupfs. Therefore, edit the Docker configuration file /etc/docker/daemon.json and add the following content, where registry-mirrors specifies the image acceleration services to use.

vim /etc/docker/daemon.json
{
"registry-mirrors": [
  "https://ung2thfc.mirror.aliyuncs.com",
  "https://mirror.ccs.tencentyun.com",
  "https://registry.docker-cn.com",
  "http://hub-mirror.c.163.com",
  "https://docker.mirrors.ustc.edu.cn"],
"exec-opts": ["native.cgroupdriver=systemd"]
}

systemctl daemon-reload && systemctl restart docker
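
To confirm that the new CGroup driver took effect, docker info can be checked (a verification step added here, not part of the original):

docker info | grep -i "cgroup driver"
# expected output: Cgroup Driver: systemd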

Install cri-dockerd

Kubernetes has removed support for dockershim since v1.24, and Docker Engine does not natively support the CRI specification, so the two cannot be integrated directly. To this end, Mirantis and Docker jointly created the cri-dockerd project to provide Docker Engine with a shim that supports the CRI specification, allowing Kubernetes to control Docker through CRI.

Project address: https://github.com/Mirantis/cri-dockerd

The cri-dockerd project provides prebuilt binary packages; users can download the version matching their system and platform to complete the installation. Here, the Ubuntu 20.04 64-bit environment and cri-dockerd v0.3.0, the latest release at the time, are used as an example.

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.0/cri-dockerd_0.3.0.3-0.ubuntu-focal_amd64.deb

dpkg -i cri-dockerd_0.3.0.3-0.ubuntu-focal_amd64.deb

After the installation is complete, the corresponding service cri-docker.service is started automatically. It can be verified with the following command; if the service is in the running state, proceed to the next steps.

root@k8s-master01:~# systemctl status cri-docker.service
● cri-docker.service - CRI Interface for Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/cri-docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2023-03-21 10:59:57 CST; 1min 20s ago
TriggeredBy: cri-docker.socket
       Docs: https://docs.mirantis.com
   Main PID: 17591 (cri-dockerd)
      Tasks: 7
     Memory: 11.9M
     CGroup: /system.slice/cri-docker.service
             └─17591 /usr/bin/cri-dockerd --container-runtime-endpoint fd://

Mar 21 10:59:57 k8s-master01.org cri-dockerd[17591]: time="2023-03-21T10:59:57+08:00" level=info msg="Start docker client with request timeout 0s"
Mar 21 10:59:57 k8s-master01.org cri-dockerd[17591]: time="2023-03-21T10:59:57+08:00" level=info msg="Hairpin mode is set to none"
Mar 21 10:59:57 k8s-master01.org cri-dockerd[17591]: time="2023-03-21T10:59:57+08:00" level=info msg="Loaded network plugin cni"
Mar 21 10:59:57 k8s-master01.org cri-dockerd[17591]: time="2023-03-21T10:59:57+08:00" level=info msg="Docker cri networking managed by network plugin cni"
Mar 21 10:59:57 k8s-master01.org systemd[1]: Started CRI Interface for Docker Application Container Engine.
Mar 21 10:59:58 k8s-master01.org cri-dockerd[17591]: time="2023-03-21T10:59:57+08:00" level=info msg="Docker Info: &{ID:WQBA:P7R2:H6ZI:KWU3:FVFW:MHLC:QTT7:CJCX>
Mar 21 10:59:58 k8s-master01.org cri-dockerd[17591]: time="2023-03-21T10:59:57+08:00" level=info msg="Setting cgroupDriver cgroupfs"
Mar 21 10:59:58 k8s-master01.org cri-dockerd[17591]: time="2023-03-21T10:59:57+08:00" level=info msg="Docker cri received runtime config &RuntimeConfig{Network>
Mar 21 10:59:58 k8s-master01.org cri-dockerd[17591]: time="2023-03-21T10:59:57+08:00" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Mar 21 10:59:58 k8s-master01.org cri-dockerd[17591]: time="2023-03-21T10:59:57+08:00" level=info msg="Start cri-dockerd grpc backend"

Install kubelet, kubeadm and kubectl

First, configure the package repository for kubelet, kubeadm, and the other related packages on each host. Here, Alibaba Cloud's mirror service is used as an example:

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

After the installation is complete, confirm the version of kubeadm and the other program files; this is also the version number that must be explicitly specified later when initializing the Kubernetes cluster.
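
For example, the installed version can be confirmed and the packages pinned so that routine upgrades do not move them unexpectedly (the hold step is an optional suggestion, not in the original procedure):

kubeadm version -o short
apt-mark hold kubelet kubeadm kubectl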

Integrate kubelet and cri-dockerd

The kubelet supports only the CRI specification, so it must integrate with docker-ce through cri-dockerd, which implements that specification.

Configure cri-dockerd

Configure cri-dockerd so that it can load the CNI plugin correctly. Edit the /usr/lib/systemd/system/cri-docker.service file and ensure that the value of ExecStart in the [Service] section is similar to the following.

ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7 --cni-bin-dir=/opt/cni/bin --cni-cache-dir=/var/lib/cni/cache --cni-conf-dir=/etc/cni/net.d

Configuration parameters that need to be added (the value of each parameter should correspond to the actual paths of the CNI plugin deployed on the system):

  • --network-plugin: specifies the type of network plugin specification; CNI is used here;
  • --cni-bin-dir: specifies the search directory for the CNI plugin's binary program files;
  • --cni-cache-dir: the cache directory used by the CNI plugin;
  • --cni-conf-dir: the directory from which the CNI plugin loads configuration files;

After the configuration is complete, reload and restart the cri-docker.service service.

root@k8s-master01:~# systemctl daemon-reload;systemctl restart cri-docker
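
An optional quick check that the modified ExecStart line has been picked up and the service is still running:

systemctl cat cri-docker.service | grep ExecStart
systemctl is-active cri-docker.service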

Configure kubelet

Configure the kubelet to specify the path of the local Unix socket file opened by cri-dockerd, which by default is "/run/cri-dockerd.sock". Edit the file /etc/sysconfig/kubelet and add the parameters as follows.

Tip: If the /etc/sysconfig directory does not exist, you need to create this directory first.

KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/run/cri-dockerd.sock"

It should be noted that this configuration can also be omitted; instead, the "--cri-socket unix:///run/cri-dockerd.sock" option can be added directly to each subsequent kubeadm command.

Initialize the first master node

This step builds the master node of the Kubernetes cluster. After it is complete, each worker node can be added directly to the cluster. Note that on a cluster deployed by kubeadm, the core components of the cluster, kube-apiserver, kube-controller-manager, kube-scheduler, and etcd, all run as static Pods, and the images they depend on are pulled by default from the registry service registry.k8s.io. However, that service may not be directly accessible; there are two commonly used solutions:

  • Use a proxy service capable of reaching the registry service;
  • Use services on domestic mirror servers, such as registry.aliyuncs.com/google_containers, etc.

Initialize the master node (complete the following operations on k8s-master01)

Run the following command to complete the initialization of the k8s-master01 node:

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.26.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --token-ttl=0 --cri-socket unix:///run/cri-dockerd.sock

A brief description of each option in the command is as follows:

  • --image-repository: specifies the image repository to use; the default is registry.k8s.io;
  • --kubernetes-version: the version number of the Kubernetes components, which must match the version of the installed kubelet package;
  • --control-plane-endpoint: the fixed access endpoint of the control plane, which can be an IP address or a DNS name and is used as the API Server address in the kubeconfig files of the cluster administrator and cluster components; this option can be omitted when deploying a single control plane;
  • --pod-network-cidr: the Pod network address range, given as a CIDR-format network address; the default for the Flannel network plugin is 10.244.0.0/16 and the default for the Project Calico plugin is 192.168.0.0/16;
  • --service-cidr: the Service network address range, given as a CIDR-format network address; the default is 10.96.0.0/12; usually, only network plugins such as Flannel require this address to be specified manually;
  • --apiserver-advertise-address: the IP address the apiserver advertises to other components, which should generally be the Master node's IP address for in-cluster communication; 0.0.0.0 means all available addresses on the node;
  • --token-ttl: the lifetime of the shared token, which defaults to 24 hours; 0 means it never expires. To prevent a token leaked through insecure storage from endangering cluster security, it is recommended to set an expiration time. If this option is not set and the token has expired when other nodes need to join the cluster, the token and the node join command can be regenerated with the command shown below.
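
# standard kubeadm command to regenerate a token and print the full join command
kubeadm token create --print-join-command
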
Operation steps after initialization

For new users of the Kubernetes system, whichever of the above methods is used, please record the operation steps prompted at the end of the output after the command finishes, in particular the kubeadm join command. The following is an example of the output the user needs to record; it prompts the next required steps.

# The following is the prompt information for successfully completing the initialization of the first control plane node and the subsequent steps to be completed
Your Kubernetes control-plane has initialized successfully!

# In order to complete the initialization operation, the administrator needs to manually complete several necessary steps
To start using your cluster, you need to run the following as a regular user:

# The first step prompts the kubeconfig configuration file used by the Kubernetes cluster administrator to authenticate to the Kubernetes cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# We can also not make the above settings, but use the environment variable KUBECONFIG to specify the default kubeconfig for kubectl, etc.;
Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

# The second step prompts to deploy a network plug-in for the Kubernetes cluster, and the specific plug-in selected depends on the administrator;
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

# The third step prompts to add additional control plane nodes to the cluster, but this step will be skipped in this article, and its implementation will be introduced in other articles.
You can now join any number of the control-plane node running the following command on each as root:

# The fourth step prompts to add a worker node to the cluster
Then you can join any number of worker nodes by running the following on each as root:

# Run commands similar to the following as the root user on each working node where kubeadm and other packages are deployed;
# Tip: When using docker-ce as a container runtime in combination with cri-dockerd, you usually need to use the following commands
# Additional "--cri-socket unix:///run/cri-dockerd.sock" option;
kubeadm join 192.168.32.200:6443 --token ivu3t7.pogk70dd5pualoz2 \
--discovery-token-ca-cert-hash sha256:3edb3c8e3e6c944afe65b2616d46b49305c1420e6967c1fab966ddf8f149502d

Configure kubectl

kubectl is the command-line client of kube-apiserver. It implements almost all management operations other than system deployment and is one of the commands Kubernetes administrators use most frequently. kubectl can perform management operations only after being authenticated and authorized by the API Server. A cluster deployed by kubeadm generates an authentication configuration file with administrator privileges, /etc/kubernetes/admin.conf, which kubectl loads from the default "$HOME/.kube/config" path. Of course, users can also specify a different location with the --kubeconfig option of the kubectl command.

Copy the configuration file authenticated as the Kubernetes system administrator to the home directory of the target user (for example, the current user root):

~# mkdir ~/.kube

~# cp /etc/kubernetes/admin.conf ~/.kube/config
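
With the kubeconfig in place, a quick sanity check (an added verification, not in the original steps) is:

~# kubectl cluster-info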

Deploying the network plugin

The Pod network on a Kubernetes system is implemented by third-party plugins, of which there are dozens; the better-known ones include flannel, calico, canal, and kube-router. The flannel project, originally from CoreOS, provides a simple and easy-to-use implementation. The following commands deploy flannel online onto the Kubernetes system:

First, download the flanneld binary that matches the system and hardware platform to each node and place it in the /opt/cni/bin/ directory, as the commands below do. flanneld-amd64 is chosen here; the latest release at the time of writing is v0.21.3, while the commands below fetch v0.20.2, so adjust the version as needed. Execute the following commands on each node of the cluster:

~# mkdir /opt/cni/bin/

~# curl -L https://github.com/flannel-io/flannel/releases/download/v0.20.2/flanneld-amd64 -o /opt/cni/bin/flanneld

~# chmod +x /opt/cni/bin/flanneld

Tip: The address to download flanneld is https://github.com/flannel-io/flannel/releases

Then, run the following command on the first initialized master node k8s-master01 to deploy kube-flannel to Kubernetes.

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.21.3/Documentation/kube-flannel.yml

Then use the following command to confirm that the Pod status in the output is "Running", similar to the following command and its output:

root@k8s-master01:~# kubectl get pods -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-jgkxd   1/1     Running   0          2m59s
root@k8s-master01:~#
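
It can also be useful to confirm that the CoreDNS Pods in the kube-system namespace reach the Running state once the network plugin is up (an extra check added here):

root@k8s-master01:~# kubectl get pods -n kube-system -l k8s-app=kube-dns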

Verify that the master node is ready

kubectl get nodes

The above command should produce output similar to the following, indicating that the k8s-master01 node is ready:

root@k8s-master01:~# kubectl get nodes
NAME               STATUS   ROLES           AGE   VERSION
k8s-master01.org   Ready    control-plane   62m   v1.26.3
root@k8s-master01:~#

Add a node to the cluster

The following two steps need to be completed on k8s-node01, k8s-node02 and k8s-node03 respectively.

1. If the Swap device has not been disabled, edit the kubelet configuration file /etc/default/kubelet and set it to ignore the error caused by Swap being enabled. The content is as follows: KUBELET_EXTRA_ARGS="--fail-swap-on=false"

2. Add the node to the cluster created above by using the kubeadm join command recorded during the initialization of the master node;

root@k8s-node01:/opt/cni/bin# kubeadm join 192.168.32.200:6443 --token ivu3t7.pogk70dd5pualoz2 --discovery-token-ca-cert-hash sha256:3edb3c8e3e6c944afe65b2616d46b49305c1420e6967c1fab966ddf8f149502d --cri-socket unix:///run/cri-dockerd.sock

Verify the node addition result

After each node has been added, the result can be verified with kubectl. The following command and its output were run after all three nodes had been added; the output shows that the three Worker Nodes are ready.

~# kubectl get nodes

root@k8s-master01:~# kubectl get nodes
NAME               STATUS   ROLES           AGE    VERSION
k8s-master01.org   Ready    control-plane   80m    v1.26.3
k8s-node01.org     Ready    <none>          15m    v1.26.3
k8s-node02.org     Ready    <none>          114s   v1.26.3
k8s-node03.org     Ready    <none>          11m    v1.26.3
root@k8s-master01:~#

Test application orchestration and service access

At this point, the infrastructure of a Kubernetes cluster with one master and three workers has been deployed, and users can proceed to test its core functions.

root@k8s-master01:~# kubectl create deployment test-nginx --image=nginx:latest --replicas=3
deployment.apps/test-nginx created
root@k8s-master01:~#
root@k8s-master01:~# kubectl create service nodeport test-nginx --tcp=80:80
service/test-nginx created
root@k8s-master01:~#
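
Optionally, verify that the replicas were scheduled across the worker nodes (an added check; kubectl create deployment labels the Pods with app=test-nginx):

root@k8s-master01:~# kubectl get pods -l app=test-nginx -o wide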

Then, use the following command to find the NodePort used by the test-nginx Service object, so that it can be accessed from outside the cluster:

root@k8s-master01:~# kubectl get svc -l app=test-nginx
NAME         TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
test-nginx   NodePort   10.100.229.239   <none>        80:31888/TCP   50s
root@k8s-master01:~#

Users can therefore access the web application from outside the cluster through the URL "http://NodeIP:31888", for example by visiting "http://192.168.32.203:31888" in a browser outside the cluster.
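
For example, from a machine that can reach the node network, the service can be checked with curl (this check is an addition; the node IP and port are the ones shown above and will differ in other environments):

curl -I http://192.168.32.203:31888
# an HTTP/1.1 200 OK response with the nginx Server header indicates the service is reachable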

Summary

This article has presented the concrete steps for deploying a distributed Kubernetes cluster and finally tested deploying and running an application on it. When readers test on their own, the versions of cri-dockerd, docker-ce, flannel, kubeadm, kubectl, and kubelet may differ, which may lead to some differences in configuration; please consult the relevant documentation for the specific adjustments.
