Containerd + Kubernetes: Building a k8s Cluster

  • Document description
  • Version and download address of the installed software
  • Environment description
    • Server preparation
    • Load balancing IP address preparation
  • Installation steps
    • Environment settings
      • Turn off the firewall
      • Turn off SELinux
      • Close the swap partition
      • Set up hostname resolution
      • Set the hostname
      • Load the br_netfilter module
      • Pass bridged IPv4 traffic to iptables
      • Upgrade the operating system kernel
        • Import the elrepo GPG key
        • Install the elrepo YUM repository
        • Install the kernel-lt version
        • Set the grub2 default boot entry to 0
        • Regenerate the grub2 boot file
        • Reboot
      • Install ipset and ipvsadm
      • Configure ipvsadm module loading
    • Install containerd
      • Download containerd
      • Unzip containerd
      • Generate the containerd configuration file
      • Start containerd and set it to start automatically
    • Install runc
      • Download libseccomp
      • Install libseccomp
      • Download runc
      • Install runc
    • Install kubernetes
      • Configure the yum source of kubernetes
      • Install kubernetes
      • Configure the cgroup driver to systemd
  • Build a kubernetes cluster
    • Initialize the cluster
    • Set kubelet to start automatically at boot
  • Install the Calico network plugin
  • Install the MetalLB load balancer
    • Modify the configuration file of kube-proxy
    • Install MetalLB
    • Assign IP addresses to MetalLB
  • Deploy an application
    • Deploy nginx
    • Expose it to external network access

Video tutorial address: https://space.bilibili.com/3461573834180825/channel/seriesdetail?sid=3316691

Document description

I previously wrote an article on installing kubernetes on top of docker. In this document we use containerd to install kubernetes instead; compared with docker, containerd runs containers more efficiently and is compatible with docker images. The docker-based article is at: https://blog.csdn.net/m0_51510236/article/details/123477488

Before installing kubernetes, you need to prepare at least three virtual or physical machines with the following configuration:

  • CPU: 2 cores
  • Memory: 2GB
  • Hard disk: 50GB

This is the minimum configuration; you can raise it according to your own hardware conditions.

Version and download address of the installed software

The corresponding software versions and download addresses are as follows:

Software (Description)                      Version     Download Address
CentOS (operating system)                   7-2207-02   Aliyun mirror / Tsinghua mirror
kubernetes (container orchestration tool)   1.26.5      Provided in the document
containerd (container runtime daemon)       1.6.21      GitHub (wget command given below)
libseccomp (seccomp library)                2.5.4       GitHub (wget command given below)
runc (container runtime)                    1.1.7       GitHub (wget command given below)
Calico (network plugin)                     3.25        Provided in the document
MetalLB (load-balancing plugin)             v0.13.9     Provided in the document

Some of the software is downloaded from servers outside China; if the download is too slow, you can message me privately to get it.

Environment description

Server preparation

Server list:

Server Name   IP Address      Configuration               Server Purpose
k8s-master    192.168.79.50   2 cores, 2GB RAM, 50GB disk  kubernetes master node
k8s-node01    192.168.79.52   2 cores, 2GB RAM, 50GB disk  kubernetes worker node 1
k8s-node02    192.168.79.54   2 cores, 2GB RAM, 50GB disk  kubernetes worker node 2

Make sure all three servers are up and that they can ping each other.
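For a quick sanity check, you can run a small loop like this from each machine (adjust the IP addresses to your own):

# Ping each server once and report reachability
for ip in 192.168.79.50 192.168.79.52 192.168.79.54; do
  ping -c 1 -W 1 "$ip" > /dev/null && echo "$ip reachable" || echo "$ip UNREACHABLE"
done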

Load balancing IP address preparation

MetalLB's LoadBalancer needs some reserved IP addresses to hand out as load-balancer IPs. The addresses reserved in this article are 192.168.79.60~192.168.79.69.

Installation steps

Environment settings

Next, we need to run the same commands on all three servers at the same time. In Xshell you can click Tool (T) -> Send key input to (K) -> Linked session (C) on the toolbar so that one command executes in all three terminals. Other ssh tools have similar features; check the documentation of yours.

There are a lot of steps; be careful not to miss any.

Turn off the firewall

Turn off the firewall with the following command:

systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld

Send the commands to all three terminals at the same time; the status output should show that the firewall is stopped and disabled on all of them:

Turn off SELinux

Turn off SELinux with the following command:

sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
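You can verify the result with getenforce; it should print Permissive now (set by setenforce 0) and Disabled after the next reboot:

getenforce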

Close the swap partition

Close the swap partition with the following command:

# close permanently
swapLine=$(cat /etc/fstab | grep swap | awk '{print $1}')
sed -i "s/$swapLine/#$swapLine/" /etc/fstab
# temporarily close
swapoff -a

After this succeeds, the free -h command will show that the swap partition size has changed to 0.
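For example:

free -h
# The Swap line should show 0B for total, used and free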

Set up hostname resolution

Use the following command to set up hostname resolution (remember to change the entries to match your own IP addresses):

cat >> /etc/hosts << EOF
192.168.79.50 k8s-master
192.168.79.52 k8s-node01
192.168.79.54 k8s-node02
EOF

Set the hostname

The hostname has to be set individually on each host, so temporarily stop sending commands to all sessions by clicking the OFF button in the upper right corner of the terminal:

Run the command matching each server:

  • 192.168.79.50: hostnamectl set-hostname k8s-master
  • 192.168.79.52: hostnamectl set-hostname k8s-node01
  • 192.168.79.54: hostnamectl set-hostname k8s-node02

After setting the hostnames, remember to click the ON button in the upper right corner of each terminal to resume sending commands to all terminals at the same time:

Load the br_netfilter module

Kernel IPv4 forwarding requires the br_netfilter module, but this module is not loaded by default, so we configure it to load automatically at boot:

# Set the module to be automatically loaded at startup
cat >> /etc/rc.d/rc.local << EOF
/usr/sbin/modprobe br_netfilter
EOF
chmod +x /etc/rc.d/rc.local
# load immediately
modprobe br_netfilter
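You can confirm that the module is loaded:

lsmod | grep br_netfilter
# Should list br_netfilter (and the bridge module it depends on)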

Pass bridged IPv4 traffic to iptables

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
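To double-check that the settings took effect:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# Both should print a value of 1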

Upgrade the operating system kernel

Because the kubernetes version installed here is 1.26.5, one of the newest releases at the time of writing, installation may fail on an old kernel, so in this step we upgrade the kernel to the latest long-term stable version.

Import the elrepo GPG key

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

Install the elrepo YUM repository

yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm

Install the kernel-lt version

kernel-ml is the mainline (latest) version; kernel-lt is the long-term maintenance version. Here we install kernel-lt:

yum --enablerepo="elrepo-kernel" -y install kernel-lt.x86_64

Set the grub2 default boot entry to 0

The newly installed kernel is the first menu entry (index 0), so make it the default:

grub2-set-default 0

Regenerate the grub2 boot file

grub2-mkconfig -o /boot/grub2/grub.cfg

Reboot

After all the above commands have been executed, use the reboot command to restart the machine so that the upgraded kernel takes effect.

After restarting, use uname -r to view the system kernel version. Different upgrade times may lead to different stable kernel versions:
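For example:

uname -r
# Expect an elrepo kernel version (kernel-lt was the 5.4 series at the time of writing)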

Install ipset and ipvsadm

When there are many requests, responses can become slow; installing these two packages lets kube-proxy use ipvs, which improves forwarding performance to a certain extent.

yum install -y ipset ipvsadm

Configure ipvsadm module loading

Add the modules that need to be loaded

cat > /etc/sysconfig/modules/ipvs.modules << EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

Make the script executable, run it, and check that the modules are loaded:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

View the results:

The environment setup is now complete, and we can formally start installing the software.

Install containerd

Download containerd

containerd can be downloaded with the following command:

wget https://github.com/containerd/containerd/releases/download/v1.6.21/cri-containerd-cni-1.6.21-linux-amd64.tar.gz -O /usr/local/src/cri-containerd-cni-1.6.21-linux-amd64.tar.gz

Because the download on github is slow, in order to save time, I downloaded it in advance and uploaded it to the server:

Unzip containerd

containerd needs to be unpacked into the root directory; use the command:

tar -zxvf /usr/local/src/cri-containerd-cni-1.6.21-linux-amd64.tar.gz -C /

You can check the containerd version number:

containerd --version

The result:

Generate the containerd configuration file

Generate a default configuration file for containerd with the following command:

mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml

In the configuration file (/etc/containerd/config.toml) you need to change the version and registry address of sandbox_image, because the default image is hosted on Google's registry, which cannot be accessed from mainland China.

Default: sandbox_image = "registry.k8s.io/pause:3.6"

Target value: sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
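If you prefer not to edit the file by hand, the same change can be made with sed (assuming your file still contains the default value shown above):

# Replace the default sandbox image with the Aliyun mirror copy
sed -i 's#registry.k8s.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.9#' /etc/containerd/config.toml
# Verify the change
grep sandbox_image /etc/containerd/config.toml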

Start containerd and set it to start automatically

systemctl enable --now containerd

Install runc

containerd ships with its own runc, but that build has some problems, so we install the stable release of runc ourselves:

Download libseccomp

Because runc depends on libseccomp (the library that wraps the kernel's seccomp syscall-filtering facility), we first download the libseccomp source package, which is published among the runc release assets:

wget https://github.com/opencontainers/runc/releases/download/v1.1.7/libseccomp-2.5.4.tar.gz

Download completed:

Install libseccomp

First, let’s decompress the source package of libseccomp:

tar -zxvf libseccomp-2.5.4.tar.gz

Then install the C compiler toolchain and gperf, which libseccomp needs to build:

yum install -y gcc gcc-c++ gperf

Execute the installation

cd libseccomp-2.5.4
./configure
make && make install

Check if the installation is successful:

find / -name libseccomp.so

search result:

Download runc

Download with the following command:

wget https://github.com/opencontainers/runc/releases/download/v1.1.7/runc.amd64

In the same way, because github downloads are slow, I downloaded it in advance and uploaded it to the server:

Install runc

# Delete the runc that comes with containerd
rm -rf /usr/local/sbin/runc
# Give execution permission to our own runc
chmod +x runc.amd64
# Copy runc to the installation directory
mv runc.amd64 /usr/local/sbin/runc

Run runc again and confirm that no error is reported:
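For example:

runc --version
# Should report runc version 1.1.7 (and the libseccomp version it was built with)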

Install kubernetes

Configure the yum source of kubernetes

We need to configure Alibaba Cloud's yum source, because the default source in the official documentation is hosted by Google and cannot be accessed from mainland China. Run the following command to configure it:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubernetes

After configuring the yum source, you can install the kubernetes components directly with yum:

yum install -y kubeadm-1.26.5 kubectl-1.26.5 kubelet-1.26.5
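You can confirm the installed version before proceeding:

kubeadm version -o short
# Should print v1.26.5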

Configure the cgroup driver to systemd

We need to change the kubelet cgroup driver to systemd, i.e. make /etc/sysconfig/kubelet contain KUBELET_EXTRA_ARGS="--cgroup-driver=systemd". Use the following command:

sed -i 's/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"/g' /etc/sysconfig/kubelet

After modification:
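A quick way to verify:

cat /etc/sysconfig/kubelet
# Expected content: KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"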

Build a kubernetes cluster

We finished installing kubernetes above; now we start building the cluster.

Initialize the cluster

This step only needs to be executed on the master, so click the OFF button on the master to turn off sending commands to all terminals:

We can use the following command to initialize (note that the IP address of the master is modified):

kubeadm init \
--apiserver-advertise-address=192.168.79.50 \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.26.5 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket=unix:///var/run/containerd/containerd.sock

Explanation of the options:

  • --apiserver-advertise-address=192.168.79.50: the address of k8s-master; change it to your own master's address
  • --image-repository=registry.aliyuncs.com/google_containers: the default image registry is Google's, so we switch to Alibaba Cloud's mirror
  • --kubernetes-version=v1.26.5: the kubernetes version number
  • --service-cidr=10.96.0.0/12: the network segment for Services
  • --pod-network-cidr=10.244.0.0/16: the network segment for pods; note that this address is also used later when installing the Calico network plugin
  • --cri-socket=unix:///var/run/containerd/containerd.sock: point the CRI socket at containerd's sock

Seeing this means that the initialization is successful:

Follow the prompts and run these commands locally on the master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The join command for the worker nodes needs one change: --cri-socket=unix:///var/run/containerd/containerd.sock must be appended. So on each worker node execute (your token and cert hash will differ):

kubeadm join 192.168.79.50:6443 --token 9i3lvb.2m4aoo1672t4edra \
--discovery-token-ca-cert-hash sha256:a41c37db5378cd1fad77a2537a3dd64117465c4a33c9de7825abc3daa847b8d0 \
--cri-socket=unix:///var/run/containerd/containerd.sock

You can view the result of joining the cluster:

Use the kubectl get nodes -o wide command to view the nodes that have joined the cluster:

Set kubelet to start automatically at boot

You need to set kubelet to start automatically at boot, so that kubernetes will run automatically when booting

systemctl enable --now kubelet

Install the Calico network plugin

The official website address of the corresponding version of Calico: https://docs.tigera.io/calico/3.25/getting-started/kubernetes/quickstart

You can install Calico on the master with the command from the official website:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml

View execution results:

It takes a while to install because the plugin images have to be pulled; you can check the result with:

kubectl get all -n tigera-operator

Seeing this means the installation is complete:

You also need to install the custom resources. That file cannot be applied directly; you need to download it and change something first:

Download with command:

wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml
cat custom-resources.yaml

Looking at the content, we need to change the pod network segment to the 10.244.0.0/16 we set when initializing the cluster:

You can modify it with the command:

sed -i 's/cidr: 192.168.0.0/cidr: 10.244.0.0/g' custom-resources.yaml
cat custom-resources.yaml

Check out the modified results:

It can now be executed directly:

kubectl create -f custom-resources.yaml

View execution results:

It takes a long time, so monitor the creation result:

watch kubectl get all -o wide -n calico-system

When the STATUS all changes to Running, it means the installation is successful:

Now you can start the next step

Install the MetalLB load balancer

MetalLB is a load-balancer implementation for kubernetes: it lets Services of type LoadBalancer be exposed. Official website: https://metallb.universe.tf/installation/

Modify the configuration file of kube-proxy

Following the official documentation, edit the kube-proxy ConfigMap:

kubectl edit configmap -n kube-system kube-proxy

The key change is in the ipvs section: set strictARP to true, which MetalLB requires when kube-proxy runs in IPVS mode.
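If you prefer a non-interactive edit, the MetalLB documentation suggests a pipeline along these lines:

# Fetch the ConfigMap, flip strictARP, and apply it back
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system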

Install MetalLB

Execute the following command to install:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.9/config/manifests/metallb-native.yaml

View execution results (files downloaded in advance):

Use the command to view the deployment results:

kubectl get all -n metallb-system

Similarly, if all STATUS are in the Running state, the installation is successful:

Assign IP addresses to MetalLB

We may create multiple externally exposed services, so we assign MetalLB several unused IP addresses. Create a new file metallb-ip-pool.yaml with the following content:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  # Pay attention to change to the IP address you assigned for MetalLB
  - 192.168.79.60-192.168.79.69

---

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool

View file content:

Execute this file:

kubectl apply -f metallb-ip-pool.yaml

View execution results:

Next we can deploy the application

Deploy an application

Deploy nginx

We will deploy an nginx application and expose its access address to the external network through MetalLB's address pool. Deploy nginx with the command:

kubectl create deployment nginx --image=nginx

This command pulls the latest nginx image from Docker Hub and runs it. View the deployment status with:

kubectl get deploy,pod -o wide

STATUS is Running:

Expose it to external network access

The application is deployed successfully. Now we create a LoadBalancer so it can be accessed from the external network. Expose nginx with a Service of type LoadBalancer:

kubectl expose deployment nginx --port=80 --type=LoadBalancer

The output shows that the Service was exposed successfully:

Use the command to view the external network IP address of the Service:
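For example:

kubectl get service nginx
# The EXTERNAL-IP column shows the address MetalLB allocated from the pool (192.168.79.60 here)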

Next, you can access the IP address 192.168.79.60, and you can see that nginx responded successfully:

Alright, class dismissed. (Do give me a follow!)
