Article directory
- Summary
- Problem
- Solution
- keepalived settings
- haproxy settings
- Deploy Kubernetes control plane cluster
- Reference
Summary
This article documents how to use keepalived and haproxy to implement high availability (HA) of the Kubernetes Control Plane.
VRRP – Virtual Router Redundancy Protocol
VIP – Virtual IP (floating IP)
Problem
The Kubernetes Control Plane needs to be highly available (HA); that is, the Control Plane is deployed as a cluster, and if a single Control Plane node in that cluster fails, the Kubernetes Control Plane as a whole keeps working.
Solution
Kubernetes worker nodes connect to the Kubernetes Control Plane cluster through a load balancer. The load balancer is implemented with keepalived and haproxy: keepalived provides a virtual IP (VIP) via VRRP, and haproxy distributes API server traffic across the control-plane nodes.
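For reference, the topology used throughout this article is as follows (the host names, addresses, and ports are the ones that appear in the configuration below):

Master   192.168.238.130   keepalived + haproxy + kube-apiserver (port 6443)
Node1    192.168.238.131   keepalived + haproxy + kube-apiserver (port 6443)
Node2    192.168.238.132   keepalived + haproxy + kube-apiserver (port 6443)
VIP      192.168.238.100   held by whichever node keepalived elects; haproxy listens on port 4300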
keepalived settings
Install and enable keepalived on the three hosts (Master, Node1 and Node2). These three nodes also run haproxy and the Kubernetes Control Plane cluster.
yum install keepalived
systemctl enable keepalived
Configure keepalived:
vim /etc/keepalived/keepalived.conf
The configuration is as follows:
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 51
    priority 100                     # priority
    advert_int 1
    mcast_src_ip 192.168.238.130     # local IP
    unicast_src_ip 192.168.238.130   # local IP
    unicast_peer {
        192.168.238.131              # the other two nodes
        192.168.238.132
    }
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.238.100/24           # VIP
    }
    track_script {
        check_apiserver
    }
}
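The track_script section refers to check_apiserver, so keepalived also needs a matching vrrp_script definition. The kubeadm high-availability guide listed in the Reference section uses roughly the following block, placed above vrrp_instance (the interval/weight/fall/rise values follow that guide and can be tuned):

vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"   # the health-check script created below
    interval 3
    weight -2
    fall 10
    rise 2
}

On Node1 and Node2 the same keepalived.conf is used with a few differences: state BACKUP instead of MASTER, a lower priority (for example 99 on Node1 and 98 on Node2), and mcast_src_ip / unicast_src_ip / unicast_peer adjusted so that each node uses its own address as the source and lists the other two nodes as peers.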
Create the API server health-check script used by keepalived:
vim /etc/keepalived/check_apiserver.sh
chmod +x /etc/keepalived/check_apiserver.sh
The script is as follows:
#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:4300/ -o /dev/null || errorExit "Error GET https://localhost:4300/"
if ip addr | grep -q 192.168.238.100; then
    curl --silent --max-time 2 --insecure https://192.168.238.100:4300/ -o /dev/null || errorExit "Error GET https://192.168.238.100:4300/"
fi
Start the keepalived service
systemctl start keepalived
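The service state and VRRP transitions can then be inspected with the commands below. Note that until haproxy is listening on port 4300, the check_apiserver script above fails, so the instance may stay in FAULT state and the VIP may only appear once haproxy is started in the next section.

systemctl status keepalived
journalctl -u keepalived --no-pager | tail -n 20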
haproxy settings
Install and enable haproxy
yum install haproxy
systemctl enable haproxy
Configure haproxy
vim /etc/haproxy/haproxy.cfg
The configuration content is as follows:
...
#---------------------------------------------------------------------
# apiserver frontend which proxies to the control plane nodes
#---------------------------------------------------------------------
frontend apiserver
    bind *:4300
    mode tcp
    option tcplog
    default_backend apiserver
...
#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
    #server ${HOST1_ID} ${HOST1_ADDRESS}:${APISERVER_SRC_PORT} check
    # [...]
    server Master 192.168.238.130:6443 check
    server Node1 192.168.238.131:6443 check
    server Node2 192.168.238.132:6443 check
...
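Before starting the service, the file can be syntax-checked (a quick sanity check; -c validates the configuration without actually starting haproxy):

haproxy -c -f /etc/haproxy/haproxy.cfg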
System settings:
[root@Master ~]# echo net.ipv4.ip_nonlocal_bind=1 >> /etc/sysctl.d/haproxy-keepalived.conf
[root@Master ~]# echo net.ipv4.ip_forward=1 >> /etc/sysctl.d/haproxy-keepalived.conf
[root@Master ~]# cat /etc/sysctl.d/haproxy-keepalived.conf
net.ipv4.ip_nonlocal_bind=1
net.ipv4.ip_forward=1
[root@Master ~]# sysctl -p /etc/sysctl.d/haproxy-keepalived.conf
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
[root@Master ~]#
Start haproxy service
systemctl start haproxy
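To confirm haproxy is running and bound to the frontend port (a quick check using standard tools):

systemctl status haproxy
ss -lntp | grep 4300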
View virtual IP binding:
[root@Master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:9e:d5:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.238.130/24 brd 192.168.238.255 scope global dynamic noprefixroute ens160
       valid_lft 1790sec preferred_lft 1790sec
    inet 192.168.238.100/24 scope global secondary ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::b52:173:210c:48d9/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@Master ~]# ip a | grep 192.168.238.100
    inet 192.168.238.100/24 scope global secondary ens160
[root@Master ~]#
Verify VIP and haproxy are working:
[root@Master ~]# curl --insecure https://localhost:4300
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
[root@Master ~]# curl --insecure https://192.168.238.100:4300
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
[root@Master ~]#
Entering https://192.168.238.100:4300/ in a browser returns the same content as above. Entering http://192.168.238.100:4300/ instead returns "Client sent an HTTP request to an HTTPS server."
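To check that the VIP actually fails over, keepalived can be stopped on the node that currently holds the VIP (Master in the output above) and the address looked up on the other nodes; this is only a rough test, and the exact timing depends on the keepalived settings:

# on Master (the current VIP holder)
systemctl stop keepalived
# on Node1 or Node2, the VIP should appear within a few seconds
ip a | grep 192.168.238.100
curl --insecure https://192.168.238.100:4300
# afterwards, restore keepalived on Master
systemctl start keepalived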
Deploy Kubernetes control plane cluster
Based on the above configuration, initialize the first control-plane node with the following command:
# Run on one of the three nodes; here it is run on Master.
# The control-plane endpoint is the VIP plus the haproxy frontend port (4300), matching the join command used later.
kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint 192.168.238.100:4300
Follow the prompts to run the following commands on the three nodes:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
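If the initialization used the VIP endpoint as above, kubectl should now report the control plane behind the load balancer (the exact URL depends on the --control-plane-endpoint value):

kubectl cluster-info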
Copy the certificate files to the remaining nodes, as follows:
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@Node1:/etc/kubernetes/pki/
mkdir -p /etc/kubernetes/pki/etcd          # run on Node1 so the destination directory exists
scp /etc/kubernetes/pki/etcd/ca.* root@Node1:/etc/kubernetes/pki/etcd/
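The same files must also be copied to Node2. One way to handle both nodes from Master is a small loop (a sketch assuming root SSH access to Node1 and Node2):

for node in Node1 Node2; do
    ssh root@$node "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@$node:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* root@$node:/etc/kubernetes/pki/etcd/
done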
Before joining the other two Kubernetes Control Plane nodes, deploy the flannel network add-on:
kubectl apply -f kube-flannel.yml
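Once the manifest has been applied, the flannel DaemonSet pods can be checked (recent kube-flannel manifests deploy into the kube-flannel namespace, as the pod listing later in this article also shows):

kubectl -n kube-flannel get pods -o wide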
According to the prompts, run the following command on the remaining two nodes:
kubeadm join 192.168.238.100:4300 --token si5oek.mbrw418p8mr357qt \
    --discovery-token-ca-cert-hash sha256:0e23eb637e09afc4c6dbb1f891409b314d5731e46fe33d84793ba2d58da006d6 \
    --control-plane
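The token and discovery hash come from the kubeadm init output. If the token has expired (the default lifetime is 24 hours), a fresh join command can be printed with the command below; for a control-plane join, append --control-plane to it and make sure the certificates were copied as in the previous step:

kubeadm token create --print-join-command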
After the remaining nodes have joined and all components have started successfully, the result is as follows:
[root@Master ~]# kubectl get nodes -o wide
NAME     STATUS   ROLES           AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                               KERNEL-VERSION          CONTAINER-RUNTIME
master   Ready    control-plane   3h57m   v1.27.3   192.168.238.130   <none>        Red Hat Enterprise Linux 8.3 (Ootpa)   4.18.0-240.el8.x86_64   containerd://1.6.21
node1    Ready    control-plane   3h53m   v1.27.3   192.168.238.131   <none>        Red Hat Enterprise Linux 8.3 (Ootpa)   4.18.0-240.el8.x86_64   containerd://1.6.21
node2    Ready    control-plane   3h51m   v1.27.3   192.168.238.132   <none>        Red Hat Enterprise Linux 8.3 (Ootpa)   4.18.0-240.el8.x86_64   containerd://1.6.21
[root@Master ~]#
The core components here are as follows:
[root@Master ~]# kubectl get pods -o wide -A
NAMESPACE      NAME                              READY   STATUS    RESTARTS        AGE     IP                NODE     NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-6n8rh             1/1     Running   0               3m11s   192.168.238.131   node1    <none>           <none>
kube-flannel   kube-flannel-ds-c65rz             1/1     Running   0               80s     192.168.238.132   node2    <none>           <none>
kube-flannel   kube-flannel-ds-ss8pw             1/1     Running   0               10m     192.168.238.130   master   <none>           <none>
kube-system    coredns-5d78c9869d-cw5cz          1/1     Running   0               17m     10.244.1.96       node1    <none>           <none>
kube-system    coredns-5d78c9869d-z99r8          1/1     Running   0               17m     10.244.1.95       node1    <none>           <none>
kube-system    etcd-master                       1/1     Running   13              17m     192.168.238.130   master   <none>           <none>
kube-system    etcd-node1                        1/1     Running   0               3m10s   192.168.238.131   node1    <none>           <none>
kube-system    etcd-node2                        1/1     Running   0               79s     192.168.238.132   node2    <none>           <none>
kube-system    kube-apiserver-master             1/1     Running   11              17m     192.168.238.130   master   <none>           <none>
kube-system    kube-apiserver-node1              1/1     Running   1               3m11s   192.168.238.131   node1    <none>           <none>
kube-system    kube-apiserver-node2              1/1     Running   2               79s     192.168.238.132   node2    <none>           <none>
kube-system    kube-controller-manager-master    1/1     Running   20 (2m58s ago)  17m     192.168.238.130   master   <none>           <none>
kube-system    kube-controller-manager-node1     1/1     Running   1               3m11s   192.168.238.131   node1    <none>           <none>
kube-system    kube-controller-manager-node2     1/1     Running   2               79s     192.168.238.132   node2    <none>           <none>
kube-system    kube-proxy-87gfk                  1/1     Running   0               17m     192.168.238.130   master   <none>           <none>
kube-system    kube-proxy-crjc4                  1/1     Running   0               80s     192.168.238.132   node2    <none>           <none>
kube-system    kube-proxy-mnl2d                  1/1     Running   0               3m11s   192.168.238.131   node1    <none>           <none>
kube-system    kube-scheduler-master             1/1     Running   20 (2m55s ago)  17m     192.168.238.130   master   <none>           <none>
kube-system    kube-scheduler-node1              1/1     Running   1               3m11s   192.168.238.131   node1    <none>           <none>
kube-system    kube-scheduler-node2              1/1     Running   2               79s     192.168.238.132   node2    <none>           <none>
[root@Master ~]#
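As a final check of the high availability itself, one control-plane node can be shut down and the cluster queried through the VIP from one of the remaining nodes (with three members, etcd keeps quorum while a single node is down, and the haproxy health check removes the failed apiserver from the backend):

# after shutting down e.g. Node2, run on Master or Node1:
kubectl get nodes
curl --insecure https://192.168.238.100:4300/healthz   # on a default kubeadm install, /healthz is readable anonymously and should return ok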
Reference
Github: High Availability Considerations
Kubernetes: Options for Highly Available Topology
Liu Da’s blog: haproxy + keepalived configuration example
Alibaba Cloud: Build a high-availability load balancer: haproxy + keepalived
HAproxy and keepAlived for Multiple Kubernetes Master Nodes
Create a Highly Available Kubernetes Cluster Using Keepalived and HAproxy
Install and Configure a Multi-Master HA Kubernetes Cluster with kubeadm, HAProxy and Keepalived on CentOS 7
CSDN: haproxy + keepalived to build a high-availability k8s cluster
CSDN: kubernetes learning-install haproxy and configure keepalived high availability
CSDN: HAProxy + Keepalived configuration architecture analysis
cnblog: Principles and features of haproxy + keepalived
Analysis of the coordination and cooperation principle between Keepalived and HaProxy
High availability Kubernetes cluster on bare metal – part 2